I would call such a number "1 - 0.999999...". It's not like you've proven that all real numbers can be expressed in decimal notation.
Whether 0.999... = 1 is a question of notation. To be useful, we can define it as a limit, in which case the equality is true. However, if we do not define it as a limit, then the equality is false. For instance, we could define it as the sum 9*10^-1 + 9*10^-2 + 9*10^-3 + ..., taken term by term rather than as a limit. Under that reading the sum never quite reaches 1: each partial sum gets one step closer to 1, but at no point does it equal 1. You can't even say that "at infinity" the sum reaches 1, because any talk of something happening at infinity implicitly refers to limiting behavior; infinity isn't a real number, and it is only for convenience that we sometimes treat it like one.
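A quick way to see the "each step closer, never reaching" behavior of the partial sums is to compute a few of them exactly (a minimal Python sketch; the variable names are mine):

Code:
from fractions import Fraction

# Partial sums of 9*10^-n for n = 1..N.  Exactly, the Nth partial sum
# equals 1 - 10^-N: it falls short of 1 by 10^-N but never reaches it.
for N in (1, 2, 5, 10):
    partial = sum(Fraction(9, 10**n) for n in range(1, N + 1))
    print(N, partial, 1 - partial)

Every finite N leaves a gap of 10^-N; it is only the limit of these partial sums that equals 1, which is exactly the point at issue.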
Note that this isn't special to 0.999...; it applies to all repeating decimals: 0.111..., 0.010101..., 0.0055555..., etc.
But 0.999... is never useful unless it is defined as a limit, so we may as well define it that way, which makes it equal to 1. This lets us do useful things like write any rational number in the form:
Code:
[FONT=System][FONT="Lucida Console"] _
a.bc[/FONT][/font]
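To make it concrete, here is a rough sketch of how the bar notation turns into an exact fraction once the repeating tail is read as a limit (the helper repeating_to_fraction is just an illustrative name, not a standard function):

Code:
from fractions import Fraction

def repeating_to_fraction(whole: int, fixed: str, block: str) -> Fraction:
    # Exact value of whole.fixed with 'block' repeated forever,
    # reading the repeating tail as the limit of its partial sums.
    value = Fraction(whole)
    if fixed:                      # finitely many non-repeating digits
        value += Fraction(int(fixed), 10**len(fixed))
    # Geometric series: block/10^L + block/10^(2L) + ... = block/(10^L - 1),
    # then shifted past the fixed digits.
    L = len(block)
    value += Fraction(int(block), 10**L - 1) / 10**len(fixed)
    return value

print(repeating_to_fraction(0, "", "9"))       # 1    -- 0.999... = 1
print(repeating_to_fraction(0, "1", "6"))      # 1/6  -- 0.1666...
print(repeating_to_fraction(0, "", "142857"))  # 1/7  -- 0.142857142857...

The first example only comes out to 1 because the repeating tail is summed as a limit; under the other reading, it would stay just short.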
On the other hand, not defining 0.999... as a limit means that 0.999... is its own distinct real number. Though not rational, it still has a place on the number line, and like any real number other than 1 itself, there are infinitely many other numbers between it and 1. This number arises more naturally from the notation, since nothing in the notation itself says "limit", and this reading makes every decimal expansion a unique number.