Not the limit of the infinite sum, but the limit of the finite partial sums. There's no difference between the limit of the finite partial sums and "the infinite sum": the latter is defined to be the former.
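In symbols (this is just the standard definition, nothing beyond what's said above):

$$\sum_{n=1}^{\infty} a_n \;:=\; \lim_{N\to\infty} \sum_{n=1}^{N} a_n \;=\; \lim_{N\to\infty}\,(a_1 + a_2 + \dots + a_N),$$

provided that limit exists; if it doesn't, the infinite sum is simply undefined (the series diverges).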
Why is that so? Because originally you only have a sum of two objects. You can extend it to more summands recursively, for example by defining a+b+c = (a+b)+c: first add a and b, then add c to the result. This way you can define a sum of any finite number of summands, but not of infinitely many. That's why the infinite sum requires a separate definition. Notice also that it differs from a finite sum in other ways: a finite sum is always defined, an infinite one is not necessarily. A finite sum can be taken in any order, but an infinite one cannot in general: rearranging a conditionally convergent series can change its value, or even destroy convergence.
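A concrete example of the definition at work (just the usual geometric series):

$$\sum_{n=0}^{\infty} \frac{1}{2^n} \;=\; \lim_{N\to\infty} \sum_{n=0}^{N} \frac{1}{2^n} \;=\; \lim_{N\to\infty}\left(2 - \frac{1}{2^N}\right) \;=\; 2.$$

Every partial sum here is an ordinary finite sum; the "infinite sum" is nothing more than the number those finite sums approach.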
You can even go further and define a sum over an uncountable collection of numbers, but that needs yet another definition.
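For nonnegative terms, one standard way to do it (not specific to the question, just the usual definition) is to take the supremum over all finite partial sums:

$$\sum_{i\in I} a_i \;:=\; \sup\Bigl\{\, \sum_{i\in F} a_i \;:\; F\subseteq I \text{ finite} \Bigr\}, \qquad a_i \ge 0.$$

A perhaps surprising consequence: such a sum can be finite only if all but countably many of the a_i are zero.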
EDIT: Forgot the second question. I don't understand exactly what you're getting at. It's easy to show that there's no biggest number in ]0,1[, for example: by definition ]0,1[ = { real x : 0 < x < 1 }. If there were a biggest number in ]0,1[, call it a, then both of these would hold:
a < 1
For every x in ]0,1[, x ≤ a.
Now choose b = (a+1)/2, i.e. the average of a and 1. It's easy to see that a < b < 1 (the inequalities are spelled out below), so b is in ]0,1[, and on the other hand b > a. That contradicts a being the biggest element of ]0,1[.
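Spelling out why a < b < 1, using only a < 1:

$$a \;=\; \frac{a+a}{2} \;<\; \frac{a+1}{2} \;=\; b \;<\; \frac{1+1}{2} \;=\; 1.$$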
Besides, if an open interval included its endpoints, how would it differ from the closed interval? In no way; it would be exactly the same thing. The difference between them is nevertheless huge, as you can see already in elementary analysis: for example, a continuous function on a closed interval is bounded and attains its maximum and minimum, but on an open interval it need not be.
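Two standard functions on ]0,1[ illustrate this (the usual textbook examples, nothing specific to the question): f(x) = x is continuous and bounded on ]0,1[ but attains no maximum there, since its supremum 1 lies outside the interval, and g(x) = 1/x is continuous on ]0,1[ but not even bounded. Neither phenomenon is possible for a continuous function on the closed interval [0,1].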