A couple of (very basic) questions about computer code

Ok, shamelessly hijacking the thread for a moment :blush:

I recall reading about a DNA-based calculator that could be programmed with a gajillion different inputs, where the desired solution to the question would be a specific sequence in the output. Thanks to recent advances in PCR and related techniques it turns out to be very easy to generate a huge batch of varying input strands and simply dump them into a test tube - shaky shaky, filter and clean, and then you can examine the resulting "answer" that clumped to your target answer.

Am I barking mad here, or does that ring a computational bell?

You're not mad; here's an example:
http://www.nature.com/nature/journal/v429/n6990/full/nature02551.html

The idea of DNA based computing is that DNA can encode information very densely. So you can put a lot of different combinations into a small space and try them all out. But this is still a classical algorithm and the amount of information scales linearly with the amount of DNA. With a quantum computer, the complexity scales exponentially with the number of input qubits, so for the problems that a quantum computer is good at, a sufficiently large version of it would still beat any DNA based computer (as far as we know).
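To put rough numbers on that scaling argument, here is a small illustration (not from the linked paper - just arithmetic): a classical brute-force search over 2^n candidates needs on the order of 2^n physical records, DNA strands included, while a quantum register spans the same space with n qubits.

```python
# Illustration only: a classical brute-force search over 2**n candidates
# needs roughly one "record" (e.g. one DNA strand) per candidate, while a
# quantum register covers the same space with just n qubits.
for n in (10, 20, 40, 80):
    candidates = 2 ** n
    print(f"n = {n:2d}: {candidates:.2e} candidate strands vs {n} qubits")
```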

Actually, the DNA based computer is very much on topic, since it is a good example of a system that does not use binary encoding at the fundamental level. If I correctly remember my biology lessons, it has four different ways the bases can be arranged so it uses a quaternary encoding instead of binary.
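To make the quaternary point concrete, here's a toy sketch. The A/C/G/T-to-digit assignment is an arbitrary choice for illustration, not anything biological:

```python
# Sketch: treating the four DNA bases as the digits of a base-4 number.
# The A/C/G/T -> 0..3 mapping below is an arbitrary illustrative choice.
BASE4 = {"A": 0, "C": 1, "G": 2, "T": 3}

def strand_to_int(strand: str) -> int:
    """Read a strand as a quaternary numeral, most significant base first."""
    value = 0
    for base in strand:
        value = value * 4 + BASE4[base]
    return value

print(strand_to_int("GAT"))  # 2*16 + 0*4 + 3 = 35
```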
 
Yes! That wasn't the exact experiment I was remembering, but it uses the same principles. I remember being blown away when I read about it.

This example is actually more interesting because they've designed a specific circuit to test for target genes.

And yes, I'm fairly certain you're right about the base-4 elements of DNA. So not binary, but quaternary.
 
A small question:

Although this is not really needed for anything I am thinking of:

-Would a system which uses 4 distinct symbols (i.e. a quaternary system) realistically be as efficient as a system that uses two, for the purposes of computers?
I ask this because, unlike a system with 2 symbols, the system of 4 symbols does not seem to contain 2 as any major part of it. The system of two symbols has to be assumed to already include the square root of two, since it contains 2 as its numeral edge as well as a symbol inside it, much like our system of 9 digits (1 to 9; 0 is not exactly a digit) contains 9 as its edge but also has a middle point (5). A system of 4 symbols neither contains 2 as its edge, nor does it have an integer middle point. This seems to me to be not really computable by anything outside that system (e.g. our DNA). Thus (if my assumption is closer to being correct than not) it would follow that we cannot effectively use it as part of computers, since the square root of 2 would not be there, and neither would a border between the digits. 2 is, of course, not just any number, but one of huge importance (one need only take a peripheral view of how many important equations set 2 as their base, power, or another factor of note), like 1, 0, and in our own system also 5 and 9 (which includes 3).
2 would be, in a system of 4 symbols, not a border of all symbols, but the sum of any parts of equal numbers of digits in that system. On the other hand there would only be the following such parts (in the case that the same digits can be used in all first connections between them, i.e. the first connections are independent of the next ones): 1,2 and 1,3 and 1,4 and 2,3 and 2,4 and 3,4. In the case that the first connections are not independent, the symbols used up in pairs may even be used up altogether in any such pairing: 1,2 and 3,4. There is also a middle status to that, and a number of other arrangements, like pairing a symbol with itself while keeping the second part of the pair separate from the first. Still, they are of a different number and manner than either the binary system or our 9-digit system. So I am not at all sure that the implications of using that system are mapped out to a degree that moving to it from a binary one (or even translating the 9-digit system we have had since ancient times into it) would be beneficial, even in the longer run.

That DNA obviously, in some manner, computes itself in a sort of 4-symbol system is great reason to think it is a highly efficient system. DNA is not a human being, though. Just as we cannot compute exactly how much blood our body needs to send to an organ for it to be optimally beneficial, whereas the system of the body can, I have to guess we cannot compute more efficiently with a 4-symbol system than we can now.
 
-Would a system which uses 4 distinct symbols (i.e. a quaternary system) realistically be as efficient as a system that uses two, for the purposes of computers?
That all depends on the task for which you've designed your computer.
If you need a hammer you should not try to use a sewing needle, right?

As to the rest of your post, I have absolutely no idea what you're trying to say.
 
No. I don't understand the question either. Kyriakos seems to be talking about various base systems. But I don't know what he's saying with this edge business, or why there's any significance in there being a "middle".

To me, binary is just the way it is. You can compute in base 4 (given four symbols) quite easily. But I don't see that it gives you any real advantage over binary, which translates to base 4, 8 or 16 with no effort at all.
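The "no effort at all" part is worth spelling out: since 4 = 2^2, each base-4 digit is exactly one pair of binary digits. A quick illustrative sketch:

```python
# The binary -> base-4 translation is literally "read the bits two at a time".
def to_base4(n: int) -> str:
    bits = format(n, "b")
    bits = bits.zfill(len(bits) + len(bits) % 2)  # pad to an even length
    return "".join(str(int(bits[i:i + 2], 2)) for i in range(0, len(bits), 2))

n = 0b110110  # 54 in decimal
print(format(n, "b"), "->", to_base4(n))  # pairs 11|01|10 become digits 3,1,2
```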

Why zero doesn't count as a digit, I've no idea. Certainly it's not a member of the natural numbers, but I don't know what that's got to do with it.
 
I don't think it'd be as efficient. You'd essentially need to layer it on top of a binary system anyway, since you'll want to be using binary gates. Even if you don't want binary gates - any gate taking in more than 2 inputs can be built using binary gates. But I haven't really thought about this for long, so I could be missing something.
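A minimal sketch of that claim, using Python functions as stand-in "gates": a 4-input AND built entirely out of 2-input ANDs.

```python
# Sketch of the claim above: a 4-input AND built purely from 2-input ANDs.
def and2(a: int, b: int) -> int:
    return a & b

def and4(a: int, b: int, c: int, d: int) -> int:
    # Two layers of 2-input gates reproduce the 4-input behaviour.
    return and2(and2(a, b), and2(c, d))

# Exhaustively check all 16 input combinations against the direct definition.
assert all(and4(a, b, c, d) == (a & b & c & d)
           for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1))
```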

I don't understand the rest of the post either.
 
My post can still be wrong regardless, but its point can be cleared up a bit more, sure:

By "edge" of a system of main numbers (e.g. 1,2,3,4,5,6,7,8,9) I mean the final numeral before any repetition.
I did not include 0 in that, for two reasons:

a) Zero did not exist in that sense in the ancient Greek numeral system, since that system started with 1, went up to 9, and then used a unique symbol for 10 and another for every further ten, i.e. nine unique symbols in total by then, then a unique symbol for 100 (and so on). It is still (potentially) perfectly compatible with the Arabic numeral system, since both the ancient Greek one and the Arabic one have only 9 different digits (the Greek one also having 9 different digits for each rise in each order of magnitude, but still 9 for all of those too, AND the unique symbol for 10, or any later unique symbol, is always constructed from the 9 digits, e.g. the symbol for "10" is still 9+1 in the ancient Greek numerals as well). Since 0 is never the same when preceded by another digit, any number including zero which is not just zero can still be perfectly written in the Greek numeral system.
That it is perfectly compatible with the Arabic system used now is evident; otherwise there would already have been a need in medieval Islam to considerably adapt virtually every theorem in existence up to then. We still use those same, unaltered theorems, though, so there was just no need for that at all. (The mere mention that something cannot happen if the crucial number used is 0 is not a significant adaptation of the older theorem, given that the older theorem did not take 0 into account anyway.)

b) Zero seems to be a concept which arose in a way similar to the imaginary number (i): namely, such numbers were shown to be needed to fix holes in later algebra, which could not be fixed, and algebra thus maintained, if either 0, or an imaginary number i whose square was -1, never existed.

So, my point was just that if a numeral system neither has a center (123456789) nor has 2 as its edge (12), then it seems likely that the square root of 2 does not have the meaning in that system that it has in either our 9-based one or the binary one. In turn, this means that anything related to the square root of 2, or potentially even 2 itself (and thus maybe even 1/2), will be difficult to adapt to that 4-symbol system.

PS: Do not ask me to give a detailed account of why the square root of 2 is important. Same reason why 2 or 1/2 is. Virtually every important equation has that number, from the Mersenne primes (2 as a base) to the Riemann zeta function (1/2 possibly being crucial to the hypothesis itself).

PS2: The idea of the middle of a system of numbers has obvious importance to the sets leading anywhere from infinity to a specific point.

Finally, sorry if I gave the impression I am posting something others do not know - because I am definitely not. I am, however, posting something which in my view is rather established, apart from the (hypothesised) problems which may arise with a 4-digit system. My earlier post was pretty much just that: some general reasons for the hypothesis that a 4-symbol system will be impractical for us and thus for the computers we are using.
 
I don't think it'd be as efficient. You'd essentially need to layer it on top of a binary system anyway, since you'll want to be using binary gates. Even if you don't want binary gates - any gate taking in more than 2 inputs can be built using binary gates. But I haven't really thought about this for long, so I could be missing something.
But that all depends on the design of the computer - you're assuming binary gates. If the architecture is binary, then it follows that a quaternary language would have to be parsed to a binary assembly code. But if the architecture were natively 4-fold (like DNA), then there's no reason to think that certain algorithms would be less 'efficient' if executed within that system as opposed to transposing them down to binary and executing on a traditional transistor chip.

So, my point was just that if a numeral system neither has a center (123456789) nor has 2 as its edge (12), then it seems likely that the square root of 2 does not have the meaning in that system that it has in either our 9-based one or the binary one. In turn, this means that anything related to the square root of 2, or potentially even 2 itself (and thus maybe even 1/2), will be difficult to adapt to that 4-symbol system.

PS: Do not ask me to give a detailed account of why the square root of 2 is important. Same reason why 2 or 1/2 is. Virtually every important equation has that number, from the Mersenne primes (2 as a base) to the Riemann zeta function (1/2 possibly being crucial to the hypothesis itself).
The number system doesn't matter - 2^-1 is a value of equal utility across bases. If you think it's more important because most humans use base 10, or computers are constructed in base 2, you're wrong. Base systems are independent of mathematical entities. So i is the same value whether you're working in binary, natural logs, or base 17.

Really, if you're interested in this sort of stuff, we can recommend quite a few books that would help you understand some basic number theory principles and introductory computer theory material. As it stands, well, a little knowledge is a dangerous thing ;)
 
Base systems are independent of mathematical entities. So i is the same value whether you're working in binary, natural logs, or base 17.

Really, if you're interested in this sort of stuff [...]

I would love to read anything which proves the statement of yours I highlighted. Indeed it would change my view on this and, if true, help me see number systems in a better way.

So please post it if it does exist.

For what it is worth, googling your phrase there instantly produced this:

http://en.wikipedia.org/wiki/Mathematical_universe_hypothesis

And from the above you get:

wikimaththeory said:
Tegmark's mathematical universe hypothesis (MUH) is: Our external physical reality is a mathematical structure. That is, the physical universe is mathematics in a well-defined sense, and "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world".[2][3] The hypothesis suggests that worlds corresponding to different sets of initial conditions, physical constants, or altogether different equations may be considered equally real. Tegmark elaborates the MUH into the Computable Universe Hypothesis (CUH), which posits that all computable mathematical structures exist.

From which I can deduce some things:

1) Given that the above statement is a hypothesis, it obviously has no proof (at least not yet).

2) The above hypothesis only speaks of all computable math structures existing, but does not at all include the claim that all would be equally evident to all types of beings observing them.

3) From the second deduction it follows that some math structures will possibly be impossible for all observing beings to note, and thus that some observing beings will note less than others. The hypothesis itself is based on mathematics being "universal" (i.e. of the whole universe) in the first place, so that would mean some observing beings won't discover some math structures at all, no matter how they develop their means (part of those means being the base system or systems).

4) From the third deduction it follows that they won't discover them regardless of what base system they use. From which it immediately follows that they will discover either all they can by using n base systems, or a fraction of n. By which, finally, it seems to follow that any major translation from one base system to more will inevitably lead to losing information, after a point at which it becomes efficient to keep all those base systems for comparison reasons which give rise to new math structures (as in one aforementioned case of the binary system's unique rounding-up feature due to the limited number of decimals allowed).

So my point is, on the one hand, that even if one can examine the same (or more) math structures using more base systems, this will tend to flatten out at some point, OR be so exponentially complicated to maintain that it will practically flatten out, because the difficulty of noting the relationships between the by-then numerous base systems becomes even greater than the one we would face with 1 or 2 base systems.
In conclusion, I just heavily doubt that the part of your reply I stressed is both true and easy to prove. It may still be true but not easy to prove, or be false. But feel free to (easily) prove otherwise :D
 
If the base mattered there wouldn't be any easy translation between bases. There is such an easy translation. Therefore bases don't matter.

Of course, if you're going to count anything it has to be in some base. But it makes no difference which one.
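That "easy translation" can be demonstrated in a few lines. The helper below is a standard conversion routine (my own sketch, not from the thread): whatever base you write a value in, the value survives the round trip unchanged.

```python
# Any value survives a round trip through any base unchanged.
DIGITS = "0123456789abcdef"

def to_base(n: int, b: int) -> str:
    """Write non-negative integer n as a base-b numeral (2 <= b <= 16)."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, b)
        out.append(DIGITS[r])
    return "".join(reversed(out))

value = 2014
for b in (2, 3, 4, 8, 16):
    s = to_base(value, b)
    assert int(s, b) == value  # the value itself never depends on the base
    print(f"base {b:2d}: {s}")
```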
 
But that all depends on the design of the computer - you're assuming binary gates. If the architecture is binary, then it follows that a quaternary language would have to be parsed to a binary assembly code. But if the architecture were natively 4-fold (like DNA), then there's no reason to think that certain algorithms would be less 'efficient' if executed within that system as opposed to transposing them down to binary and executing on a traditional transistor chip.

I would even argue that if the costs of binary and quaternary gates were the same, the quaternary approach would be more efficient, as you could use fewer gates for the same mathematical operation.
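For a rough feel of that "fewer gates" intuition, compare digit counts: a base-4 numeral is about half as long as the base-2 one (whether that actually saves hardware is a separate question, as discussed above).

```python
# A base-4 representation of n needs about half as many digits as base-2.
import math

for n in (255, 65535, 10**9):
    d2 = len(format(n, "b"))                 # number of binary digits
    d4 = math.floor(math.log(n, 4)) + 1      # number of quaternary digits
    print(f"n = {n}: {d2} binary digits vs {d4} quaternary digits")
```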
 
If the base mattered there wouldn't be any easy translation between bases. There is such an easy translation. Therefore bases don't matter.

Of course, if you're going to count anything it has to be in some base. But it makes no difference which one.

This may well hold true IF you define the base by having it linked, from the start, to our own (the 9-digit, or decimal, base system).

Remember how this entire argument started? It was about DNA-reliant computers. A computer based on some other "thing" that calculates in a base is not the same as a computer based on binary, a base developed by us in such a way that our "regular" number system can be translated more or less stably into it. Saying a computer based on DNA would work as easily/well as one based on binary seems to me quite a lot like saying "I can speak one or two English dialects, so I surely can efficiently learn to speak the 'language' used by whales too".
The leap is way too vast.
 
I'll take your word for it. Though truthfully I don't understand at all.

As far as I can tell, whether the logic is based on binary hardware or human wetware doesn't seem to have any bearing on the result. So I wouldn't expect a DNA-based computer to be functionally different.

edit: not, of course, that human wetware functions like either a binary computer or a DNA-based one.
 
I just hypothesise that there will be a huge difference between the two as regards the ease of translating our currently used base systems (created by us) to the DNA-supported one, exactly because in the latter case you are using something that already has its own "base system", so to speak. Which is why I likened it to knowing some human dialects, or languages, and trying to learn the language of a whale based on that.
To flesh out why I think so: if we create, ourselves, a system based on 4 symbols, it would not be the same as using a system that already has its own 4-symbol base (DNA), because the latter actually works by itself, whereas the former is a system devised by us, we having the quality of being separate from the actual system by and large.
 
But that all depends on the design of the computer - you're assuming binary gates. If the architecture is binary, then it follows that a quaternary language would have to be parsed to a binary assembly code. But if the architecture were natively 4-fold (like DNA), then there's no reason to think that certain algorithms would be less 'efficient' if executed within that system as opposed to transposing them down to binary and executing on a traditional transistor chip.

The thing is that any gate whose inputs take 4 values (0,1,2,3) can be designed using gates whose inputs take only 2 values (0,1). So you're just creating overhead by introducing more input values, making the whole system less efficient as a result.

Here's a guy breaking down the benefits of binary circuitry, a guy that seems to know his stuff. (look at the first answer)
 
I just hypothesise that there will be a huge difference between the two as regards the ease of translating our currently used base systems (created by us) to the DNA-supported one, exactly because in the latter case you are using something that already has its own "base system", so to speak. Which is why I likened it to knowing some human dialects, or languages, and trying to learn the language of a whale based on that.
To flesh out why I think so: if we create, ourselves, a system based on 4 symbols, it would not be the same as using a system that already has its own 4-symbol base (DNA), because the latter actually works by itself, whereas the former is a system devised by us, we having the quality of being separate from the actual system by and large.

No. If you can write down an isomorphism between two structures, then they will behave the same way mathematically. You can write down such an isomorphism between the binary and quaternary systems, so they are equivalent.

The comparison to languages fails, because there are no isomorphisms between languages (which is what makes translating such a difficult job).
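The isomorphism being referred to can be written out explicitly (a sketch of my own, not from the thread): map each quaternary digit to a pair of bits, and arithmetic comes along for free.

```python
# Each quaternary digit corresponds to exactly one pair of bits, and the
# mapping respects arithmetic - the two notations are structurally identical.
def quat_to_bits(q: str) -> str:
    return "".join(format(int(d), "02b") for d in q)

a, b = "312", "23"  # base-4 numerals
# Add the values, viewing the result in binary, via either representation:
sum_base4 = format(int(a, 4) + int(b, 4), "b")
sum_via_bits = format(int(quat_to_bits(a), 2) + int(quat_to_bits(b), 2), "b")
assert sum_base4 == sum_via_bits  # same structure, different notation
```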

The thing is that any gate taking in 4 inputs (0,1,2,3) can be designed using gates taking in only 2 inputs (0,1). So you're just creating overhead by introducing more inputs, making the whole system less efficient as a result.

In the quaternary system you can use fewer gates to accomplish the same result. So if the gates had the same properties, the quaternary system would be smaller and faster.

Here's a guy breaking down the benefits of binary circuitry, a guy that seems to know his stuff. (look at the first answer)

That argument boils down to: with semiconductors, binary gates are much easier and cheaper to make than other gates, so we use them to build more complicated gates. Which is true for semiconductors, but might not be true for other systems like DNA, where you would have to throw information away to make a binary gate.
 
Yeah. Tanenbaum is good.

But why does that answer in your link refer to binary asthmatic?

Is a binary asthmatic someone who can either breathe or not?
 
No. If you can write down an isomorphism for two structures, then they will behave the same way mathematically. You can write down such an isomorphism for binary and quaternary system, so they are equivalent.

The comparison to languages fails, because there are no isomorphisms in languages (which makes translating such a difficult job)

I am not speaking of cups and doughnuts though, or even (as I stressed already) about a binary base system developed by us versus a quaternary base system developed by us. The difference here is that the computer will be working with DNA, not some system simulating DNA. Do you think both are the same? I heavily doubt they are the same at all, given that DNA works by itself, not just because someone planted a quaternary system in it, but because it is itself the quaternary system it uses. The computer is not its own developer or user. We are not the computer. This is why I claimed this may not work well.
 
I would love to read anything which proves the statement of yours I highlighted. Indeed it would change my view on this and, if true, help me see number systems in a better way.

So please post it if it does exist.
I have no idea if that's a proven mathematical statement, and I wouldn't really know how to find out. My claim - that the base doesn't matter, and the mathematical entity transcends whatever base you choose to represent it in - can be easily demonstrated with, I'm sure, thousands of simple examples. I'll do one:

Consider, pi =
3.1415926535897... Base10
3.1241881240744... Base9
3.1103755242102... Base8
3.0663651432036... Base7
3.0503300514151... Base6
3.0323221430334... Base5
3.0210033312222... Base4
10.010211012222... Base3
11.001001000011... Base2
3.243f6a8885a3... Base16

Each and every one of these apparently different numerical sequences precisely expresses the mathematical entity pi.
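Anyone can check the table above with a few lines of code. The routine below (my own sketch) peels off fractional digits of pi in any base; note that double precision only vouches for the first dozen or so digits.

```python
# Peel off digits of pi's fractional part in any base (2..16). Double
# precision only guarantees roughly the first dozen digits.
import math

def frac_digits(x: float, base: int, count: int) -> str:
    digits = "0123456789abcdef"
    frac, out = x - int(x), []
    for _ in range(count):
        frac *= base
        d = int(frac)
        out.append(digits[d])
        frac -= d
    return "".join(out)

print(frac_digits(math.pi, 16, 6))   # matches the Base16 row: 243f6a...
print(frac_digits(math.pi, 2, 12))   # matches the Base2 row: 001001000011...
```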

The base you choose to express the value of the mathematical entity doesn't matter - the ratio of a circle's circumference to its diameter is a mathematical fact of this universe. That ratio doesn't change if you measure it in metric versus Imperial units - and neither does the base you choose to express it in affect the ratio.

There are no numbers that only exist in one base or another.*

For what it is worth, googling your phrase there instantly produced this:

http://en.wikipedia.org/wiki/Mathematical_universe_hypothesis
Interesting. Never heard of that before.


So my point is, on the one hand, that even if one can examine the same (or more) math structures using more base systems, this will tend to flatten out at some point, OR be so exponentially complicated to maintain that it will practically flatten out, because the difficulty of noting the relationships between the by-then numerous base systems becomes even greater than the one we would face with 1 or 2 base systems.
Not sure what you're really saying here, but you should really ask a mathematician about this stuff. I've just read a few books, nothing more.

In conclusion, I just heavily doubt that the part of your reply I stressed is both true and easy to prove. It may still be true but not easy to prove, or be false. But feel free to (easily) prove otherwise :D
You really need to read some books on mathematics. You obviously enjoy thinking about these things, so you should try to become familiar with them :)

Here are a couple that I've read that you might find useful:
http://www.goodreads.com/book/show/208916.The_Music_of_the_Primes
http://www.goodreads.com/book/show/116623.Decoding_the_Universe
http://www.goodreads.com/book/show/2502178.One_to_Nine

I'm sure there are plenty more that would be even better, but those 3 popped into my mind immediately.


*I totally just made this up, but it sounds right ;)


EDIT: Ok, a little wikipedia reading brought me to this article on RADIX ECONOMY - totally relevant to the discussion of computational base choice:
https://en.wikipedia.org/wiki/Radix_economy
...and also:
http://mathworld.wolfram.com/Base.html
 
That argument boils down to: with semiconductors, binary gates are much easier and cheaper to make than other gates, so we use them to build more complicated gates. Which is true for semiconductors, but might not be true for other systems like DNA, where you would have to throw information away to make a binary gate.

Are these 'gates' whose inputs take 4 values as powerful as binary gates, though, in the sense that you can build any function you can think of with them?

Because that's another good thing about binary gates - you can build anything using just one type of gate: the NAND gate. That helps reduce costs because you essentially have everything built out of one type of building block.
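That NAND-universality claim can be shown in a few lines - every basic gate below is written purely in terms of NAND (Python functions standing in for hardware gates):

```python
# Every basic gate, written only in terms of NAND.
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def not_(a):     return nand(a, a)
def and_(a, b):  return nand(nand(a, b), nand(a, b))
def or_(a, b):   return nand(nand(a, a), nand(b, b))
def xor_(a, b):  return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

# Exhaustively verify the truth tables against Python's bit operators.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```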
 