Is it better to leave your computer on all night or to turn it off?

Don't take my story as proof that you can, but my laptop (which I got in Spring 2006) has been on maybe 80% of the time or more since I got it, and I have yet to notice any real decrease in performance.
 
Keeping it on means you can run things like antivirus in the middle of the night without it getting in your way. On the other hand, Windows tends to get unstable and slow if you don't reboot every once in a while.

But power is a greater factor if you are paying your own electric bill.
 
Creative Labs had a driver on their site, but of course it didn't work. Makes you wonder if they did it to get people to buy more audio cards.

Creative have shot themselves in the foot. It isn't as blatant as deliberately shipping dud cards so people have to buy another. I believe they are afraid to accept outside help with drivers and bug fixing in case someone steals their ideas.
 
Only with old versions of Windows; my Vista rig does fine with a monthly reboot to install updates.
I've noticed real differences with XP. I think a large part of it is just freeing the memory held by background processes.

I haven't been using Vista long enough to judge.
 
Till, Mulholland:

In semiconductors conductivity is (roughly) directly proportional to temperature, so your explanation makes no sense.
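For anyone following along, the temperature dependence being described can be put in rough numbers. In an intrinsic semiconductor the carrier concentration goes roughly as T^1.5 · exp(-Eg / 2kT), so conductivity climbs quickly with temperature. A minimal Python sketch using textbook silicon values (Eg ≈ 1.12 eV; the figures are illustrative, not from any datasheet):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
E_GAP_SI = 1.12            # silicon band gap in eV (approximate)

def carrier_ratio(t_hot, t_cold, e_gap=E_GAP_SI):
    """Ratio of intrinsic carrier concentrations n_i(t_hot) / n_i(t_cold),
    using n_i proportional to T**1.5 * exp(-Eg / (2*k*T))."""
    def n_i(t):
        return t ** 1.5 * math.exp(-e_gap / (2 * K_BOLTZMANN_EV * t))
    return n_i(t_hot) / n_i(t_cold)

# Going from 300 K (room temperature) to 350 K multiplies the intrinsic
# carrier count -- and hence conductivity -- by roughly 30x.
ratio = carrier_ratio(350.0, 300.0)
print(f"n_i(350K)/n_i(300K) = {ratio:.0f}")
```

Strictly speaking the dependence is closer to exponential than to "directly proportional", but the direction of the claim holds: hotter semiconductor, more carriers, more conduction.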
 
I always turn it off if I'm not going to be using it for over 3 hours. Anything less, I just let it idle.
 
This is (sort of) true, but the circuit configuration can overcome this (and often does).

From my experience this is not true. There are noticeable differences in total current draw, and they are directly proportional to temperature, not inversely.
 
It is true that the beta of a transistor does vary in direct proportion to the junction temperature. This causes an increase in the collector current in a standard circuit. It is regenerative and causes a condition known as thermal runaway.

If the emitter resistance is not swamped, it creates a degenerative condition that adds stabilisation to the current.

Secondly, as the heat removed by the heatsink is proportional to the differential between the case and ambient temperatures, it has a stabilising effect of its own as the device gets hotter.

And all this without using a second transistor, which could bring the current drift down to minuscule proportions.
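The stabilising effect of the emitter resistor can be sketched with crude numbers. Everything below is a hypothetical illustration (β rising from 100 to 150 over a 50°C span, V_BE dropping about 2 mV/°C, a 680 Ω emitter resistor), not a measurement of any real circuit:

```python
def ic_fixed_bias(beta, i_b=20e-6):
    """Fixed base-current bias: collector current tracks beta directly,
    so any thermal rise in beta shows up one-for-one in I_C."""
    return beta * i_b

def ic_degenerated(v_b, v_be, r_e):
    """Emitter degeneration: I_C is approximately (V_B - V_BE) / R_E,
    which is nearly independent of beta."""
    return (v_b - v_be) / r_e

# Without degeneration: beta 100 -> 150 means 50% current drift.
ic_cold = ic_fixed_bias(beta=100)             # 2.0 mA
ic_hot = ic_fixed_bias(beta=150)              # 3.0 mA

# With a 680-ohm emitter resistor and a 2 V base voltage, only the
# ~0.1 V thermal shift in V_BE matters: drift shrinks to a few percent.
ic_cold_deg = ic_degenerated(2.0, 0.65, 680)  # about 1.99 mA
ic_hot_deg = ic_degenerated(2.0, 0.55, 680)   # about 2.13 mA
```

The same arithmetic shows why thermal runaway is a fixed-bias problem: in the first circuit the extra current raises the junction temperature, which raises beta, which raises the current again; the degenerated circuit breaks that loop.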
 
This debate has officially gone beyond my basic understanding of electricity. @ Eli and Lndm, are you guys electrical engineers?
 
lndm: Your point?

Eli, simply that engineers can choose not to let the nature of the beast dominate.

Take CMOS, for example, the topology of choice in logic circuits. Two FETs of opposite polarity sit in series across the rails. The input locks the output either up or down. One device drops all the voltage and presents a very high impedance. No current flows.

The only time current flows in a CMOS circuit is when the states are changing and neither device is in its high impedance state. This occurs for a short period of time, perhaps nanoseconds.

This is why processors get hot while they're working, and they cool down when they're idle.
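The relation behind that last point is the classic dynamic-power formula, P = α·C·V²·f (activity factor, switched capacitance, supply voltage, clock frequency). A quick Python sketch with made-up but plausible numbers, just to show the busy-vs-idle gap:

```python
def dynamic_power(activity, capacitance, v_dd, freq):
    """Classic CMOS dynamic power: P = alpha * C * Vdd**2 * f.
    Only nodes that actually toggle (fraction `activity`) burn power."""
    return activity * capacitance * v_dd ** 2 * freq

# Illustrative figures only: 1 nF of effective switched capacitance
# on a 1.2 V rail at 3 GHz.
p_busy = dynamic_power(0.5, 1e-9, 1.2, 3e9)   # half the nodes toggling
p_idle = dynamic_power(0.01, 1e-9, 1.2, 3e9)  # almost nothing toggling
print(f"busy = {p_busy:.2f} W, idle = {p_idle:.3f} W")
```

Same chip, same clock: cut the toggling by 50x and the dynamic power drops by 50x, which is exactly why an idle processor runs cool.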
 
Your computer consists mostly of semiconductors.
That doesn't mean the semiconductor behaviour dominates for every component.

Really though, I'm playing devil's advocate. I don't think these sorts of things have significant effects on wear, other than possible thermal expansion and contraction.

If you blow up a computer part that wasn't the bottom of the barrel cheapest thing you could find, it's probably your fault for not providing adequate cooling, spilling stuff on it, shaking it, etc.
 
significant effects on wear, other than possible thermal expansion and contraction.

Yes, if it weren't for this expansion and contraction, and assuming the devices remain hermetically sealed and are not subjected to electrical conditions beyond their capabilities, they would function indefinitely.
 
lndm:

Again, what's your point? Where do we disagree?

Besides, if we've moved to CMOS architecture, then at the first application of voltage you see large inrush currents due to the initial charging of the capacitances and the initialization switching. You don't have to go as far as processors to see that; it happens in every CPLD/FPGA. This current is usually orders of magnitude greater than all the other currents in the circuit. AFAIK, the two main factors that set the size of this current, capacitance and ESR, do not change strongly enough with temperature to make a difference.
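The inrush mechanism can be sketched with the simplest possible model: a fully discharged capacitor charging through its own ESR from a hard rail. The component values below are invented for illustration (real supplies add soft-start, trace inductance, and source impedance, all of which tame the peak):

```python
import math

def inrush_current(v_rail, esr, capacitance, t):
    """Current into a discharged capacitor charged through resistance `esr`:
    i(t) = (V / R) * exp(-t / (R * C)). Peak occurs at t = 0."""
    return (v_rail / esr) * math.exp(-t / (esr * capacitance))

# Hypothetical values: 100 uF of bulk decoupling, 50 milliohm ESR, 3.3 V rail.
i_peak = inrush_current(3.3, 0.05, 100e-6, 0.0)
i_settled = inrush_current(3.3, 0.05, 100e-6, 50e-6)  # 10 time constants later
print(f"peak = {i_peak:.0f} A, after 10*RC = {i_settled:.4f} A")
```

A tens-of-amps theoretical peak decaying in microseconds is indeed orders of magnitude beyond the circuit's steady-state currents, which is the point being made above.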

This debate has officially gone beyond my basic understanding of electricity. @ Eli and Lndm, are you guys electrical engineers?

I am.
 