My question: does top report the real CPU load, or could the network be maxed out while top still reports the CPU load as 100%? I have a process that uses 192 cores for the number crunching and 64 cores for the database work. top says the CPUs are maxed out on the database server, so I optimized the query and took the read load off it, but the process has not sped up and the DB is still maxed out. I suspect the network bandwidth, but I would have expected that to show up as <100% CPU load. Also, any recommendations for network bandwidth monitoring?
[EDIT] This is on Fedora running MySQL.
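One quick way to tell whether the link rather than the CPU is the bottleneck is to watch the NIC counters while the job runs; iftop or sar -n DEV 1 (from the sysstat package) will show this directly. A minimal do-it-yourself sketch, assuming the interface is called eth0 (substitute whatever ip link reports on your machine):
B1=$(awk '$1=="eth0:" {print $2 + $10}' /proc/net/dev) # cumulative rx + tx bytes
sleep 1
B2=$(awk '$1=="eth0:" {print $2 + $10}' /proc/net/dev) # sample again one second later
echo "$((B2 - B1)) bytes/sec on eth0" # compare this against the link speed
If that number sits close to the link's capacity (roughly 125 MB/s for gigabit) while top shows the DB cores busy, the network is at least part of the story.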
How do I open up a multi-gigabyte text file, or at least chop it up into files that won't crash any text editor I try?
One text editor that would open it is vi (though you may not easily be able to do what you want with it). It is a unix tool; I do not know how to get it to open a file on a Windows system. Cygwin may do, as may something here, but you may have to install a virtual machine. Considering the questions you come up with here, I think you would benefit from that.
The way I would do it would be with code. For example, you could install Perl and copy some of the examples here. It is a fairly long-winded way to solve your problem, but again, considering the questions you come up with here, I think you would benefit from the learning process.
[EDIT] Another way, using unix tools, would be something like:
wc -l filename.txt # get the number of lines in the file
2000000 filename.txt # say 2 million lines
head -n 50000 filename.txt > file1.txt # put the first 50k lines into a new file
head -n 100000 filename.txt | tail -n 50000 > file2.txt # put the next 50k lines into another new file
etc. If you want many of these, you could generate the command lines in Excel or something, or do it directly in the shell; see the sketch below.
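A minimal sketch of that, assuming standard coreutils are available (split does the chunking in one command; the loop just automates the head/tail pattern above, with hypothetical chunk_ and fileN.txt output names):
split -l 50000 filename.txt chunk_ # writes chunk_aa, chunk_ab, ... with 50000 lines each

# or generate the head/tail commands in a loop instead of a spreadsheet
i=1
start=50000
while [ "$start" -le 2000000 ]; do
    head -n "$start" filename.txt | tail -n 50000 > "file$i.txt"
    i=$((i + 1))
    start=$((start + 50000))
done
split is usually the simpler option; the loop rereads the file from the top for every chunk, so it gets slow on very large inputs.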
How can a text file be an entire gigabyte in size, let alone multiple gigabytes?
I get them all the time. Web server logs, when someone forgets to take out the debugging output, are one example.