calinb wrote: I booted antiX-base system from CD on my old PII and it worked.
Wow! I'm impressed! Glad to hear it.
X didn't boot [...]
No surprise.
I added"3" to the option line for the next boot and it worked.
Yes, that should work (unless something is broken). No need to race to login. I usually boot to runlevel 3. If I want to start X I run "telinit 5". I suggest using the "fdb+" cheat.
If it says"fdb+" is unrecognised then use"db+" instead. This will automatically log you in as root on vt2 -- vt4. The fdb+ version will also switch you to vt2 ("f" for fast) and it won't pause when the system shuts down (unless it is a live-cd/dvd). In addition you get a fancy Bash prompt which you can customize. Use"prompt-usage" for info. In antiX-17 we also tie the page-up and page-down keys in Bash to history search (by uncommenting the history-search lines in /etc/inputrc).
Then I'll try a frugal install but initial indications are that it should work. Thanks!
Excellent!
The frugal system should be faster than an installed system unless you add a lot of stuff to the file system and start running low on RAM. Accessing only the compressed linuxfs file improves performance by reducing seek times. If you can get a network connection with antiX-core, you should be able to add the wireless drivers you need and then remaster. You could also purge the stuff you don't need. I recommend using gzip compression for the new linuxfs file. It will be about 20% bigger, but it should be a little faster. OTOH, I imagine antiX-base will be fine and the performance optimization of starting with core may not be noticeable.
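To show where the compression choice comes in: if you were rebuilding the squashfs by hand rather than letting live-remaster do it, the command would look roughly like this (the directory and file names are only placeholders):

    mksquashfs /path/to/unpacked-root linuxfs.new -comp gzip

live-remaster normally takes care of this for you; the point is just that gzip vs xz is a single option chosen when the squashfs is created.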
Too many notes:
The compressed linux kernel loads faster than the uncompressed linux kernel because decompression is fast and can keep pace with reading data from the disk. There is a limit to this, though. We use gzip compression for the initrd but use xz for the linuxfs. You can notice the extra delay when decompressing an xz initrd versus a gzip initrd. The space savings from using xz on the initrd are small (20% x 4 Meg = 800K). OTOH, we get significant space savings from using xz for the linuxfs. It is slower than gzip, but the space savings, roughly 20%, are worth it. We try to take this into account when estimating sizes before we rebuild the linuxfs file in live-remaster.
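If you want to check which compression a particular image uses, something like this works on most systems (the paths are only examples):

    file initrd.gz                  # should report "gzip compressed data"
    unsquashfs -s antiX/linuxfs     # prints the superblock, including the compression type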
On Gentoo for many years I used xfs with the smallest block size possible for /usr/portage/ and got much better performance, which I believe was due to reduced seek times. Seek times are orders of magnitude longer than most other time scales that affect performance. They have not changed drastically in the last 30 years, going down only by a factor of 3 or 4. OTOH, the performance of CPUs has been totally transformed in that timespan. A lot of this was due to the introduction of SSE instructions (which is the difference between the Pentium-II and Pentium-III). The MFLOPS of a laptop I bought 10 years ago were better than the MFLOPS of a supercomputer (https://en.wikipedia.org/wiki/Convex_Computer) I worked on 20 years ago.
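For the curious, the small block size was just an option at mkfs time, roughly like this (the device name is a placeholder, and the minimum size your version of mkfs.xfs accepts may differ):

    mkfs.xfs -b size=1024 /dev/sdXn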
Many years ago I built some servers for the university where I was teaching. They got me a fancy system with 4 disks configured as RAID "so it would be really fast". I took apart the RAID and used the disks individually to boost performance. For example, I gave /var its own disk to reduce the seeks between writing log files and doing other things. If you listen closely, you can hear the difference.
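For what it's worth, putting /var on its own disk is nothing more exotic than a mount point; the fstab entry might have looked like this (the device and filesystem are placeholders from that era):

    /dev/sdb1   /var   ext3   defaults   0   2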