From Electron Cloud

I'm evaluating upstart, initng, runit, etc. Here's another idea I thought of along the way:

It has been pointed out (and is obvious in the last 2/3 or so of most bootcharts) that booting is mostly I/O-bound. Why is restoring from hibernation so fast? Probably because it's one contiguous block being loaded from disk into memory. So how about trying to somehow make the boot process load code into memory from contiguous disk blocks, in the order that the boot process needs the programs to be loaded?

I think this idea could be independent of the version of init or its replacement.

You simply need to make a log of the files that are loaded. Then, after booting is complete, make a tarball of those files, in the order that they were needed. Write an init wrapper which forks off a process (or even a thread) to untar the tarball and "seed" the page cache with those files (I hope there is a system call for that?). Then, once enough files have been seeded into the cache for init to begin, start it. Each file that the init process needs will already be in the cache, so the kernel will not need to read from the physical disk to satisfy anything which init is running. The other process or thread keeps running in the background, anticipating the next file that init will need.
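A minimal sketch of the cache-seeding step. The system call hoped for above does exist on Linux: `posix_fadvise(2)` with `POSIX_FADV_WILLNEED` (there is also `readahead(2)`). The file-list path and its one-path-per-line format are assumptions for illustration, not an existing log format:

```python
import os

def seed_cache(list_path):
    """Read a boot-order file list (one path per line, an assumed format)
    and ask the kernel to pull each file into the page cache ahead of time."""
    with open(list_path) as order:
        for line in order:
            path = line.strip()
            if not path or not os.path.isfile(path):
                continue  # skip blanks and files that no longer exist
            fd = os.open(path, os.O_RDONLY)
            try:
                # Hint the kernel to read the whole file into the page cache
                # asynchronously; length 0 means "to the end of the file".
                os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_WILLNEED)
            finally:
                os.close(fd)
```

Because the advice is asynchronous, the seeder can stay well ahead of init without blocking on each read.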

Because booting this way might change the order in which files are needed, the tarball should be rebuilt after each boot (in the background of course, with a high "nice" value, and maybe even delayed a few minutes so the user has a chance to start doing something useful first).
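The background rebuild could look something like this sketch. The function names and paths are illustrative; the one relevant property of `tarfile` is that members are stored in insertion order, so the archive comes out in exactly the order the boot process needed the files:

```python
import os
import tarfile

def rebuild_tarball(order_list, tarball_path):
    """Re-pack the boot files in the newly observed order, at low priority."""
    os.nice(19)  # drop to the lowest scheduling priority, staying out of the user's way
    tmp = tarball_path + ".new"
    with tarfile.open(tmp, "w") as tar:
        for path in order_list:
            if os.path.isfile(path):
                # Strip the leading "/" the way tar does; insertion order
                # is preserved, so extraction order matches boot order.
                tar.add(path, arcname=path.lstrip("/"))
    os.replace(tmp, tarball_path)  # atomic swap; the old tarball stays valid until now
```

Comparing the new order list against the previous one before calling this would also give the "stop rebuilding once the order is static" behavior described below.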

If you are using a parallel version of init, it doesn't make much difference: the files might be needed in a different order, but since we re-optimize the tarball order each time, equilibrium should be reached quickly. Once it becomes static (the order the files are needed in matches the tarball order), there is no need to rebuild the tarball anymore.

Well... it turns out Ubuntu already has readahead. But it doesn't use a tarball, so I imagine it is seeking all over the disk, and there is still room for improvement.

Another way would be simply to ensure that the files in readahead's list are arranged contiguously on the disk. XFS supposedly can do that if you change the attributes (chattr) of the directory containing the files so that the filestreams flag is set, which affects the allocator. So here's an idea: make hard links to the files in the readahead list in some directory with that flag set (say, /lib/readahead). I wonder whether setting the flag on that one directory would cause the files to actually be rearranged into contiguous space, even though they are also linked from other directories? (A pass with xfs_fsr might be necessary; or make a tarball, delete the files, and then restore them from the tarball.) But there is still the issue that the actual order on disk might not be the same as the order the boot process needs them in, even though they are contiguous, so there can still be some (smaller) seek latencies between reads.
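The hard-linking step itself is straightforward; a hard link is just a second directory entry for the same inode, so nothing is copied. This sketch assumes the readahead list is already in hand as a Python list, and the collision-avoiding naming scheme is my own invention:

```python
import os

def link_into(readahead_list, target_dir):
    """Hard-link every file in the readahead list into target_dir
    (e.g. /lib/readahead), so a per-directory allocator flag could
    apply to all of them."""
    os.makedirs(target_dir, exist_ok=True)
    for path in readahead_list:
        if not os.path.isfile(path):
            continue
        # Flatten /lib/libc.so.6 -> lib%libc.so.6 to keep link names unique
        link_name = path.lstrip("/").replace("/", "%")
        dest = os.path.join(target_dir, link_name)
        if not os.path.exists(dest):
            os.link(path, dest)  # hard link: same inode, new directory entry
```

Note that hard links cannot cross filesystems, so this only works if the readahead files and /lib/readahead live on the same XFS volume.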

For a system which is going to run X, I'd like to try the approach of finding reverse dependencies: assume that getting X up and running is the very first thing we want to do. Can we just do that? What else does it need to have running first? Most other daemons can then be started in the background, right? But there are bound to be some exceptions. For example, if the application(s) which you will run once X is up depend on dbus, dbus must be started first. Maybe some depend on having the network up (but let's hope not: just as in Windows, ethernet cables can be unplugged and plugged back in at any time, and software should be able to deal with that. I have had some success getting Gentoo to behave that way too.)
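The reverse-dependency question amounts to a depth-first walk of the service graph starting from the goal. A sketch, where the dependency table at the bottom is an illustrative guess and not a real service database:

```python
def must_start_first(goal, deps):
    """Return the goal's transitive dependencies in a valid start order:
    dependencies first, the goal itself last. Everything not in the
    returned list can be started in the background afterwards."""
    order, seen = [], set()

    def visit(svc):
        if svc in seen:
            return
        seen.add(svc)
        for dep in deps.get(svc, []):
            visit(dep)          # start what svc needs before svc itself
        order.append(svc)

    visit(goal)
    return order

# Hypothetical dependency graph for illustration only
deps = {"X": ["dbus", "udev"], "dbus": ["localfs"], "udev": ["localfs"]}
```

With that graph, `must_start_first("X", deps)` yields localfs before dbus and udev, with X last; every other daemon would be deferred to the background.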


  • here's a review of filesystems on Debian which would seem to indicate XFS or JFS would probably be the best ones for fast booting: [1]
  • sub-2-second booting on an ARM - this seems more like LinuxBIOS, using one instance of Linux to boot another