Revision as of 15:15, 22 October 2007

I'm evaluating upstart, init-ng, runit etc. Another idea I thought of along the way:

It has been pointed out (and is obvious in the last 2/3 or so of most bootcharts) that booting is mostly IO-bound. Why is restoring from hibernation so fast? Probably because it's one contiguous block being loaded from disk into memory. So how about trying to somehow make the boot process load code into memory from contiguous disk blocks, in the order that the boot process needs the programs to be loaded?

I think this idea could be independent of the version of init or its replacement.

You simply need to make a log of the files that are loaded. Then, after booting is complete, make a tarball of those files, in the order that they were needed. Write an init wrapper which forks off a process (or even a thread) to untar the tarball and "seed" the cache with those files. (I hope there is a system call for that?) Then, once enough files have been seeded into the cache for init to begin, start it. Each file that the init process needs will have already been loaded into cache, so the kernel will not need to read from the physical disk to satisfy anything which init is running. The other thread or process keeps running in the background, anticipating the next file that init will need.
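There is indeed a system call for the "seeding" part: on Linux, posix_fadvise(2) with POSIX_FADV_WILLNEED (or the Linux-specific readahead(2)) asks the kernel to start pulling a file into the page cache without blocking. A minimal sketch of the seeder, assuming the access log is a plain text file of paths in boot order (the name boot-files.list is made up for illustration):

```python
import os

def seed_cache(file_list_path):
    """Ask the kernel to pull each listed file into the page cache."""
    seeded = 0
    with open(file_list_path) as listing:
        for line in listing:
            path = line.strip()
            if not path:
                continue
            try:
                fd = os.open(path, os.O_RDONLY)
            except OSError:
                continue  # a file may have vanished since the log was made
            try:
                # offset=0, length=0 means "the whole file".  This is only a
                # hint: the call returns immediately while the kernel reads
                # the file into the page cache in the background.
                os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_WILLNEED)
                seeded += 1
            finally:
                os.close(fd)
    return seeded
```

In the wrapper this would run as the forked background process, e.g. seed_cache("/var/lib/boot-files.list"), while init itself starts as soon as the first batch is in.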

Because booting this way might change the order in which files are needed, the tarball should be rebuilt after each boot (in the background of course, with a high nice value, maybe even delayed a few minutes so the user has a chance to start doing something useful first).
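The rebuild step is straightforward, since tar stores members in the order they are added. A sketch, again assuming the hypothetical boot-files.list ordering log; the os.nice(19) call stands in for the "high nice value" above:

```python
import os
import tarfile

def rebuild_tarball(file_list_path, tarball_path):
    """Rebuild the boot tarball with its members in recorded access order."""
    os.nice(19)  # drop to lowest priority so we stay out of the user's way
    with tarfile.open(tarball_path, "w") as tar:
        with open(file_list_path) as listing:
            for line in listing:
                path = line.strip()
                if path and os.path.exists(path):
                    # tar preserves add order, which is exactly what the
                    # boot-time untar-and-seed process relies on
                    tar.add(path)
```

Since files needed in the same order as last boot cost nothing extra to re-add, the rebuild is cheap once the order has stabilized, matching the equilibrium argument below.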

If you are using a parallel version of init, it doesn't make much difference - the files might be needed in a different order but we are optimizing the tarball order each time, so equilibrium should be reached quickly. If it becomes static (the order that files are needed is the same as the tarball order) there is no need to rebuild the tarball anymore.

Well... turns out Ubuntu already has [http://packages.ubuntu.com/dapper/admin/readahead readahead]. But it doesn't use a tarball, so I imagine it is seeking all over the disk, and there is still room for improvement.

Another way would be to simply ensure that the files which readahead is going to read are arranged contiguously on the disk. XFS supposedly can do that if you [http://oss.sgi.com/projects/xfs/training/xfs_slides_06_allocators.pdf change the allocator] of the [http://archives.free.net.ph/message/20070613.052853.975ecad8.en.html directory containing the files] (with chattr) so that its filestreams flag is set. So here's an idea: make hard-links to the files in the readahead list in some directory with that flag set (say, /lib/readahead). I wonder whether setting the flag on one directory of links to those files would cause the files themselves to be re-arranged into contiguous space, even though they are also reachable through other directory entries? But there is still the issue that the actual order on disk might not be the order the boot process needs them in, even though they are contiguous. There can still be some (lesser) seek latencies between reads.
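The hard-link half of that experiment is easy to script. A sketch, assuming /lib/readahead already exists with the filestreams flag set (and again assuming the hypothetical ordered file list); whether the allocator then actually relocates the existing files is the open question above:

```python
import os

def link_into_readahead(file_list_path, link_dir="/lib/readahead"):
    """Hard-link every file in the readahead list into one directory.

    link_dir must be on the same filesystem as the files, since hard
    links cannot cross filesystems.
    """
    with open(file_list_path) as listing:
        for i, line in enumerate(listing):
            path = line.strip()
            if not path:
                continue
            # Prefix a sequence number so the directory listing mirrors the
            # order in which the boot process needs the files.
            name = "%05d-%s" % (i, path.strip("/").replace("/", "_"))
            try:
                os.link(path, os.path.join(link_dir, name))
            except OSError:
                pass  # missing file, duplicate link, or other filesystem
```

The same-filesystem restriction matters here: /lib/readahead only works as a link target if /lib is on the XFS volume holding the boot files.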

Links

  • here's a review of filesystems on Debian which would seem to indicate XFS or JFS would probably be the best ones for fast booting: [1]
  • sub-2-second booting on an ARM - this seems more like LinuxBIOS, using one instance of Linux to boot another