Creating a USB stick of the current system

Discussion in 'Linux' started by Monkey, Jul 23, 2008.

  1. nugroho2

    Joined:
    Oct 6, 2008
    Messages:
    50
    Likes Received:
    0
    4 GB is the maximum file size on a FAT32-formatted drive. You have to format the USB drive with a Linux filesystem, e.g. ext2.
     
    nugroho2, Nov 28, 2008
    #81
  2. jhedrotten

    Joined:
    Sep 11, 2008
    Messages:
    208
    Likes Received:
    0
    Location:
    Manila, Philippines
    I have tried doing this using DSL.

    I do not have a swap partition and have done a lot of mods already, but I keep my files on my 16 GB SDHC [got it prioritized by using tdp in aufs], and my image is only 2 GB. Still, it took about an hour and a half, which is my only problem.

    I used a 512 MB Kingston for DSL and a 160 GB FAT32-formatted WD My Passport Essential. I wonder what took it so long.
     
    jhedrotten, Nov 29, 2008
    #82
  3. nugroho2

    dd does not copy files, so the size of the files to be copied is not an issue; it copies the entire disk. But 1.5 hours seems rather long to me too. Are you copying the 16 GB SDHC or the 8 GB drive? It looks like you are copying the 16 GB card. If that's the case, then 1.5 hours is about right: 16 GB at a typical USB 2.0 flash-drive read speed of roughly 3 MB/s works out to about 90 minutes.
     
    nugroho2, Nov 30, 2008
    #83
  4. jhedrotten

    I tried it again on an 8 GB ext2-formatted partition on my portable hard disk.

    I am pretty sure it is the 8 GB drive I am trying to dd, because I mount it first to check the contents, and DSL sees it as hdc1, whereas DSL does not see the SDHC (maybe it does not recognize both card readers).

    It took about as long as my first try (using FAT32), and it ended with an input/output error, but when I checked hdimage.gz it is indeed 3.8 GB, which I think is accurate considering my mods.

    Any ideas about the input/output error? Is the image I have reliable? I don't care how long it takes; I could always sleep while doing it (lol), but I am bothered by that input/output error it shows after finishing. I deleted hdimage.gz and ran the command a third time on the 8 GB ext2 partition, and again it ended with an input/output error, but it gave me a 3.8 GB hdimage.gz nonetheless.

    And in case you want my exact command, it is (executed as root):

    dd if=/dev/hdc1 | gzip > /mnt/sdb1/backup/hdimage.gz

    Help, please.
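
    For reference, a minimal sketch of the same pipeline, run here against a scratch image file rather than a real partition, so the round trip can be verified safely. All file names in the sketch are invented for the demo; the real command used /dev/hdc1 as input.

```shell
#!/bin/sh
# Sketch of the dd | gzip backup pipeline from the post above, run against a
# scratch image file instead of a real device. disk.img, hdimage.gz and
# restored.img are invented names for this demo.
set -e

# Stand-in for the partition: a 4 MB zero-filled image file.
dd if=/dev/zero of=disk.img bs=1M count=4 2>/dev/null

# Back up: stream the "partition" through gzip.
dd if=disk.img bs=64k 2>/dev/null | gzip > hdimage.gz

# Restore and check that the round trip is bit-identical.
gunzip -c hdimage.gz > restored.img
cmp disk.img restored.img && echo "backup and restore match"
```

    One caveat worth knowing: dd stops at the first read error unless told otherwise (conv=noerror,sync), so an input/output error during a real backup can leave the image shorter than the source.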
     
    jhedrotten, Nov 30, 2008
    #84
  5. RockDoctor

    Joined:
    Aug 21, 2008
    Messages:
    963
    Likes Received:
    0
    Location:
    Minnesota, USA
    One thing I do before backing up a partition is to run GParted to shrink that partition to the absolute minimum. Why? So that when I restore it, I can restore it into a partition that may be somewhat smaller than the original. I haven't tried it, but I strongly suspect dd would do nasty things if I tried to restore what it thinks is 8 GB of material (when uncompressed) into a 7 GB partition. Once my partition is backed up (or restored), I use GParted to resize the partition to the desired size.
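
    A rough sketch of the shrink-to-minimum step on a scratch ext2 image file (fs.img is an invented name; this assumes the e2fsprogs tools, which are what GParted drives under the hood for ext2/ext3 resizes):

```shell
#!/bin/sh
# Shrink an ext2 filesystem to its minimum size before imaging it.
# fs.img is an invented scratch file standing in for a real partition.
# Requires e2fsprogs (mke2fs, e2fsck, resize2fs); no root needed on a file.
set -e

# Create a 16 MB ext2 filesystem inside a plain file.
dd if=/dev/zero of=fs.img bs=1M count=16 2>/dev/null
mke2fs -q -F fs.img

# resize2fs insists on a clean filesystem, so check it first.
e2fsck -f -p fs.img >/dev/null

# -M shrinks the filesystem to the minimum size that still holds its data.
resize2fs -M fs.img 2>/dev/null
```

    On a real disk there is a second step this sketch skips: after shrinking the filesystem, the partition boundary itself has to be moved to match, which is the part GParted handles.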
     
    RockDoctor, Nov 30, 2008
    #85
  6. jhedrotten

    Oh, so if your SSD partition only occupies 3.9 GB (as in my case) out of the 8 GB SSD, you run GParted to shrink that partition to exactly 3.9 GB? But wouldn't you need to defrag it first to avoid data loss? In my opinion, that is too risky.

    But my issue is weird. With that command, the partition is supposed to be gzipped. I had 3.9 GB of data in my SSD partition, and yet hdimage.gz is 3.8 GB; how come there is only a 100 MB difference between the actual partition and the compressed image, when others seem to get images of only around 2 GB? I would have no problem if gzip can only shave 100 MB off the original partition size, but why did the dd process end with an input/output error? It bugs me a lot, since I do not know whether the image I created would lead to a reliable restore if necessary. Any ideas?

    And would someone tell me how many GB they have occupied on the SSD and the size of their gzipped image? Just to figure things out. I have no idea how compressed these images get, nor whether my backup was successful, since it ended with an error.
     
    jhedrotten, Nov 30, 2008
    #86
  7. RockDoctor

    If we're talking Linux with ext2/ext3, no defragging is necessary (I'm not sure it's even possible). GParted won't let you shrink the partition beyond the minimum needed for your data.
    dd will copy the whole 8 GB, and gzip will try to compress the whole 8 GB. If you have leftover junk beyond your 3.9 GB of actual data, it's not going to be ignored (unless you shrink the partition first!).
    Idea #1: you ran out of temporary space needed for the copying/compression.
    Idea #2: the filesystem needs repairing; time to learn how to use fsck.
    These figures are from my HDD, but they should give you an idea: my 1.8-1.9 GB installs reduced to about 700 MB.
     
    RockDoctor, Nov 30, 2008
    #87
  8. jhedrotten

    Oh right, excuse my ignorance about defragmenting. Thanks for the good and fast answers. :D

    I'm sorry, but I have to ask a new set of questions:

    Which of my filesystems needs repairing, though: the external 2.5" HD (formatted ext2), or the SSD (ext2) as well?

    And on which drive could I have run out of space for temporary files? How come I am the only one who has experienced this?

    And maybe you could teach me a bit about fsck, or at least give me a useful link for reference. I only need to make a one-time backup so that I will not have to reinstall everything from scratch if something happens; how come this is too much to ask? Hahaha, just kidding :D
     
    jhedrotten, Nov 30, 2008
    #88
  9. RockDoctor

    I'd check both; without any additional info on the error, it's not clear what the cause was.
    Most likely the SSD.
    You're lucky.

    For help with fsck:
    Code:
    man e2fsck
    or
    Code:
    e2fsck --help
    Since you're working with ext2 filesystems, use the ext2 version of fsck. Also, it's generally not a good idea (meaning you should only do it in utter desperation) to run fsck on a mounted filesystem. Thirdly, for fsck to make any repairs, it needs to be run as root. Good luck!
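
    A safe, read-only way to get a feel for e2fsck is to try it against a scratch image file first (fs.img is an invented name; requires e2fsprogs). The -n flag answers "no" to every repair prompt, so nothing is modified:

```shell
#!/bin/sh
# Read-only e2fsck demo on a scratch ext2 image; fs.img is an invented name.
set -e

# Build a small ext2 filesystem in a plain file (no root needed for a file).
dd if=/dev/zero of=fs.img bs=1M count=8 2>/dev/null
mke2fs -q -F fs.img

# -f forces a full check even if the filesystem looks clean;
# -n opens it read-only and answers "no" to all repair prompts.
# On the real hardware this would be e2fsck -f -n /dev/hdc1, run as root
# and with the partition unmounted.
e2fsck -f -n fs.img
```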
     
    RockDoctor, Nov 30, 2008
    #89
  10. nugroho2

    My 8 GB disk was gzipped into a 4.2 GB file. I believe it is due to fragmentation of files.

    I use my USB disk to boot, and use one ext2 partition of my hard disk for the hdimage.gz.
    I am just wondering about two things:

    • 1. Can one make a good backup if the main SSD being copied is also used for booting?
    • 2. Is it not easier to just copy the whole SSD to the hard disk? 4 GB more of space is acceptable to me. I haven't tried that.

    As for the error, you should not just dismiss it.
     
    nugroho2, Nov 30, 2008
    #90
  11. jhedrotten


    Seems like a lot of work for me T_T

    I might not have the time. T_T

    So I might just use fwbackups; at least with that I only have to import my config and keep it :D

    Is it possible for me to run fsck from a Damn Small Linux installed on a 512 MB flash disk, so that I do not have to mount hdc1 and sdb1 (my SSD and external HD, respectively)? I think I'm going to give it one more try. Do you think I should resize the partition to its minimum first? Do you think that would help? :D
     
    jhedrotten, Nov 30, 2008
    #91
  12. RockDoctor

    On fragmentation: I believe you're wrong. It depends on how much space the filesystem was actually using and what types of files it contained.
    On backing up the SSD you boot from: yes, if you have a separate boot partition that contains just /boot; no otherwise.
    On whether it's not easier to copy the whole SSD: no (I think that's the grammatically correct answer, but read on). It is easier to just copy the whole SSD if you've got the space.
     
    RockDoctor, Nov 30, 2008
    #92
  13. nugroho2

    RockDoctor, thanks for the comments.

    I just do not understand the claim that in Linux there is no fragmentation. When I ran a clean install of the AA1 plus some apps, the size of hdimage.gz was around 2.7 GB and free space on the SSD was around 3.6 GB. Then I did a lot of installing and uninstalling of applications, and free space was around 3.2 GB. When I did the dd, hdimage.gz increased to 4.2 GB. I tried to zero out the unused space; no use, the size of the gzipped file was the same. That's why I believe this is caused by fragmentation.
     
    nugroho2, Dec 1, 2008
    #93
  14. RockDoctor

    Free space is free space, whatever the size of the chunks. It looks like you've got about 400 MB of extraneous files lying around after your install/uninstall efforts. When you do the dd, it takes the full 8 GB: all the files, plus all the random crap that is not in a file but was never zeroed out unless you did that explicitly. Random crap doesn't compress well.
     
    RockDoctor, Dec 1, 2008
    #94
  15. rbil

    Joined:
    Aug 14, 2008
    Messages:
    730
    Likes Received:
    0
    Location:
    The Wet Coast, Canada
    Indeed there is fragmentation on a hard drive running Linux. The thing is, that fragmentation isn't an issue with Linux. It's a true multi-user operating system, designed to accommodate fragmentation (along with being designed to keep fragmentation to a minimum).

    Disk fragmentation has NO bearing on what the dd command is doing. dd does a bit-by-bit copy, so where files are located is not the issue. The dd copy is run through gzip to compress the image as it's written. Any "unused" parts of the drive are also compressed as the bits are streamed from dd to gzip. If the "unused" parts of the drive contain only zeroes, the compression algorithm can be very efficient. If there are bits that are not zero, then gzip has to deal with them, resulting in a larger gz file. Zeroing out the "unused" parts of the drive can dramatically reduce the size of the resulting gz file.
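
    The zeroes-compress-well point is easy to demonstrate with two scratch files (names invented for the demo):

```shell
#!/bin/sh
# Compare how well gzip compresses all-zero data versus random data.
# zeros.bin and random.bin are invented scratch names.
set -e

dd if=/dev/zero    of=zeros.bin  bs=1M count=4 2>/dev/null
dd if=/dev/urandom of=random.bin bs=1M count=4 2>/dev/null

gzip -c zeros.bin  > zeros.bin.gz
gzip -c random.bin > random.bin.gz

# The zero-filled file compresses to a few KB; the random one barely shrinks.
ls -l zeros.bin.gz random.bin.gz
```

    This is also why the zero-fill trick works: writing a file of zeroes until the filesystem is full (dd if=/dev/zero of=zero.bin, then sync and delete zero.bin, as in the zero.bin step described later in this thread) overwrites the leftover junk in the free blocks, so the subsequent dd | gzip image shrinks dramatically.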

    Here is a simple explanation of the difference between a multi-user OS like Linux and Windows in terms of fragmentation:

    http://geekblog.oneandoneis2.org/index.php/2006/08/17/why_doesn_t_linux_need_defragmenting

    Cheers.
     
    rbil, Dec 1, 2008
    #95
  16. nugroho2

    Thanks for the explanation on fragmentation... really useful.

    But somehow I did not see a significant improvement in the gzipped file size after deleting the trash (for my user and for root) and zeroing out the "unused" parts of the drive as described earlier in this thread. I even tried different block sizes. But I have given up thinking about this, as I already have one gzipped copy on my hard disk :D
     
    nugroho2, Dec 1, 2008
    #96
  17. jhedrotten

    Yey, I got it working now.

    It seems all I needed to do was replace 'hdc1' with 'hdc', and then the records in and out match.

    Thanks to everyone who helped me. :D
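
    dd prints a "records in / records out" summary on stderr when it finishes, and matching counts are the quick sanity check referred to above. A scratch-file illustration (names invented for the demo):

```shell
#!/bin/sh
# Illustrate dd's "records in / records out" completion report.
# src.img, copy.img and report.txt are invented scratch names.
set -e

dd if=/dev/zero of=src.img bs=1M count=2 2>/dev/null

# dd writes its completion report to stderr; capture it to a file.
dd if=src.img of=copy.img bs=64k 2> report.txt
cat report.txt

# Matching record counts plus a byte-for-byte compare confirm a full copy.
cmp src.img copy.img && echo "copy verified"
```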
     
    jhedrotten, Dec 2, 2008
    #97
  18. nugroho2

    Just sharing a "weird" and wonderful experience using dd. Yesterday I was saved by this dd backup.

    I installed dosfstools.rpm for vfat formatting, ran GParted, modified a partition size and formatted a USB drive with FAT32. All went fine, but after half an hour Thunar suddenly showed weird behavior. I shut down the AA1 and restarted. It came up blank, all black, with only a big X as the mouse pointer. (This will be posted separately.)

    I used the restore function, and voila, my AA1 was alive again as it was one month ago. Then I tried several things. I did a backup again and got an hdimage.gz of 4.3 GB. Then I zeroed out the "unused" parts of the drive; however, I did not finish the process as I did in the past, but stopped it half-way with Ctrl-C. The zero.bin file, around 2 GB, was removed. I did a second zeroing out and ran the dd backup. The size of hdimage.gz is now 1.3 GB! I said this is impossible, as when I clean-reinstalled the AA1 two months ago the gzipped file was 2.7 GB.

    Then I tested the zipped file: I randomly put some files in several folders in the file system, reinstalled dosfstools, and restored the backup. The restore went well, and the AA1 ran properly from this 1.3 GB gzipped file; the added files were gone! Weird, but true.

    I do not have time to test whether the result would be the same if I had let the zeroing-out run to completion (until the "no more disk space" notice). In the past, it did not help reduce the zipped size. (Don't say I wrote the wrong commands, as the same command was used.)
     
    nugroho2, Dec 3, 2008
    #98
  19. rbil

    Not surprising, since when Acer imaged their originating SSD to make their recovery image, they didn't bother to zero the drive first. :)

    Cheers.
     
    rbil, Dec 3, 2008
    #99