Sunday, May 19, 2013

Nokia E71 to Samsung Galaxy S3 - Migrating phone book contacts


     I had to transfer my contacts from a Nokia E71 to a Samsung Galaxy S3 last month.  I couldn't find any software to do it in a straightforward way.  After a bit of digging around I found the following method.


Nokia E71

     Make sure you have a memory card inserted into the phone

     Go To:

Menu -> Communication -> Contacts -> Options 
     -> Mark/Unmark -> Mark all 
     -> Options -> Copy 
     -> To memory card

     This will copy all the contacts to the memory card.

     From here, you have two options: either send all contacts via Bluetooth to the S3, or move the memory card to the S3 and copy the contacts from there.  I chose the former.

     Switch on Bluetooth on both the devices and (preferably) pair them.

     Go To:
   
Menu -> Office -> File Manager

     Select the memory card by pressing the right navigation key

     Go To Folder: Other / Contacts

     All the contacts in your phone will be stored here as .vcf files; for each of your contacts you will find a corresponding .vcf file.

     From the Contacts directory, Go To:

Options -> Mark/Unmark -> Mark all

     Then

Options -> Send -> Via Bluetooth

     All your contacts will now be copied over to the S3 address book.  Sit back and enjoy.

Note:  On some other Nokia models, there is an option to mark all contacts from the Contacts menu and send them directly via Bluetooth.  Unfortunately this feature is missing on the E71, hence this workaround.

Thursday, March 14, 2013

Erase files and drives securely with shred

     Deleting files or formatting drives does not destroy the data; it just removes the pointers to the data.  This means it is possible to recover the data using sophisticated tools that look for data in a file system or hard drive without following pointers.  While this is good for recovering accidentally deleted files or formatted drives, it is definitely bad for sensitive data that you really want to destroy (financial data, passwords etc.).

     The way to erase data completely from a file or device is to overwrite it entirely with random data.  Repeating this multiple times reduces even the remote chance of recovering the data.

     In Linux there are many tools for this; we will examine the "shred" command here.  We can use it either for erasing a file or a drive.

* To erase a file

safeer@lin01:~$sudo /usr/bin/shred -n 10 -z -v /home/safeer/passwords.txt
/usr/bin/shred: /home/safeer/passwords.txt: pass 1/11 (random)...
/usr/bin/shred: /home/safeer/passwords.txt: pass 2/11 (111111)...
/usr/bin/shred: /home/safeer/passwords.txt: pass 3/11 (aaaaaa)...
......................output truncated............................
/usr/bin/shred: /home/safeer/passwords.txt: pass 9/11 (555555)...
/usr/bin/shred: /home/safeer/passwords.txt: pass 10/11 (random)...
/usr/bin/shred: /home/safeer/passwords.txt: pass 11/11 (000000)..


The options and their meanings:

-n 10 : Overwrite the file 10 times ( 10 passes ).
-z : After the specified passes, overwrite once more with all zeroes.  This helps hide the fact that the file/disk was shredded.
-v : Verbose output; shows the progress made so far in shredding.


This command will erase the contents of the file but will keep the file in place.  The file size may increase slightly ( shred rounds it up to a whole number of blocks ), but the data inside will be all gibberish.

Before shred:

safeer@lin01:~$ ls -l /home/safeer/passwords.txt
-rw-rw-r-- 1 safeer safeer 938848 Mar 13 23:55 /home/safeer/passwords.txt


After shred:

safeer@lin01:~$ ls -l /home/safeer/passwords.txt
-rw-rw-r-- 1 safeer safeer 942080 Mar 14 00:12 /home/safeer/passwords.txt


If you want to remove the file as well, use the "-u" option along with the command:

safeer@lin01:~$ sudo /usr/bin/shred -n 10 -z -v -u  /home/safeer/passwords.txt
/usr/bin/shred: /home/safeer/passwords.txt: pass 1/11 (random)...
.........output truncated.........
/usr/bin/shred: /home/safeer/passwords.txt: pass 11/11 (000000)...
/usr/bin/shred: /home/safeer/passwords.txt: removing
/usr/bin/shred: /home/safeer/passwords.txt: renamed to /home/safeer/00000
/usr/bin/shred: /home/safeer/00000: renamed to /home/safeer/0000
/usr/bin/shred: /home/safeer/0000: renamed to /home/safeer/000
/usr/bin/shred: /home/safeer/000: renamed to /home/safeer/00
/usr/bin/shred: /home/safeer/00: renamed to /home/safeer/0
/usr/bin/shred: /home/safeer/passwords.txt: removed


safeer@lin01:~$ ls -l /home/safeer/passwords.txt 
ls: cannot access /home/safeer/passwords.txt: No such file or directory

As you can see, the file is renamed multiple times before it is actually removed, to eliminate any trace of even the filename hanging around somewhere.
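
     If you have a whole directory of sensitive files, you can combine shred with find.  The directory name below is just an example; this is a minimal sketch of the same "-u" workflow applied recursively:

find /home/safeer/secrets -type f -exec shred -n 10 -z -u {} \;

     Note that shred works on regular files only; the empty directories left behind can be removed with a normal rm -r afterwards.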

* To erase a drive/disk

     We can't use the -u option here, as we don't want to delete the drive.  Also, based on the size of the drive, you might need to cut down the number of passes, as overwriting the whole drive/disk takes a lot of time ( a rough estimate is sketched after the example below ).

So this is how we do it:

safeer@lin02:~$ sudo /usr/bin/shred -v -n 2 -z /dev/sdb1
shred: /dev/sdb1: pass 1/2 (random)...
shred: /dev/sdb1: pass 1/2 (random)...55MiB/466GiB 0%
shred: /dev/sdb1: pass 1/2 (random)...95MiB/466GiB 0%
........
........
shred: /dev/sdb1: pass 1/2 (random)...466GiB/466GiB 100%
shred: /dev/sdb1: pass 2/2 (000000)...
shred: /dev/sdb1: pass 2/2 (000000)...795MiB/466GiB 0%
....
....
shred: /dev/sdb1: pass 2/2 (000000)...464GiB/466GiB 99%
shred: /dev/sdb1: pass 2/2 (000000)...465GiB/466GiB 99%
shred: /dev/sdb1: pass 2/2 (000000)...466GiB/466GiB 100%


     As you can see, I am using only two passes here as the disk I am shredding is 500 GB in size.  It took me almost 10 hours to complete the first pass, so choose your numbers wisely.
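
     If you want to pick the pass count in advance, you can roughly estimate the time per pass from the drive's sustained transfer speed.  The figures below are assumptions for illustration; hdparm measures read speed, and sequential write speed is usually in the same ballpark or lower.

sudo hdparm -t /dev/sdb              # device read timing; suppose it reports ~13 MB/sec
echo $(( 500 * 1024 / 13 / 3600 ))   # size(MB) / speed(MB/s) / 3600 = ~10 hours per pass

     Multiply that by the number of passes ( plus one more for -z ) to get the total runtime.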

Tuesday, February 19, 2013

Recover lost data from digital media using PhotoRec

     PhotoRec is a data recovery tool designed to recover lost files from digital media like memory cards, hard disks, and CD-ROMs.  It looks for traces and patterns of common file formats ( images, documents, archives, songs, videos etc. ) on the raw disk and recovers them.  It is capable of recovering files that were deleted or filesystems that were formatted.  We will see how to do this below.

    Though it is a command-line tool, most of PhotoRec's options are configured through an interactive UI.  The command-line syntax itself is pretty simple:

photorec [/log] [/debug] [/d recup_dir] [device|image.dd|image.e01]

/log - Write the recovery activity to the log file photorec.log in the current directory.
/debug - Enable debug mode while logging.
/d recup_dir - PhotoRec doesn't save the recovered files back to the drive ( which makes it safer to use ).  Instead it creates directories in the current directory and writes the files to them.  If no name is given, the directories will be named recup_dir.1, recup_dir.2 etc.  If a directory is specified with the /d option, that name will be used instead.  The PhotoRec UI also provides an option to select a parent directory ( instead of the current directory ) under which the recup_dir(s) will be created.

     PhotoRec can be called with or without an argument.  If called with an argument, it should be either a device ( /dev/<whatever> ) or a disk image file.  If no argument is given, PhotoRec will launch the UI, auto-detect the drives/devices, and prompt the user to select the drive to be recovered.

Running PhotoRec on my 16 GB pen drive, which was formatted recently:

safeer@lin01:~$sudo photorec /log /debug /d RECOVER_USB /dev/sdb

This will launch the PhotoRec UI in the shell.  Different options will be presented to the user on a series of screens; let us see what options are available on each screen.  Only the relevant parts of each screen are shown here.

* Screen 1 - Select device

Select a media (use Arrow keys, then press Enter):
>Disk /dev/sdb - 16 GB / 14 GiB (RO) - hp v220w

>[Proceed ]  [  Quit  ]


     The ">" symbol indicates your selection; use the up / down / right / left arrow keys to change it.  When all the needed options on a screen are selected, press "Enter" to proceed.

     Here I am selecting the USB disk.  It is already selected since I provided it as an argument to PhotoRec.  If the drive had not been provided in advance, an option to select from the available media in the system would pop up ( see below ).

safeer@lin01:~$sudo photorec /log /debug /d RECOVER_USB


Select a media (use Arrow keys, then press Enter):
>Disk /dev/sda - 250 GB / 232 GiB (RO) - HITACHI HTS723225L9SA61  FDE
 Disk /dev/sdb - 16 GB / 14 GiB (RO) - hp v220w

>[Proceed ]  [  Quit  ]


This screen is showing my laptop hard disk as well, and I have to use the down arrow key to select the USB drive.

* Screen 2 - Select partition


 Disk /dev/sdb - 16 GB / 14 GiB (RO) - hp v220w

     Partition                  Start        End    Size in sectors
      No partition             0   0  1 15295  63 32   31326208 [Whole disk]
> 1 P FAT32 LBA                1   0  1 15295  63 32   31324160 [BACKUP_USB_]

>[ Search ]  [Options ]  [File Opt]  [  Quit  ]


Here, PhotoRec will detect and list all the partitions within the selected drive ( in this case only one ).  You can select either a partition or the whole drive, depending on whether you know where exactly your lost files are.  At the bottom of the screen you can see the options Search / Options / File Opt.  Selecting Search will take you to the next screen.  Selecting Options will give you a few configurable parameters that affect how PhotoRec does the recovery.  File Opt is where you can select which types of files to look for; if you know what file type ( jpg, pdf etc. ) you are looking for, this option is very helpful.

* Screen 2.1 - Options

 Paranoid : Yes (Brute force disabled)
 Allow partial last cylinder : No
 Keep corrupted files : No
 Expert mode : No
 Low memory: No
>Quit


* Screen 2.2 - File Opt

PhotoRec will try to locate the following files

>[ ]      Own custom signatures
 [X] 1cd  Russian Finance 1C:Enterprise 8
 [X] 7z   7zip archive file
.......output truncated.........
 [X] dex  Dalvik
 [X] diskimage SunPCI Disk Image
    Next
Press s for default selection, b to save the settings
>[  Quit  ]


* Screen 3 - Select filesystem type

 1 P FAT32 LBA                1   0  1 15295  63 32   31324160 [BACKUP_USB_]

To recover lost files, PhotoRec need to know the filesystem type where the
file were stored:
 [ ext2/ext3 ] ext2/ext3/ext4 filesystem
>[ Other     ] FAT/NTFS/HFS+/ReiserFS/...


    Here you can select the filesystem type in which the files to be recovered were stored prior to deletion/formatting.  Press Enter after selecting.

* Screen 4 - Choose whether to analyse only the free space on the disk or the whole disk.


 1 P FAT32 LBA                1   0  1 15295  63 32   31324160 [BACKUP_USB_]


Please choose if all space need to be analysed:
>[   Free    ] Scan for file from FAT32 unallocated space only
 [   Whole   ] Extract files from whole partition



* Screen 5 - Destination directory selection

     This screen will be displayed only if the "/d" option was not given on the command line.  It will list all the directories under the current directory, and you can choose a destination from there.  If you want to use this option, it is better to create a destination directory in the current working directory in advance.

     Please select a destination to save the recovered files.
Do not choose to write the files to the same partition they were stored on.


Keys: Arrow keys to select another directory
      C when the destination is correct
      Q to quit
Directory /home/safeer
>drwx------  1000  1000     32768 14-Mar-2013 20:14 .
 drwxr-xr-x     0     0      4096 17-Jan-2013 20:22 ..
 drwx------  1000  1000      4096  1-Jan-2012 22:41 Audio
.......


* Screen 6.1 - Recovery - running phase

Disk /dev/sdb - 16 GB / 14 GiB (RO) - hp v220w
     Partition                  Start        End    Size in sectors
     No partition             0   0  1 15295  63 32   31326208 [Whole disk]


Pass 1 - Reading sector   15888406/31326208, 7653 files found
Elapsed time 0h06m00s - Estimated time to completion 0h05m49
cab: 6012 recovered
txt: 662 recovered
tx?: 353 recovered
exe: 351 recovered
mp3: 197 recovered
bmp: 38 recovered
ico: 11 recovered
gif: 7 recovered
doc: 4 recovered
chm: 2 recovered
others: 16 recovered

Stop


* Screen 6.2 - Recovery - final state

Disk /dev/sdb - 16 GB / 14 GiB (RO) - hp v220w
     Partition                  Start        End    Size in sectors
     No partition             0   0  1 15295  63 32   31326208 [Whole disk]


8214 files saved in RECOVER_USB directory.
Recovery completed.


All the recovered files will be saved under directories RECOVER_USB.*

safeer@lin01:~$ ls -ld RECOVER_USB*
drwxr-xr-x 2 root root  4096 Mar 14 11:27 RECOVER_USB.1
drwxr-xr-x 2 root root 20480 Mar 14 11:32 RECOVER_USB.10
drwxr-xr-x 2 root root 20480 Mar 14 11:32 RECOVER_USB.11
....output truncated..................
drwxr-xr-x 2 root root 20480 Mar 14 11:32 RECOVER_USB.7
drwxr-xr-x 2 root root 20480 Mar 14 11:32 RECOVER_USB.8
drwxr-xr-x 2 root root 20480 Mar 14 11:32 RECOVER_USB.9


To find the recovered files of a particular type, say jpg, use the find command:

safeer@lin01:~$ find RECOVER_USB.* -name "*.jpg"
RECOVER_USB.1/f0039072.jpg
RECOVER_USB.1/f0032720.jpg
RECOVER_USB.18/f31114640.jpg
RECOVER_USB.19/f0037024.jpg
RECOVER_USB.2/f0039072.jpg
RECOVER_USB.2/t0040768.jpg
RECOVER_USB.20/f0037024.jpg
RECOVER_USB.21/t0040768.jpg
RECOVER_USB.21/f0032720.jpg
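
     To get a quick summary of everything that was recovered, you can also count the files by extension.  This is just a convenience one-liner over the same RECOVER_USB.* directories:

safeer@lin01:~$ find RECOVER_USB.* -type f | sed 's/.*\.//' | sort | uniq -c | sort -rn

     Each output line shows a count followed by an extension, which should roughly match the per-type counts PhotoRec reported on the recovery screen.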

Tuesday, January 8, 2013

Netcat as a file downloader


     The netcat utility is a multi-purpose tool for managing and manipulating TCP/IP traffic.  In this article, we will see how netcat can be used as a file downloader.  This comes in handy when you don't have utilities like wget/fetch/curl installed on your machine.

     Netcat ( "nc" is the name of the binary ) can establish a TCP connection to any server/port combination and send or receive data through the established channel.  To use it as a downloader, our strategy will be:
  • Establish a connection to the HTTP port of the server.
  • Send an HTTP request for the download link over the established connection.
  • Redirect the output of the HTTP response to a file ( which will be the downloaded file ).
     Let us try downloading the Apache httpd package from the URL  http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.gz

     First, let us establish a TCP connection to port 80 of the server apache.techartifact.com.  The command for this is:

/bin/nc apache.techartifact.com 80

     Second, let us construct an HTTP request.  This can be done in two ways - using the HTTP/1.0 version of the protocol or the HTTP/1.1 version.

     A generic HTTP request format consists of:
  • A request line ( which further contains the request method - "GET" for download, the request URI - the whole/relative download URL, and the protocol version - HTTP/1.0 or HTTP/1.1 )
  • Multiple lines of HTTP headers ( each HTTP header is a single line containing a header name and header value separated by a colon and a space )
  • An empty line
  • Message body
     Each of these lines is terminated by a Carriage Return ( \r ) and a Line Feed ( \n ) character.

     Though there are many parts to an HTTP request, a bare-minimum HTTP request requires only the following:
  • A request line ( for both the HTTP/1.0 and HTTP/1.1 versions )
  • A Host header ( only for HTTP/1.1 ; the format is - Host: web.server.name )
  • A blank line
     All separated by CR and LF ( "\r" & "\n" )
  • The HTTP/1.0 request for our download URL is:  GET http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.gz HTTP/1.0\r\n\r\n
  • The HTTP/1.1 request for our download URL is:  GET http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.gz HTTP/1.1\r\nHost: apache.techartifact.com\r\n ( the final newline added by echo completes the blank line )
     When we send this request, the response from the server ( if everything is good and the file starts getting downloaded ) will begin with the line "HTTP/1.1 200 OK", followed by multiple header lines, then a blank line ( containing only "\r" ), followed by the response data ( which is the actual file to be downloaded ).  So while saving the response to a file, we should strip off the HTTP header part ( all lines between and including "HTTP/1.1 200 OK" and "\r" ).  This can be achieved with a simple sed command.

     To learn more about HTTP, see RFC 2616.

     Let us try downloading the file with HTTP/1.0:

safeer@penguinepower:/tmp$ echo -e "GET http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.gz HTTP/1.0\r\n\r\n"|nc apache.techartifact.com 80|sed '/^HTTP\/1.. 200 OK\r$/,/^\r$/d' > httpd-2.4.3-with-http-1.0.tar.gz

     Now with HTTP/1.1

safeer@penguinepower:/tmp$ echo -e "GET http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.gz HTTP/1.1\r\nHost: apache.techartifact.com\r\n"|nc apache.techartifact.com 80 | sed '/^HTTP\/1.. 200 OK\r$/,/^\r$/d' > httpd-2.4.3-with-http-1.1.tar.gz

     Let us also download the file with the wget utility directly:

safeer@penguinepower:/tmp$ wget -q http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.gz -O httpd-2.4.3-with-wget.tar.gz

     Now compare all the downloaded files to ensure they are all the same.

safeer@penguinepower:/tmp$ du -bs httpd-2.4.3-with-*
6137268 httpd-2.4.3-with-http-1.0.tar.gz
6137268 httpd-2.4.3-with-http-1.1.tar.gz
6137268 httpd-2.4.3-with-wget.tar.gz

safeer@penguinepower:/tmp$ md5sum httpd-2.4.3-with-*
538dccd22dd18466fff3ec7948495417  httpd-2.4.3-with-http-1.0.tar.gz
538dccd22dd18466fff3ec7948495417  httpd-2.4.3-with-http-1.1.tar.gz
538dccd22dd18466fff3ec7948495417  httpd-2.4.3-with-wget.tar.gz


Let us also verify the integrity of the downloaded files by comparing their md5 with the value given on the Apache website:

safeer@penguinepower:/tmp$ curl -s http://www.apache.org/dist/httpd/httpd-2.4.3.tar.gz.md5
538dccd22dd18466fff3ec7948495417 *httpd-2.4.3.tar.gz

Everything looks good now.

Note:  This command can download only from servers on which the file is actually located ( at the given port and path in the URL ).  I haven't tested the case where the actual file is behind a proxy and the download URL redirects you to the correct location ( with an HTTP 302 message ); that situation will need some more logic, roughly sketched below.
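
     An untested sketch of that extra logic: fetch only the headers first, pull the target out of the Location header, and then repeat the nc download against the new URL.  The server and path below are placeholders.

redirect=$(echo -e "GET http://some.server/file.tar.gz HTTP/1.0\r\n\r\n" | nc some.server 80 | awk '/^Location:/{ print $2 }' | tr -d '\r')
echo "now split $redirect into host and path, and re-run the echo | nc | sed pipeline against it"

     A real implementation should also loop, since redirects can chain.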





Friday, January 4, 2013

Finding the size of a remote file from its URL


Consider the following cases,

You are about to download a file from the web and, before downloading, you want to know the size of that remote file

                           OR

You have downloaded a file earlier, but you are doubtful whether the file was downloaded fully or not.

From a browser like Firefox or Chrome, you can go to the download URL and a window will pop up asking you whether to save or open the file.  The same popup will also mention the size of the remote file.  While this is one way of doing it, in many cases you might want to do this for multiple files, save the information in a report, or use it inside another script or task.  In such scenarios it is desirable to do this from the command line, and the following will show you how.

To do this, we should have the curl utility installed; on my machine it is installed as "/usr/bin/curl".  We will first see the command and then the explanation.

safeer@penguinpower:~$ curl -sI DOWNLOAD_URL | awk '/Content-Length/{ print $2 }' | tr -d '\r'

This will give you the remote file size in bytes ( the tr -d '\r' strips the carriage return that terminates each HTTP header line ).

The basic idea is to process the HTTP headers associated with the download link, without actually downloading the file.

The HTTP header of a URL contains useful information like the return status of your request, server details, content type, content length etc.  In this case, we are interested in the "Content-Length" field, which provides the size of the remote content ( the downloadable file in this case ) in bytes.



Let us examine a typical HTTP header.  This header is for the download URL of the Apache HTTPD web server.  We use curl with the -s ( silent ) and -I ( fetch the headers only ) flags to obtain it:

safeer@penguinpower:/tmp$ curl -sI http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.bz2
HTTP/1.1 200 OK
Date: Sun, 16 Dec 2012 19:46:14 GMT
Server: Apache/2.2.23 (Unix) mod_ssl/2.2.23 OpenSSL/0.9.8e-fips-rhel5 DAV/2 mod_mono/2.6.3 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_jk/1.2.35 mod_qos/9.74 mod_perl/2.0.6 Perl/v5.8.8
Last-Modified: Mon, 20 Aug 2012 13:22:55 GMT
ETag: "8708060-4591af-4c7b2684fa9c0"
Accept-Ranges: bytes
Content-Length: 4559279
Connection: close
Content-Type: application/x-tar


There are multiple fields in the header as you can see, but our interest is in the Content-Length field, which has a value of 4559279.  So, as per the header, the size of the httpd package is 4559279 bytes.

Let us cross verify by downloading the file.

safeer@penguinpower:/tmp$ wget -q http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.bz2
safeer@penguinpower:/tmp$ du -b httpd-2.4.3.tar.bz2

4559279 httpd-2.4.3.tar.bz2

Well, the file size is indeed 4559279 bytes.


Now we know how to extract the content length from the HTTP header.  But is that all?  What if the web server is functioning but the URL is not available, or some other issue prevents you from getting a proper response?  This may not be a problem when you are actually looking at the terminal while the command is running, but within a script or report it is not a good idea to rely on that.

To solve this, we first check the HTTP status code, i.e. the first line of the header, to see if the response is 200 ( OK ).  Only then do we return the content length; otherwise we return a negative value so that your script / report can identify the failure.  In this case that value will be the negative of the HTTP response code ( when different from 200 OK ), so the response code will tell you whether the lookup failed and for what reason.  This helps in designing the fail-safe logic of your script.

safeer@penguinpower:/tmp$ curl -sI http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.bz2 | awk '/^HTTP\/1./{if ( $2 == 200 ) { do { getline } while ( $1 != "Content-Length:" ) ; print $2 } else { print "-"$2 } }' | tr -d '\r'
The awk script first looks for the HTTP status line.  If the status code is 200, the getline function advances the current record to the next line until the first field of the current line is "Content-Length:", then prints the second field of that line, which is the content length in bytes ( the trailing tr -d '\r' removes the carriage return, as before ).  Otherwise, the script prints the negative of the HTTP status code.

Note: In our script's logic we still need to take care of curl errors, where there won't be any response from the web server at all ( no DNS, no network connection etc. ).  Use the exit code of curl ( non-zero on failure ) to find that out.
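
     Putting the pieces together, here is a minimal sketch of a reusable shell function.  The function name and the -1 convention for curl failures are my own choices for illustration:

remote_size() {
    local url="$1" headers
    # if curl itself fails ( dns, network etc. ), return -1
    headers=$(curl -sI "$url") || { echo "-1"; return 1; }
    echo "$headers" | tr -d '\r' | awk '
        /^HTTP\/1/         { code = $2 }    # the status line
        /^Content-Length:/ { len  = $2 }    # the size in bytes
        END { if ( code == 200 ) print len; else print -code }'
}

remote_size http://apache.techartifact.com/mirror/httpd/httpd-2.4.3.tar.bz2   # should print 4559279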