Cmds

mtu

ifconfig eth0 mtu 1400  # e.g. 1360; the default is 1500

dpkg-reconfigure

dpkg-reconfigure kdm
dpkg-reconfigure gdm

rfkill

# ifconfig wlan0 up
SIOCSIFFLAGS: Operation not possible due to RF-kill
# rfkill list
0: phy0: Wireless LAN
        Soft blocked: yes
        Hard blocked: no
# rfkill unblock 0
# rfkill list
0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
# ifconfig wlan0 up

List user and group names

getent passwd | awk -F':' '{ print $1}'
getent passwd | awk -F: '{print $1}' | while read name; do groups $name; done
kuser (KDE User Manager)

Run Wireshark with packet-capture privileges

http://wiki.wireshark.org/CaptureSetup/CapturePrivileges

setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
groupadd wireshark
usermod -a -G wireshark or    # "or" is the local username here
chgrp wireshark /usr/bin/dumpcap
chmod 4750 /usr/bin/dumpcap
dpkg-reconfigure wireshark-common

  ┌────────────────────────────────────────────────────────────────┤ Configuring wireshark-common ├─────────────────────────────────────────────────────────────────┐
  │                                                                                                                                                                 │
  │ Dumpcap can be installed in a way that allows members of the "wireshark" system group to capture packets. This is recommended over the alternative of running   │
  │ Wireshark/Tshark directly as root, because less of the code will run with elevated privileges.                                                                  │
  │                                                                                                                                                                 │
  │ For more detailed information please see /usr/share/doc/wireshark-common/README.Debian.                                                                         │
  │                                                                                                                                                                 │
  │ Enabling this feature may be a security risk, so it is disabled by default. If in doubt, it is suggested to leave it disabled.                                  │
  │                                                                                                                                                                 │
  │ Should non-superusers be able to capture packets?                                                                                                               │
  │                                                                                                                                                                 │
  │                                                 <Yes>                                                    <No>                                                   │
  │                                                                                                                                                                 │
  └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
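
A quick way to verify the setup (assuming the capabilities approach above, and that getcap from libcap2-bin is installed) is to check the user's groups and the capabilities set on dumpcap. Note that a new group membership only takes effect after logging in again (or via newgrp):

$ groups <username>         # should list "wireshark"
$ getcap /usr/bin/dumpcap   # should show the cap_net_raw / cap_net_admin capabilities
$ newgrp wireshark          # pick up the new group in the current session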

Install, Remove, Purge and Get Info about Packages

To install a package from a .deb file

dpkg -i package-file-name

To remove (uninstall) a package

dpkg -r package-name

To purge a package (remove it together with its configuration files)

dpkg -P package-name

To get info about a package

dpkg -l | grep 'package-name'

Create A Local Debian Mirror With apt-mirror

http://www.howtoforge.com/local_debian_ubuntu_mirror

apt-get install apt-mirror

vim /etc/apt/mirror.list

        set base_path    /mnt/sdc1/OR/apt-mirror
        # set mirror_path  $base_path/mirror
        # set skel_path    $base_path/skel
        # set var_path     $base_path/var
        # set cleanscript $var_path/clean.sh
        # set defaultarch  <running host architecture>
        # set postmirror_script $var_path/postmirror.sh
        # set run_postmirror 0
        set nthreads     20
        set _tilde 0
        deb http://172.16.1.210/repo/debian testing  main contrib non-free # 32 bit
        deb-amd64 http://172.16.1.210/repo/debian testing  main contrib non-free  # 64 bit
        # set cleanscript $var_path/clean.sh
        clean http://172.16.1.210/repo/debian

su - apt-mirror -c apt-mirror

/mnt/sdc1/OR/apt-mirror/var/clean.sh

Named pipe

In computing, a named pipe (also known as a FIFO for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC).

The concept is also found in Microsoft Windows, although the semantics differ substantially.

A traditional pipe is “unnamed” because it exists anonymously and persists only for as long as the process is running.

A named pipe is system-persistent and exists beyond the life of the process and must be deleted once it is no longer being used.

Processes generally attach to the named pipes (usually appearing as a file) to perform inter-process communication.

Instead of a conventional, unnamed, shell pipeline, a named pipeline makes use of the filesystem.

It is explicitly created using mkfifo() or mknod(), and two separate processes can access the pipe by name: one process opens it as a reader and the other as a writer.

mkfifo /tmp/testfifo
tail -f /tmp/testfifo

and in another console:

echo HELLO! > /tmp/testfifo

Give a non-root process the privilege to bind to ports under 1024

setcap 'cap_net_bind_service=+ep' $(readlink -f `which python`)
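
For example, assuming the capability was applied to a Python 3 binary, a non-root user can then bind to a privileged port directly:

$ python -m http.server 80    # serves the current directory on port 80 without root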

How do I test whether a number is prime?

http://www.madboa.com/geek/openssl/#prime-test

$ openssl prime 119054759245460753
1A6F7AC39A53511 is not prime

You can also pass hex numbers directly.

$ openssl prime -hex 2f
2F is prime

Redirect output to null

$ echo 123 >/dev/null 2>&1

cron

You do not have to restart cron after every change, because cron checks its crontab files for changes automatically. To restart it anyway:

$ service cron restart    # the service is named "crond" on RHEL/CentOS

Display the current crontab:

$ crontab -l

Edit the current crontab:

$ crontab -e

Syntax of crontab (field description)

* * * * * /path/to/command arg1 arg2

* * * * * command to be executed
- - - - -
| | | | |
| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)

How do I use operators?

An operator allows you to specify multiple values in a field. There are four operators:

The asterisk (*):

This operator specifies all possible values for a field. For example, an asterisk in the hour time field would be equivalent to every hour or an asterisk in the month field would be equivalent to every month.

The comma (,):

This operator specifies a list of values, for example: “1,5,10,15,20,25”.

The dash (-):

This operator specifies a range of values, for example: “5-15” days, which is equivalent to typing “5,6,7,8,9,...,13,14,15” using the comma operator.

The separator (/):

This operator specifies a step value. For example, “0-23/2” can be used in the hours field to run a command every other hour. Steps are also permitted after an asterisk, so if you want to say every two hours, just use */2.
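
Putting the operators together, a few illustrative crontab entries (the paths are just placeholders):

*/15 * * * *    /path/to/command    # every 15 minutes
0 2 * * *       /path/to/command    # every day at 02:00
30 8 1,15 * *   /path/to/command    # at 08:30 on the 1st and 15th of each month
0 9-17 * * 1-5  /path/to/command    # on the hour from 09:00 to 17:00, Monday to Friday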

Resources:

http://crontab.guru/

Generate random base64 characters

$ openssl rand -base64 741

SSH

# dynamic port forwarding: acts as a SOCKS5 proxy on local port 8080
$ ssh -D 8080 user@remote_host
# local port forwarding: local port 8080 -> port 80 on the remote host
$ ssh -L 8080:localhost:80 user@remote_host
# forward local port 8080 to a program listening on the remote host, e.g. TinyProxy on 8888 (-N: do not run a remote command)
$ ssh -N user@remote_host -L 8080:localhost:8888
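
To check that the dynamic (-D) tunnel works, one option is to point a SOCKS-aware client at it, for example curl (the URL here is just a placeholder):

$ curl --socks5-hostname 127.0.0.1:8080 http://example.com/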

Set Socket Buffer Sizes

# sysctl -w net.core.rmem_max=2096304
# sysctl -w net.core.wmem_max=2096304
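
These settings are lost on reboot. To read the current values and make the change persistent, one common approach is to add the settings to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reload:

# sysctl net.core.rmem_max net.core.wmem_max    # show current values
# echo 'net.core.rmem_max=2096304' >> /etc/sysctl.conf
# echo 'net.core.wmem_max=2096304' >> /etc/sysctl.conf
# sysctl -p                                     # reload /etc/sysctl.conf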

Ping

-s packetsize

Specifies the number of data bytes to be sent. The default is 56, which translates into 64 ICMP data bytes when combined with the 8 bytes of ICMP header data (28 bytes of total header overhead once the 20-byte IP header is included).

-M pmtudisc_opt

Select Path MTU Discovery strategy. pmtudisc_option may be either do (prohibit fragmentation, even local one), want (do PMTU discovery, fragment locally when packet size is large), or dont (do not set DF flag).

# ping -c 1 -M do -s 1472  google.com
PING google.com (173.194.113.167) 1472(1500) bytes of data.
1480 bytes from www.google.com (173.194.113.167): icmp_seq=1 ttl=42 time=262 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 262.920/262.920/262.920/0.000 ms

Secure copy

$ scp -r Prj username@remote_ip:/directory/path/in/remote/ip/

Change owner of directory

$ chown -R or:or .

Locate/print block device attributes

# blkid
/dev/sda6: UUID="2fc31bf0-68f1-4566-975b-cb995277db10" TYPE="swap"
/dev/sda1: UUID="ec3c1569-29bb-4a63-bd75-337c57c7b600" TYPE="ext4"

Create a new UUID value

$ uuidgen
d2ad5b28-b306-4096-aca2-dd66c37da5af

Create a new ssh key

$ ssh-keygen -t rsa -C "mail@example.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/or/.ssh/id_rsa): /home/or/.ssh/bitbucket_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/or/.ssh/bitbucket_rsa.
Your public key has been saved in /home/or/.ssh/bitbucket_rsa.pub.
$ ssh-add ~/.ssh/bitbucket_rsa
$ vim ~/.ssh/config
IdentityFile ~/.ssh/bitbucket_rsa
$ chmod 400 ~/.ssh/bitbucket_rsa

Run a process in the background so it never dies

$ nohup node server.js > /dev/null 2>&1 &
$ ./run.py > /dev/null 2>&1 &
  1. nohup means: Do not terminate this process even when the controlling terminal (tty) is closed.
  2. > /dev/null means: stdout goes to /dev/null (which is a dummy device that does not record any output).
  3. 2>&1 means: stderr also goes to the stdout (which is already redirected to /dev/null).
  4. & at the end means: run this command as a background task.

Eject CD/DVD-ROM

eject - eject removable media

$ eject
$ eject -t
-t
With this option the drive is given a CD-ROM tray close command. Not all devices support this command.

Search for a package

$ apt-cache search package_name

Unmount a CD-ROM device that gives a “device is busy” error

# umount /cdrom
# fuser -km /cdrom    # kill the processes that are using the mount point
# umount -l /cdrom    # lazy unmount as a last resort

Log in to an FTP server with a username and password

$ ftp ftp://username:password@my.domain.com

Debug SSH

# ssh -vT root@127.0.0.1

Detect ssh authentication types available

ssh -o PreferredAuthentications=none   127.0.0.1
Permission denied (publickey,password).

ssh -o PreferredAuthentications=none   127.0.0.2
Permission denied (publickey).

ssh -o PreferredAuthentications=none   127.0.0.3
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

http://stackoverflow.com/questions/3585586/how-can-i-programmatically-detect-ssh-authentication-types-available

Avoid SSH’s host verification for known hosts?

ssh -o "StrictHostKeyChecking no" 127.0.0.1

http://superuser.com/questions/125324/how-can-i-avoid-sshs-host-verification-for-known-hosts

Set environment variables on linux

$ export PATH=${PATH}:/home/or/bin

Base64 decode encode

or@debian:~$ echo 'Test' | base64
VGVzdAo=

or@debian:~$ echo 'Test' | base64  | base64 -d
Test

Extract compressed files

# Decompress a file that was created with the gzip command.
# The file is restored to its original form.
$ gzip -d mydata.doc.gz
$ gunzip mydata.doc.gz

# Decompress a file that was created with the bzip2 command.
# The file is restored to its original form.
$ bzip2 -d mydata.doc.bz2
$ bunzip2 mydata.doc.bz2

# Extract compressed files in a ZIP archive.
$ unzip file.zip
$ unzip data.zip resume.doc

# Untar (extract) archives created with tar and compressed through the gzip or bzip2 filter
$ tar -zxvf data.tgz
$ tar -zxvf pics.tar.gz *.jpg
$ tar -jxvf data.tbz2

# Extract a tar archive into another directory
$ tar -xvf archive.tar -C /target/directory

# Show compression info (name, sizes, ratio) for a gzip file
$ gzip -l mydata.doc.gz

# List files from a ZIP archive
$ unzip -l mydata.zip

# List files from a TAR archive
$ tar -ztvf pics.tar.gz
$ tar -jtvf data.tbz2

# To decompress a file that is only compressed with bzip2, use
$ bunzip2 filename.bz2

# To extract archives compressed as .tar.bz2, use
$ tar -xvjpf filename.tar.bz2

# To decompress files compressed with .gz, use
$ gunzip file.doc.gz

Options for tar files:

Type at the command prompt

tar xvzf file-1.0.tar.gz    # uncompress a gzipped tar file (.tgz or .tar.gz)
tar xvjf file-1.0.tar.bz2   # uncompress a bzip2 tar file (.tbz or .tar.bz2)
tar xvf file-1.0.tar        # extract an uncompressed tar file (.tar)

x = extract
c = create (to create an archive)
v = verbose (optional): the files with relative locations will be displayed
z = gzipped
j = bzip2-zipped
f = from/to file ... (what follows the f is the archive file)

The files will be extracted in the current folder. HINT: if you know that a file has to be in a certain folder, move to that folder first. Then download, then uncompress, all in the correct folder. Yes, I’m lazy... no, I don’t like to copy files between directories and then delete others to clean up. Download them into the correct directory and save yourself two jobs.

List All Environment Variables

$ env

$ printenv

$ printenv | less

$ printenv | more

Set Environment variable

$ export MY_VAR="my_val"

Set proxy in command line

$ export http_proxy="http://127.0.0.1:8080"
$ export https_proxy="https://127.0.0.1:8080"
$ export ftp_proxy="http://127.0.0.1:8080"

How can you completely remove a package?

http://askubuntu.com/questions/151941/how-can-you-completely-remove-a-package

$ sudo apt-get purge package_name

This does not remove packages that were installed as dependencies when you installed the package you’re now removing.

Assuming those packages aren’t dependencies of any other packages, and that you haven’t marked them as manually installed, you can remove the dependencies with:

$ sudo apt-get autoremove

or (if you want to delete their systemwide configuration files too):

$ sudo apt-get --purge autoremove

How to forward X over SSH from Ubuntu machine ...

http://unix.stackexchange.com/questions/12755/how-to-forward-x-over-ssh-from-ubuntu-machine

X11 forwarding needs to be enabled on both the client side and the server side.

On the client side, the -X (capital X) option to ssh enables X11 forwarding, and you can make this the default (for all connections or for a specific connection) with ForwardX11 yes in ~/.ssh/config.

On the server side, edit the /etc/ssh/sshd_config file and uncomment the following line:

X11Forwarding yes

The xauth program must be installed on the server side.

$ aptitude install xauth

After making this change, you will need to restart the SSH server. To do this on most UNIX systems, run:

$ /etc/init.d/sshd restart    # on Debian/Ubuntu the init script is named "ssh"

To confirm that ssh is forwarding X11, check for a line containing Requesting X11 forwarding in the output:

$ ssh -v -X USER@SERVER

Note that the server won’t reply either way.
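
Once forwarding works, a simple test is to launch a small X client over the connection (any installed X program will do; xclock is just an example):

$ ssh -X USER@SERVER xclock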

SOCKS server and/or client

http://www.delegate.org/delegate/SOCKS/

http://ajitabhpandey.info/2011/03/delegate-a-multi-platform-multi-purpose-proxy-server/

Download delegate from http://delegate.hpcc.jp/anonftp/DeleGate/bin/linux/latest/ and extract it.

Then run binary file as:

Run a Http proxy that is connected to a socks:

$ ./dg9_9_13 -P8080 SERVER=http SOCKS=127.0.0.1:9150 ADMIN="local@localhost.com"

SSH hangs on debug1: expecting SSH2_MSG_KEX_ECDH_REPLY

Edit /etc/ssh/ssh_config, uncomment the following lines

Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
GSSAPIAuthentication yes
GSSAPIDelegateCredentials no
MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160

Add the following line:

HostKeyAlgorithms ssh-rsa,ssh-dss

Lowering the MTU may also help:

ifconfig eth0 mtu 578

http://superuser.com/questions/699530/git-pull-does-nothing-git-push-just-hangs-debug1-expecting-ssh2-msg-kex-ecd

What will this command do?

$ exec 2>&1

File descriptor 1 refers to stdout and file descriptor 2 refers to stderr.

exec 2>&1 redirects stderr to wherever stdout currently points, so both streams end up in the same place.

When you run a program, you’ll get the normal output on stdout, but any errors or warnings usually go to stderr. If you want to pipe all output to a file, for example, it’s useful to first combine stderr with stdout using 2>&1.
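
A common pattern in scripts, sketched below, is to redirect everything the script prints (both stdout and stderr) into a log file near the top of the script (the log path is just an example):

#!/usr/bin/env bash
# Send all further output of this script to a log file.
exec > /tmp/myscript.log 2>&1

echo "this goes to the log"   # stdout -> log file
ls /nonexistent               # the error message also goes to the log file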

http://stackoverflow.com/questions/1216922/sh-command-exec-21

http://www.catonmat.net/blog/bash-one-liners-explained-part-three/

Sample guake script

$ vim /home/or/workspace/bin/start.guake.sh

guake -r "or";
guake -n New_Tab -r "root"; -e "su";
guake -n New_Tab  -r "Ipython 2" -e "ipython";
guake -n New_Tab  -r "workspace" -e "cd /home/or/workspace/;clear;";
guake -n New_Tab  -r "prj" -e "cd /home/or/workspace/prj/;clear;";

$ chmod +x /home/or/workspace/bin/start.guake.sh

Verify that apt is pulling from the right repository

$ apt-cache policy <Package-Name>

Example:

$ apt-cache policy docker-engine

Output:

Installed: 1.9.1-0~stretch
Candidate: 1.9.1-0~stretch
Version table:
*** 1.9.1-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
        100 /var/lib/dpkg/status
 1.9.0-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.8.3-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.8.2-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.8.1-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.8.0-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.7.1-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.7.0-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.6.2-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.6.1-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.6.0-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
 1.5.0-0~stretch 500
        500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages

Operation not permitted on a file, even with root access

# ls -la   /etc/resolv.conf
-r--r--r-- 1 root root 56 Jan  7 22:39 /etc/resolv.conf

# chmod u+rwx  /etc/resolv.conf
chmod: changing permissions of ‘/etc/resolv.conf’: Operation not permitted

# lsattr /etc/resolv.conf
----i--------e-- /etc/resolv.conf

# chattr -i  /etc/resolv.conf
# lsattr /etc/resolv.conf
-------------e-- /etc/resolv.conf

How To Add User

$ sudo adduser <username>

How To Delete a User

$ sudo deluser <username>
$ sudo userdel <username>

If, instead, you want to delete the user’s home directory when the user is deleted, you can issue the following command as root:

$ sudo deluser --remove-home <username>
$ sudo deluser -r <username>
$ sudo userdel -r <username>

Changing User Password

$ sudo passwd <username>

Allowing other users to run sudo

$ sudo adduser <username> sudo
$ vim /etc/sudoers
    # User privilege specification
    root    ALL=(ALL:ALL) ALL
    or      ALL=(ALL:ALL) ALL
    # Allow members of group sudo to execute any command
    %sudo   ALL=(ALL:ALL) ALL

http://askubuntu.com/questions/7477/how-can-i-add-a-new-user-as-sudoer-using-the-command-line

https://help.ubuntu.com/community/RootSudo#Allowing_other_users_to_run_sudo

How to delete a group

$ groupdel group

http://www.computerhope.com/unix/groupdel.htm
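
If the goal is only to remove a user from a single group (without deleting the group itself), deluser can do that too on Debian-based systems:

$ sudo deluser <username> <group>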

Remove sudo privileges from a user (without deleting the user)

$ sudo deluser username sudo

http://askubuntu.com/a/335989

rsync and sudo over SSH

To run rsync with root permission on the remote machine, add a NOPASSWD line for all commands to /etc/sudoers.

Note:
Put the line after all other lines in the sudoers file! I first added it after the other user configurations, but it only worked when placed as the very last line of the file on Lubuntu 14.04.1.
$ sudo visudo
    <username> ALL=(ALL) NOPASSWD: ALL

http://stackoverflow.com/questions/21659637/how-to-fix-sudo-no-tty-present-and-no-askpass-program-specified-error

How to backup with rsync

$ rsync -avz -e ssh --rsync-path="sudo rsync" <username>@<remote_host>:/path/on/remote/host/to/backup /path/on/local/host/to/save/backup

Using rsync for local backups

$ rsync -av --delete /Directory1/ /Directory2/
-a archive mode: recursive (recurse into directories), links (copy symlinks as symlinks), perms (preserve permissions), times (preserve modification times), group (preserve group), owner (preserve owner), and preserve device and special files.
-v verbose. The reason verbose is important is so you can see exactly what rsync is backing up. Think about this: what if your hard drive is going bad and starts deleting files without your knowledge, and then you run your rsync script and it pushes those changes to your backups, thereby deleting all instances of a file that you did not want to get rid of?
--delete
This tells rsync to delete any files that are in Directory2 that aren’t in Directory1. If you choose to use this option, I recommend also using the verbose option, for the reasons mentioned above.
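
Because --delete is destructive, it can be worth doing a dry run first; -n (--dry-run) makes rsync only report what it would transfer or delete:

$ rsync -avn --delete /Directory1/ /Directory2/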

Full Daily Backup with Syncing Hourly Backup

0 */2 * * * hourly_sync_backup.sh           # runs every 2 hours
0 */8 * * * daily_full_archive_backup.sh    # runs every 8 hours

$ vim hourly_sync_backup.sh
    rsync -avz -e ssh --rsync-path="sudo rsync" <username>@<remote_host>:/path/on/remote/host/to/backup /path/on/local/host/to/save/hourly_sync_backup

$ vim daily_full_archive_backup.sh
    rsync -avz -e ssh --rsync-path="sudo rsync" <username>@<remote_host>:/path/on/remote/host/to/backup /path/on/local/host/to/save/daily_full_archive_backup
    tar -P -cvjf /path/on/local/host/to/save/archives/daily_full_archive_backup_$(date +%Y_%m_%d).tar.bz2 /path/on/local/host/to/save/daily_full_archive_backup

http://www.howtogeek.com/135533/how-to-use-rsync-to-backup-your-data-on-linux/?PageSpeed=noscript

https://www.marksanborn.net/howto/use-rsync-for-daily-weekly-and-full-monthly-backups/

Sample ssh config file

$ vim  ~/.ssh/config

Host <alias-host-name>
    HostName <IP>
    User <username>
    IdentityFile ~/.ssh/<host>_key

Host gb
    HostName github.com
    User or
    IdentityFile ~/.ssh/github_key
$ ssh gb

Compress directory

$ tar -zcvf archive-name.tar.gz directory-name

Where:

-z : Compress archive using gzip program

-c: Create archive

-v: Verbose, i.e. display progress while creating the archive

-f: Archive File name

http://www.cyberciti.biz/faq/how-do-i-compress-a-whole-linux-or-unix-directory/

How to add path of a program to $PATH environment variable?

Edit .bashrc in your home directory and add the following line:

$ vim ~/.bashrc
    export PATH="/path/to/dir:$PATH"
$ source ~/.bashrc

Could not open a connection to your authentication agent

$ eval `ssh-agent -s`

http://stackoverflow.com/a/17848593
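
With the agent running, the key can then be added (the key path below is just an example):

$ ssh-add ~/.ssh/id_rsa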

How do I make ls show file sizes in megabytes?

$ ls -l --block-size=M
$ ls -lh

http://unix.stackexchange.com/a/64150

How to check whether a file exists at a specific path?

#!/usr/bin/env bash
if test -f /path/to/some/file; then
  echo "File exists"
fi

Or to check that the file does not exist:

#!/usr/bin/env bash
if test ! -f /path/to/some/file; then
  echo "File does not exist"
fi

Install SSH server and SSH client

$ sudo apt-get install openssh-server
$ sudo apt-get install openssh-client

https://wiki.debian.org/SSH

SSH connection with public key

$ vim ~/.ssh/authorized_keys
    # add public key
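
Instead of editing authorized_keys by hand, ssh-copy-id (shipped with the OpenSSH client tools) can append the public key for you; the key path below is just an example:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub <username>@<remote_host>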

What do echo $$, $?, $#, $* mean?

$ echo $$, $?, $#, $*

$$ is the PID of the current process.

$? is the return code of the last executed command.

$# is the number of arguments in $*

$* is the list of arguments passed to the current process
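
A quick way to see these in action is a tiny script (the name args.sh is just an example); save it, make it executable, and run it with a few arguments:

#!/usr/bin/env bash
# run as:  ./args.sh one two three
echo "PID of this shell: $$"
echo "number of arguments: $#"        # 3
echo "all arguments: $*"              # one two three
false
echo "exit code of last command: $?"  # 1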

http://www.unix.com/shell-programming-and-scripting/75297-what-does-echo-mean.html

Make ZSH the default shell

chsh -s $(which zsh)

ulimit

The ulimit and sysctl programs allow you to limit system-wide resource use.

This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users.

$ ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 63619
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 65536
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 63619
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
$ sudo sysctl -a
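
Individual limits can be changed for the current shell with ulimit (a soft limit can only be raised up to the hard limit); persistent per-user limits usually go in /etc/security/limits.conf:

$ ulimit -n          # current soft limit on open files
$ ulimit -Hn         # hard limit on open files
$ ulimit -n 4096     # raise the soft limit for this shell, up to the hard limit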

www.linuxhowtos.org/Tips and Tricks/ulimit.htm

locate

$ sudo apt-get install mlocate
$ updatedb
$ locate some-resource-name

Posting Form Data with cURL

Start your cURL command with curl -X POST and then add -F for every field=value you want to add to the POST:

$ curl -X POST -F 'username=or' -F 'password=pass' http://domain.tld/post

Diff

Eskil is a graphical tool to view the differences between files and directories

http://eskil.tcl.tk/index.html/doc/trunk/htdocs/download.html