
Deploying WordPress to a Vultr VM via command line

August 20, 2017 by Simon

This is my guide to setting up WordPress on an Ubuntu server via the command line. I also have a more recent guide on the wp-cli tool.

Read my guide on setting up a Vultr VM and installing NGINX web server and MySQL database. Use this link to create a Vultr account.  This guide assumes you have a working Ubuntu VM with NGINX web server, MySQL, and SSL.

Consider setting up an SSL certificate (read my guide here on setting up a free SSL certificate with Let’s Encrypt). Also see my guides on setting up a Vultr server, moving WordPress from CPanel to a self-managed server and securing Ubuntu in the cloud. Ensure you are backing up your server (read my guide on How to backup an Ubuntu VM in the cloud via crontab entries that trigger Bash Scripts, SSH, Rsync and email backup alerts).

Ensure MySQL is set up.

mysql --version
mysql  Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using  EditLine wrapper

Ensure your server is set up, the firewall is enabled (ports 80 and 443 open), and NGINX is installed and working.

Check NGINX version

sudo nginx -v
nginx version: nginx/1.13.3

Check NGINX Status

service nginx status
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-08-18 00:35:13 AEST; 3 days ago
 Main PID: 1276 (nginx)
    Tasks: 3
   Memory: 6.4M
      CPU: 3.218s
   CGroup: /system.slice/nginx.service
           ├─1276 nginx: master process /usr/sbin/nginx -g daemon on; master_process on
           ├─1277 nginx: worker process
           └─1278 nginx: cache manager process

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

Check your PHP install to confirm your setup: put this in a new PHP file (e.g. /p.php) and load it in a browser to view the PHP configuration.

<?php
phpinfo()
?>

Loading PHP Configuration

First I edited NGINX configuration to allow WordPress to work.

location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
        index index.php index.html index.htm;
        proxy_set_header Proxy "";
}

Mostly I added this line.

try_files $uri $uri/ /index.php?q=$uri&$args;
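For context, a minimal NGINX server block for a WordPress site might look something like the sketch below. The root path matches the /www folder used later in this guide, and the PHP-FPM socket path assumes a default Ubuntu 16.04 php7.0-fpm install; check both against your own config.

server {
    listen 80;
    server_name www.yourserver.com;
    root /www;
    index index.php index.html index.htm;

    location / {
        # Route pretty permalinks through WordPress
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        # Hand PHP requests to PHP-FPM (socket path may differ on your server)
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}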

I restarted NGINX and PHP

nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart

If this config change is not made, WordPress will not install or run.

Database

To set up WordPress we need to create a MySQL database and database user before downloading WordPress from the command line.

From an ssh terminal type (and log in with your MySQL root password)

mysql -p
password:

Create a database (choose a database name, add random text).

mysql> create database databasemena123;
Query OK, 1 row affected (0.00 sec)

Create a user and assign them to the blog (choose a username, add random text)

grant all privileges on databasemena123.* to 'blogusername123'@'localhost' identified by "simple-password";
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

If your password is simple you will get this warning.

ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

A 50+ character password with a good mix of letters, digits and symbols should be OK

mysql> grant all privileges on databasemena123.* to 'blogusername123'@'localhost' identified by "xxxxxxxxxxxxxxxxxxxxxxxxremovedxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
Query OK, 0 rows affected, 1 warning (0.00 sec)

You can now apply the permissions and clear the permissions cache.

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
exit;
Bye
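Before moving on, it is worth checking the new user can actually connect to the new database (using the example names from above).

mysql -u blogusername123 -p databasemena123

If the grant worked you will land at a MySQL prompt connected to that database; type exit; to leave.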

Go to your /www folder on your server and run this command to download WordPress.

sudo curl -o wordpress.zip https://wordpress.org/latest.zip

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8701k  100 8701k    0     0  6710k      0  0:00:01  0:00:01 --:--:-- 6709k

You can now move any existing temporary index files in your /www folder

mv index.html oldindex.html
mv index.php oldindex.php
mv p.php oldp.php

ls -al
total 8724
drwxr-xr-x  2 root root    4096 Aug 21 11:17 .
drwxr-xr-x 27 root root    4096 Aug 13 22:27 ..
-rw-r--r--  1 root root      37 Jul 31 11:51 oldindex.html
-rw-r--r--  1 root root      37 Jul 31 11:51 oldindex.php
-rw-r--r--  1 root root      19 Aug 21 11:04 oldp.php
-rw-r--r--  1 root root 8910664 Aug 21 11:16 wordpress.zip

Now I can extract wordpress.zip

First, you need to install unzip

sudo apt-get install unzip

Now Unzip wordpress.zip

unzip wordpress.zip

At this point, I decided to remove all old index files on my website

rm -R /www/old*.*

The unzipped contents are in a subfolder called “wordpress”; we need to move the WordPress contents up a folder.

ls /www/ -al
total 8716
drwxr-xr-x  3 root root    4096 Aug 21 13:22 .
drwxr-xr-x 27 root root    4096 Aug 13 22:27 ..
drwxr-xr-x  5 root root    4096 Aug  2 21:02 wordpress
-rw-r--r--  1 root root 8911367 Aug 21 11:22 wordpress.zip

“wordpress” folder contents.

ls /www/wordpress -al
total 196
drwxr-xr-x  5 root root  4096 Aug  2 21:02 .
drwxr-xr-x  3 root root  4096 Aug 21 13:22 ..
-rw-r--r--  1 root root   418 Sep 25  2013 index.php
-rw-r--r--  1 root root 19935 Jan  2  2017 license.txt
-rw-r--r--  1 root root  7413 Dec 12  2016 readme.html
-rw-r--r--  1 root root  5447 Sep 27  2016 wp-activate.php
drwxr-xr-x  9 root root  4096 Aug  2 21:02 wp-admin
-rw-r--r--  1 root root   364 Dec 19  2015 wp-blog-header.php
-rw-r--r--  1 root root  1627 Aug 29  2016 wp-comments-post.php
-rw-r--r--  1 root root  2853 Dec 16  2015 wp-config-sample.php
drwxr-xr-x  4 root root  4096 Aug  2 21:02 wp-content
-rw-r--r--  1 root root  3286 May 24  2015 wp-cron.php
drwxr-xr-x 18 root root 12288 Aug  2 21:02 wp-includes
-rw-r--r--  1 root root  2422 Nov 21  2016 wp-links-opml.php
-rw-r--r--  1 root root  3301 Oct 25  2016 wp-load.php
-rw-r--r--  1 root root 34327 May 12 17:12 wp-login.php
-rw-r--r--  1 root root  8048 Jan 11  2017 wp-mail.php
-rw-r--r--  1 root root 16200 Apr  6 18:01 wp-settings.php
-rw-r--r--  1 root root 29924 Jan 24  2017 wp-signup.php
-rw-r--r--  1 root root  4513 Oct 14  2016 wp-trackback.php
-rw-r--r--  1 root root  3065 Aug 31  2016 xmlrpc.php

Remove the wordpress.zip in /www/

rm /www/wordpress.zip

Move all files from the /www/wordpress/ up a folder to /www/.

sudo mv /www/wordpress/* /www/

Now we can create an uploads folder

mkdir /www/wp-content/uploads/

Apply permissions (or you will never be able to upload to WordPress).

chmod 755 /www/wp-content/uploads/

I think I need to apply permissions here (to allow plugins to upload/update)

chmod 755 /www/wp-content/
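If uploads or plugin updates still fail, the cause is often file ownership rather than permissions. PHP-FPM on Ubuntu usually runs as www-data (the first command below confirms which user it runs as on your server), so setting the owner like this should help.

ps aux | grep php-fpm
sudo chown -R www-data:www-data /www/wp-content/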

Edit the wp-config-sample.php

sudo nano /www/wp-config-sample.php

Add your database name to the WordPress config.

Before:

define('DB_NAME', 'database_name_here');

After:

define('DB_NAME', 'databasemena123');

Add your database username and password to the WordPress config.

Before:

/** MySQL database username */
define('DB_USER', 'username_here');

/** MySQL database password */
define('DB_PASSWORD', 'password_here');

After:

/** MySQL database username */
define('DB_USER', 'blogusername123');

/** MySQL database password */
define('DB_PASSWORD', 'xxxxxxxxxxxxxxxxxxxxxxxxremovedxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');

Go to https://api.wordpress.org/secret-key/1.1/salt/ and copy the salts to your clipboard and replace this in your wp-config-sample.php

define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');
define('AUTH_SALT',        'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT',   'put your unique phrase here');
define('NONCE_SALT',       'put your unique phrase here');

..and paste over the placeholders with whatever you generated (e.g.)

define('AUTH_KEY',         '/[email protected];#Tr#6Tz6z^[LUdOvpNREUYT[|SmAN%%V% cyWk]-I%}E+t$#4c5n6vvp');
define('SECURE_AUTH_KEY',  'q_z-F-V#[[Lf<%_4,w#L_nyG|[email protected], YK0GR)R<Lk!.zqH< SH!4Q@,vXmMzG');
define('LOGGED_IN_KEY',    'o}c^Vb$ fyh,J6v9PyF)mdt4(Q_J}`FNOJ9.ag^i+UAUS?lmzwGzp<tV7W(wbb#:');
define('NONCE_KEY',        '<y3&QvdAz;48ZFJBAdsRmC~ejXWiOw{dTWF_)p?^E%D&GdtK2LHGZ|.^rvRF-l$m');
define('AUTH_SALT',        ',e{|+H`i6}[email protected]`kvkF??^?IC&?6W~9SHkqSxvX~z,fR Xn:2BExS@_X^');
define('SECURE_AUTH_SALT', '|g2(y}8olAv_b]>|^jR|-.VU_E[P~PoWprwTKu-mM9-:NEc#2HikST~84ad-Ksyx');
define('LOGGED_IN_SALT',   'sd1:-|ai{<Ferj,|$2+ <ietEFT9 xEe89$[8%{n@{FC(?[pF$oJ[[email protected]]');
define('NONCE_SALT',       '0D]kv-x.?_o^pwKtZI:g}~64vDb.Gdy1cBPQA{?;g(AE|0D)g:=1BrUbKF>T1oIv');
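If you would rather fetch fresh salts from the command line instead of a browser, this should also work; paste the output over the eight define lines above.

curl -s https://api.wordpress.org/secret-key/1.1/salt/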

Now save changes to wp-config-sample.php

Rename the sample config file (to make it live)

sudo mv wp-config-sample.php wp-config.php

You can now load your website ( https://www.yourserver.com ) and finish the setup in the WordPress GUI.

Wordpress Setup GUI

WordPress should now be installed and you can log in.

Don’t forget to update your options – /wp-admin/options-general.php

I would recommend you review the options to prevent comment spam – /wp-admin/options-discussion.php

Also, if you are using the twentyseventeen theme, consider updating your header image (remove the pot plant) – /wp-admin/customize.php?theme=twentyseventeen&return=%2Fwp-admin%2Fthemes.php

Signup for a Vultr server here for as low as $2.5 a month or a Digital Ocean server here ($10 free credit/2 months). Signup for G Suite email on Google here and read my guide here.

Read this guide on using the wp-cli tool to automate post-install.

I hope this helps.


v1.1 added wp-cli tool

Filed Under: Cloud, Server, Ubuntu, VM, Vultr, Wordpress Tagged With: command line, install, vm, wordpress

Useful Linux Terminal Commands

August 13, 2017 by Simon

Below are Ubuntu Linux commands I often use to set up, debug and maintain servers.

Read this guide for Useful OSX Commands (for setting up Apache, PHP, MySQL, Adminer etc on OSX)

I recently moved my domain from a CPanel-hosted domain (and email to Google G Suite (my guide here)) to a self-managed Digital Ocean server (my LetsEncrypt guide here, my Digital Ocean guide here, my AWS setup guide here, my Vultr setup guide here) and needed to transfer my WordPress site. Set up your own Digital Ocean Ubuntu server from $5 a month (get the first 2 months free by clicking here) or set up your own Vultr Ubuntu server for as low as $2.5/month by clicking here.

How to reboot (do this from time to time when “*** system restart required ***” messages appear at login).

sudo shutdown -r now

How to set a bash file (*.sh) as executable.

chmod +x filename.sh

The file will now be executable.

Viewing your crontab (Windows Task Scheduler equiv)

crontab -e

Check whether a port is open (with nmap)

nmap -p 80 google.com

Rename a folder

mv /www/oldname /www/newname

Set the owner of a folder

sudo chown -R www-data:www-data /wwwfolder/wp-content/uploads/2017/11/

Check the rsync port

nmap -p 873 theserver.com

Starting Nmap 7.01 ( https://nmap.org ) at 2017-08-19 10:34 AEST
Nmap scan report for theserver.com (xxx.xxx.xxx.xxx)
Host is up (0.00012s latency).
Other addresses for theserver.com (not scanned): xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
rDNS record for xxx.xxx.xxx.xxx: theserver.com
PORT    STATE SERVICE
873/tcp open  rsync

Run a file every 1 minute

*/1 * * * * /scripts/script1.sh

Show server name

hostname

How to verify patch status for Meltdown and Spectre

Read my guide here on installing the patch.

Verify Spectre and Meltdown patch status

dmesg | grep isolation && echo "patched :)" || echo "unpatched :("
[ 0.000000] Kernel/User page tables isolation: enabled
patched :)

or

sudo grep "cpu_insecure\|cpu_meltdown\|kaiser" /proc/cpuinfo && echo "patched :)" || echo "unpatched :("
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single kaiser fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
patched :)

Restart network

sudo /etc/init.d/networking restart

More here.

Show Operating System Name

hostnamectl | grep "Operating System"
 Operating System: Ubuntu 16.04.3 LTS

Show installed packages (install the apt-show-versions tool first)

sudo apt-get install apt-show-versions

Show all packages with “PHP” in the name.

sudo apt-show-versions | grep php | more
 
libapache2-mod-php7.0:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
libapache2-mod-php7.0:i386 not installed
php-common:all/xenial 1:55+ubuntu16.04.1+deb.sury.org+1 uptodate
php-xdebug:amd64/xenial 2.5.5-3+ubuntu16.04.1+deb.sury.org+1 uptodate
php-xdebug:i386 not installed
php7.0:all/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-cli:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-cli:i386 not installed
php7.0-common:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-common:i386 not installed
php7.0-curl:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-curl:i386 not installed
php7.0-dev:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-dev:i386 not installed
php7.0-fpm:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-fpm:i386 not installed
php7.0-gd:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-gd:i386 not installed
php7.0-imap:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-imap:i386 not installed
php7.0-intl:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-intl:i386 not installed
php7.0-json:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-json:i386 not installed
php7.0-ldap:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-ldap:i386 not installed
php7.0-mbstring:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-mbstring:i386 not installed
php7.0-mysql:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-mysql:i386 not installed
php7.0-opcache:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-opcache:i386 not installed
php7.0-pgsql:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-pgsql:i386 not installed
php7.0-phpdbg:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-phpdbg:i386 not installed
php7.0-pspell:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-pspell:i386 not installed
php7.0-readline:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-readline:i386 not installed
php7.0-recode:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-recode:i386 not installed
php7.0-snmp:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-snmp:i386 not installed
php7.0-tidy:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-tidy:i386 not installed
php7.0-xml:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-xml:i386 not installed
php7.0-zip:amd64/xenial 7.0.25-1+ubuntu16.04.1+deb.sury.org+1 uptodate
php7.0-zip:i386 not installed

Send Messages to Other Logged-In Users (CLI)

Show all logged-in users

w
 20:11:51 up 1 day, 10:25,  2 users,  load average: 0.00, 0.04, 0.01
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     tty1                      20:03   31.00s  0.24s  0.20s -bash
user1    pts/0    123.123.123.123  20:09    0.00s  0.08s  0.01s w

Send a message to user1

echo "Hello User1" > /dev/pts/0

Send a message to the root console

echo "Hello Admin" > /dev/tty1

Messages will appear at the bottom of the user’s console.
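To broadcast a message to every logged-in user at once (rather than one terminal at a time), the wall command also works.

echo "Rebooting the server in 5 minutes" | wall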

Processor

List processes in a tree view (… = removed)

ps -e --forest
  PID TTY          TIME CMD
    2 ?        00:00:00 kthreadd
    3 ?        00:00:00  \_ ksoftirqd/0
    5 ?        00:00:00  \_ kworker/0:0H
    7 ?        00:01:56  \_ rcu_sched
    8 ?        00:00:00  \_ rcu_bh
    9 ?        00:00:00  \_ migration/0
   35 ?        00:00:00  \_ vmstat
   37 ?        00:00:00  \_ ecryptfs-kthrea
   54 ?        00:00:00  \_ acpi_thermal_pm
   55 ?        00:00:00  \_ vballoon
   65 ?        00:00:00  \_ scsi_eh_0
   66 ?        00:00:00  \_ scsi_tmf_0
   67 ?        00:00:00  \_ scsi_eh_1
   68 ?        00:00:00  \_ scsi_tmf_1
   74 ?        00:00:00  \_ ipv6_addrconf
   36 ?        00:00:00  \_ kpsmoused
  456 ?        00:00:00  \_ iscsi_eh
...
    1 ?        00:00:02 init
...
  452 ?        00:00:00 upstart-file-br
  453 ?        00:00:00 dbus-daemon
...
 1489 ?        00:00:00 cron
 1514 ?        00:00:06 irqbalance
 1518 ?        00:00:00 sshd
11855 ?        00:00:00  \_ sshd
11914 pts/4    00:00:00      \_ bash
12008 pts/4    00:00:00          \_ ps
 1523 ?        00:00:09 php-fpm7.0
 1785 ?        00:00:03  \_ php-fpm7.0
 1786 ?        00:00:02  \_ php-fpm7.0
...
 1692 ?        00:01:52 mysqld
 ...
 1891 ?        00:00:53 fail2ban-server
...
 1956 ?        00:00:00 nginx
 1957 ?        00:00:02  \_ nginx
 1958 ?        00:00:03  \_ nginx
 1959 ?        00:00:01  \_ nginx
 1978 ?        00:00:24 ntpd
 2000 ?        00:00:00 systemd-logind
 2011 ?        00:03:24 redis-server
 ...

View major processes by usage/memory

ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head

  PID  PPID CMD                         %MEM %CPU
 1692     1 /usr/sbin/mysqld             5.0  0.0
 1523     1 php-fpm: master process (/e  1.0  0.0
 1662     1 /usr/bin/lxd --group lxd --  0.4  0.0
 1785  1523 php-fpm: pool www            0.4  0.0
 1786  1523 php-fpm: pool www            0.3  0.0
 1891     1 /usr/bin/python3 /usr/bin/f  0.3  0.0
 1957  1956 nginx: worker process        0.2  0.0
 1958  1956 nginx: worker process        0.2  0.0
11855  1518 sshd: [email protected]/4             0.1  0.0

Read more on ps here.

How big is a folder

du -hs ./foldername
412MB    ./foldername
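To see which subfolders are using the most space (handy when a disk is filling up), something like this should work (adjust the path to suit).

du -h --max-depth=1 /var | sort -hr | head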

Change File Create/Modify Times

Change File Creation Date (note: SetFile is a macOS command)

SetFile -d '11/25/2019 23:00:00' ./file.doc

Change file modify/accessed time

touch -mt 201911282300 ./filename.doc

Tree

Tree needs to be installed first

sudo apt-get install tree

Show an ASCII representation of a folder structure

tree

Show files in a  structure

tree -a -h -v
.
├── [4.0K]  folder
├── [3.0K]  logfile.log
└── [1.7M]  zipfile.tgz

Show directories

tree -d
.
└── [4.0K]  subfolder

List all files and folders in a structure

tree -a -f -p -h  -l -R

Backup a  www folder

cp -rTv /www/ /backup/www
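rsync is an alternative that only copies files that have changed on repeat runs, which is handy for regular backups.

rsync -av /www/ /backup/www/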

Common (Digital Ocean) Debugging commands

cat /etc/network/interfaces.d/50-cloud-init.cfg
cat /etc/network/interfaces
ip addr
ip route
uname -a
iptables -nvL --line-numbers
ls -l /lib/modules
cat /etc/udev/rules.d/70-persistent-net.rules

Networking

Display all TCP connections

netstat -at

Display all UDP connections

netstat -au

List all Listening Connections

netstat -l

Show all Network stats

netstat -s

Show all TCP Network stats

netstat -st

Show all UDP Network stats

netstat -su

Show network packets

netstat -i

Displaying raw info

netstat --statistics --raw

Show open ports

netstat -a | grep "LISTEN "

Upload a file to a remote server over ssh

scp /local/folder/local-file.zip user@theserver.com:/remote/server/destination-folder/

Zip files

Install zip

sudo apt-get install zip

Zip a  whole directory (with high compression)

zip -r -9 /folder/zipfile.zip /directory-to-zip

Zip a whole directory (excluding *.tmp, *.temp, *.bak and *.zip file types)

zip -r -9 /folder/zipfile.zip /directory-to-zip -x "*.tmp" -x "*.temp" -x "./backup/*.bak" -x "./backup/*.zip" -x "*promo*.mp4"

Zip Help

zip
Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license.
Zip 3.0 (July 5th 2008). Usage:
zip [-options] [-b path] [-t mmddyyyy] [-n suffixes] [zipfile list] [-xi list]
  The default action is to add or replace zipfile entries from list, which
  can include the special name - to compress standard input.
  If zipfile and list are omitted, zip compresses stdin to stdout.
  -f   freshen: only changed files  -u   update: only changed or new files
  -d   delete entries in zipfile    -m   move into zipfile (delete OS files)
  -r   recurse into directories     -j   junk (don't record) directory names
  -0   store only                   -l   convert LF to CR LF (-ll CR LF to LF)
  -1   compress faster              -9   compress better
  -q   quiet operation              -v   verbose operation/print version info
  -c   add one-line comments        -z   add zipfile comment
  -@   read names from stdin        -o   make zipfile as old as latest entry
  -x   exclude the following names  -i   include only the following names
  -F   fix zipfile (-FF try harder) -D   do not add directory entries
  -A   adjust self-extracting exe   -J   junk zipfile prefix (unzipsfx)
  -T   test zipfile integrity       -X   eXclude eXtra file attributes
  -y   store symbolic links as the link instead of the referenced file
  -e   encrypt                      -n   don't compress these suffixes
  -h2  show more help

Backup NGINX

zip -r -9 /backup/nginx.zip /etc/nginx/ -x "*.tmp" -x "*.temp" -x "./backup/*.bak" -x "./backup/*.zip"

Unzip

Installing Unzip

sudo apt-get install unzip

Unzip a  file

unzip filename.zip

Updates

Setup Unattended Ubuntu Security updates

sudo apt-get install unattended-upgrades
sudo unattended-upgrades -d

At login, you should receive

0 updates are security updates.

Show Configured NGINX server names

grep "server_name" /etc/nginx/sites-available/default
server_name www.servername.com myservername.com localhost;

Services

Reload daemon services

systemctl daemon-reload

Verify a service file exists (e.g. for MongoDB)

ls /etc/systemd/system | grep servicename

View the status of services

systemctl list-unit-files --type=service
UNIT FILE                                  STATE
accounts-daemon.service                    enabled
acpid.service                              disabled
[email protected]                    static
apt-daily-upgrade.service                  static
apt-daily.service                          static
...

Locale Dump

locale -a
C
C.UTF-8
en_AG
en_AG.utf8
en_AU.utf8
en_BW.utf8
en_CA.utf8
en_DK.utf8
en_GB.utf8
en_HK.utf8
en_IE.utf8
en_IN
en_IN.utf8
en_NG
en_NG.utf8
en_NZ.utf8
en_PH.utf8
en_SG.utf8
en_US
en_US.iso88591
en_US.utf8
en_ZA.utf8
en_ZM
en_ZM.utf8
en_ZW.utf8
POSIX

Show All Defined Ports

cat /etc/services

Show defined rsync ports

cat /etc/services | grep rsync

Show listening ports (active connections)

netstat -plntu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1707/mysqld
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      2023/redis-server 1
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1968/nginx
tcp        0      0 0.0.0.0:21              0.0.0.0:*               LISTEN      2097/pure-ftpd (SER
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1525/sshd
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      1968/nginx
tcp6       0      0 fe80::1:13128           :::*                    LISTEN      1708/lxd-bridge-pro
tcp6       0      0 :::80                   :::*                    LISTEN      1968/nginx
tcp6       0      0 :::21                   :::*                    LISTEN      2097/pure-ftpd (SER
tcp6       0      0 :::22                   :::*                    LISTEN      1525/sshd
udp        0      0 10.99.0.10:123          0.0.0.0:*                           1990/ntpd
udp        0      0 45.77.48.141:123        0.0.0.0:*                           1990/ntpd
udp        0      0 127.0.0.1:123           0.0.0.0:*                           1990/ntpd
udp        0      0 0.0.0.0:123             0.0.0.0:*                           1990/ntpd
udp6       0      0 fe80::1:123             :::*                                1990/ntpd
udp6       0      0 ::1:123                 :::*                                1990/ntpd
udp6       0      0 :::123                  :::*                                1990/ntpd

Show service status

service --status-all

View your bash history for a past command (e.g “openssl”).

grep "openssl" ~/.bash_history
openssl req -new -newkey rsa:4096 -nodes -keyout fearby.key -out fearby.csr

View last 1 line of a file.

tail -n 1 index.html
</body></html>

Ping a server

ping -c 2 fearby.com

How to check system uptime (and load average)

uptime
> 12:18:48 up 1 min,  1 user,  load average: 0.30, 0.18, 0.17

Uptime in friendly format

uptime -p
up 23 hours, 42 minutes

The load averages at the end are for the last 1, 5 and 15 minutes.

The w command is handy for showing uptime information as well as logged-in users (the -i parameter is handy for seeing what IP people are logged in from).

w -i
> 12:22:41 up 5 min,  1 user,  load average: 0.00, 0.07, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    123.123.123.123    12:18    1.00s  0.07s  0.00s w

How to download a file

curl -o localfile.zip http://www.serverwhereiuploadedthefile.com/remotefile.zip

How to edit NGINX configuration

sudo nano /etc/nginx/nginx.conf

and

sudo nano /etc/nginx/sites-available/default

How to find a file

find / -name filename.ext

also

locate php.ini

Find contents in files (recursive)

grep -r "ahref" *

Find files by name and run a command on each

find -iname "index.html" -exec md5sum {} \;

Show differences in files

diff index.html index2.html
< <body>Loading <a href="http://simon.fearby.com/blog/">http://simon.fearby.com/blog/</a></body></html>
---
> <body>Loading <a href="https://www.fearby.com/blog/">https://www.fearby.com/blog/</a></body></html>

Show contents of file (e.g urls.txt)

cat urls.txt
http://www.server1.com
http://www.server2.com
http://www.server3.com
http://www.server4.com
http://www.server5.com

Download all files mentioned in a text file.

cat urls.txt | xargs wget -c
.. download 4 files ...

View all packages with updates

sudo /usr/lib/update-notifier/apt-check -p

Output:

cryptsetup-bin
libdns-export162
libisccfg140
mongodb-org-mongos
linux-libc-dev
libgdk-pixbuf2.0-0
tcpdump
bind9-host
dnsutils
nodejs
libpython3.5
python3.5
python3.5-minimal
libisc160
grub-legacy-ec2
libapparmor1
libplymouth4
mongodb-org-shell
ntp
libtidy5
libapparmor-perl
libisc-export160
liblwres141
libcryptsetup4
libgdk-pixbuf2.0-common
libdns162
apache2-bin
apparmor
libisccc140
mongodb-org
libpython3.5-stdlib
libbind9-140
libpython3.5-minimal
cryptsetup
mongodb-org-server

or (on ubuntu 16.04)

apt list --upgradable

Updates

Always backup your server’s configuration before updating.

Backup MySQL

mysqldump --all-databases > /backup/dump-$( date '+%Y-%m-%d_%H-%M-%S' ).sql -u root -p
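To restore from one of these dumps later, something like this should work (substitute the real dump file name).

mysql -u root -p < /backup/dump-YYYY-MM-DD_HH-MM-SS.sql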

Crontab

Add this to crontab -e to backup at 1 am every day

0 1 * * * /usr/bin/mysqldump --all-databases > /backup/mysql/dump-$( date '+%Y-%m-%d_%H-%M-%S' ).sql -u root -pmysqlpassword

/scripts/shrinkmysql.sh  script to shrink SQL files

sudo nano /scripts/shrinkmysql.sh
#!/bin/bash

tar -zcf /backup/mysql-$( date '+%Y-%m-%d_%H-%M-%S' ).tgz /backup/mysql/
rm /backup/mysql/*.sql

I had to add this to crontab

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin/:/sbin:/usr/sbin:/scripts/:

Cron job to shrink SQL dumps at 2am every day

0 2 * * * /bin/bash /scripts/shrinkmysql.sh > /dev/null 2>&1

Write to a single log file from the crontab at 2 am every day.

todo
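In the meantime, a sketch of what that crontab entry could look like (the log file path here is just an example).

0 2 * * * /bin/bash /scripts/shrinkmysql.sh >> /var/log/shrinkmysql.log 2>&1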

Query a package (e.g. siege package)

sudo dpkg-query -l | grep siege

How to set up a free SSL certificate (see my guide here).

How to set your timezone.

sudo dpkg-reconfigure tzdata

How to restart PHP

sudo systemctl restart php7.0-fpm

How to show the time on the server

sudo hwclock --show

Reload and restart the NGINX configuration and web server.

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

JSON viewing program

Installing

sudo apt-get install jq

Using

wget https://downloads.wordpress.org/plugin/genesis-enews-extended.2.0.2.zip

or

curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

output from json tool
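A quick example of pulling a single field out of that GitHub response with a jq filter (the filter itself is just an illustration).

curl -s 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.[0].commit.message'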

Below are utilities I use a lot.

ncdu file size utility

Installing

sudo apt-get install ncdu

Using

sudo ncdu /

pydf disk checking utility

Installing

sudo apt-get install pydf

Using

pydf

output from pydf tool

ntp time sync service

Installing

sudo apt-get install ntp

Using

sudo service ntp status

Displaying startup processes

Installing

sudo apt-get install rcconf

Using

sudo rcconf

htop process manager.

Installing

sudo apt-get install htop

Using

htop

output from htop tool

Network Benchmarking (between two servers)

I use iperf to measure the total bandwidth between two servers. You will need to allow port 5001 (TCP, IPv4 and IPv6, in and out) in any local firewalls and in your host’s firewall GUIs.

Allow port 5001 on an ufw firewall (IN and OUT)

sudo ufw allow 5001

# I set the port 5001 firewall rule in my hosts’ GUIs (Digital Ocean and Vultr)

Deny IP

sudo ufw deny from 123.123.123.123

Allow port 22 access to known IP

sudo ufw allow from 123.123.123.123/24  to any port 22

Deny Outgoing Port

sudo ufw deny out 22

Allow out on port to known IP

sudo ufw allow out from 123.123.123.123 to any port 22

More securing Ubuntu in the cloud here.

Install iperf on the target and source Ubuntu server.

sudo apt-get install iperf

Run this on the listening server.

iperf -s

Run this on benchmarking server (and add the IP of the listening server).

iperf -c 123.123.123.123

Results

Screen dump of ipref -c ip
iperf benchmarking output
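iperf also has options for longer or parallel tests if you want to push the link harder (these are standard iperf 2 flags).

iperf -c 123.123.123.123 -t 30 -P 4

-t 30 runs the test for 30 seconds and -P 4 uses 4 parallel streams.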

Testing concurrent connections to a web server with siege.

Install siege

sudo apt-get install siege

Benchmarking a HTTP server

./siege -t1m -c10 'https://fearby.com'

# 10 concurrent users

Benchmarking HTTPS sites

Install siege 4.0.2 (steps here).

Verify siege 4.0.2 is installed by running

siege -V

Now you can benchmark HTTPS sites

./siege -t1m -c10 'https://thedomain.com'

View incoming connections on the target server

sudo netstat -tupn

I always increase my history size and tell it not to store duplicates.

Viewing your typed terminal history

history

Increasing your history size

HISTSIZE=10000
HISTCONTROL=ignoredups
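These settings only last for the current session; to make them permanent, add them to your ~/.bashrc (assuming bash is your shell) and reload it.

echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
source ~/.bashrc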

How to Update Ubuntu

sudo apt-get update
sudo apt-get dist-upgrade

or

sudo apt-get update && sudo apt-get upgrade

Edit SSH authorized keys

sudo nano ~/.ssh/authorized_keys

Search file and show lines where text matches

grep -i "href" index.html
> <body>Loading <a href="http://simon.fearby.com/blog/">http://simon.fearby.com/blog/</a></body></html>

View packages with updates

/usr/lib/update-notifier/apt-check --human-readable
35 packages can be updated.
15 updates are security updates.

View Boot Text

dmesg

Automatic Monitoring (every 1 second)

Active network connections

watch -n 1 'netstat -at'

Network Packets

watch -n 1 'netstat -i'

Free memory

watch -n 1 'free -m'

Memory breakdown

watch -n 1 'cat /proc/meminfo'

or

watch -n 1 'vmstat -s'

Monitor NGINX memory

watch -n 1 'ps axu |grep nginx'

more soon…

Read this guide for Useful OSX Commands (for setting up Apache, PHP, MySQL, Adminer etc on OSX)

50 most useful Linux commands (view here).

View other Linux command informing sites here, here and here.


v1.8 Show users and send messages to other users

v1.71 Show Operating System Name

v1.7 added Meltdown and Spectre patch information

v1.6 Removed -x from zip directory

V1.5 Restart Network

v1.4 added Useful OSX Commands

Filed Under: Terminal, Ubuntu, VM Tagged With: commands, Linux, terminal

Securing Ubuntu in the cloud

August 9, 2017 by Simon

It is easy to deploy servers to the cloud; within a few minutes you can have a cloud-based server that you (or others) can use. Ubuntu has a great guide on setting up basic security, but what do you actually need to do?

If you do not secure your server, expect it to be hacked into. Below are tips on securing your cloud server.

First, read more on scanning your server with the Lynis security scanner.

Always use up to date software

Always use up-to-date software; malicious users can detect what software you use with sites like shodan.io (or port scan tools) and then look for weaknesses from well-published lists (e.g. WordPress, Windows, MySQL, Node, LifeRay, Oracle etc). People can even use Google to search for login pages or sites with passwords in HTML (yes, it is that simple). Once a system is identified by a malicious user they can send automated bots to break into your site (trying millions of passwords a day) or use tools to bypass existing defences (security researcher Troy Hunt found out it’s child’s play).

Portscan sites like https://mxtoolbox.com/SuperTool.aspx?action=scan are good for knowing what you have exposed.

You can also use local programs like nmap to view open ports

Install nmap

sudo apt-get install nmap

Find open ports

nmap -v -sT localhost

Starting Nmap 7.01 ( https://nmap.org ) at 2017-08-08 23:57 AEST
Initiating Connect Scan at 23:57
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 9101/tcp on 127.0.0.1
Discovered open port 9102/tcp on 127.0.0.1
Discovered open port 9103/tcp on 127.0.0.1
Completed Connect Scan at 23:57, 0.05s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00020s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
3306/tcp open  mysql
9101/tcp open  jetdirect
9102/tcp open  jetdirect
9103/tcp open  jetdirect

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

Limit ssh connections

Read more here.

Use ufw to set limits on login attempts

sudo ufw limit ssh comment 'Rate limit hit for openssh server'

Only allow known IPs access to your valuable ports

sudo ufw allow from 123.123.123.123/32 to any port 22

Delete unwanted firewall rules

sudo ufw status numbered
sudo ufw delete 8

Only allow known IPs to certain ports

sudo ufw allow from 123.123.123.123 to any port 80/tcp

Also, set outgoing traffic to known active servers and ports

sudo ufw allow out from 123.123.123.123 to any port 22

Don’t use a weak/common Diffie-Hellman key for SSL certificates; more information here.

openssl req -new -newkey rsa:4096 -nodes -keyout server.key -out server.csr
 
Generating a 4096 bit RSA private key
...
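It is also worth generating your own Diffie-Hellman parameters rather than relying on the defaults. A sketch (2048 bits is a common minimum, 4096 takes much longer to generate, and the output path is just an example):

sudo openssl dhparam -out /etc/nginx/dhparam.pem 2048

Then point NGINX at it with an ssl_dhparam /etc/nginx/dhparam.pem; directive in your SSL server block.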

More info on generating SSL certs here and setting here and setting up Public Key Pinning here.

Intrusion Prevention Software

Do run fail2ban: Guide here https://www.linode.com/docs/security/using-fail2ban-for-security

I use iThemes Security to secure my WordPress and block repeat failed logins from certain IP addresses.

iThemes Security can even lock down your WordPress.

You can set iThemes to auto lock out users on x failed logins

Remember to use allowed whitelists though (it is so easy to lock yourself out of servers).

Passwords

Do have strong passwords and change the root password provided by the hosts. https://howsecureismypassword.net/ is a good site to see how strong your password is from brute force password attempts. https://www.grc.com/passwords.htm is a good site to obtain a strong password.  Do follow Troy Hunt’s blog and twitter account to keep up to date with security issues.

Configure a Firewall Basics

You should install and configure a firewall on your Ubuntu server, and also configure the firewall offered by your host (e.g. AWS, Vultr, Digital Ocean).

Configure a Firewall on AWS

My AWS server setup guide here. AWS allow you to configure the firewall here in the Amazon Console.

Type         Protocol  Port Range  Source     Comment
HTTP         TCP       80          0.0.0.0/0  Opens a web server port for later
All ICMP     ALL       N/A         0.0.0.0/0  Allows you to ping
All traffic  ALL       All         0.0.0.0/0  Not advisable long term but OK for testing today.
SSH          TCP       22          0.0.0.0/0  Not advisable, try and limit this to known IPs only.
HTTPS        TCP       443         0.0.0.0/0  Opens a secure web server port for later

Configure a Firewall on Digital Ocean

Configuring a firewall on Digital Ocean (create a $5/m server here).  You can configure your Digital Ocean droplet firewall by clicking Droplet, Networking then Manage Firewall after logging into Digital Ocean.

Configure a Firewall on Vultr

Configuring a firewall on Vultr (create a $2.5/m server here).

Don’t forget to set IP rules for IPv4 and IPv6. Only open the ports you need and ensure applications have strong passwords.

Ubuntu has a firewall built in (documentation).

sudo ufw status

Enable the firewall

sudo ufw enable

Adding common ports

sudo ufw allow ssh/tcp
sudo ufw logging on
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 53
sudo ufw allow 443
sudo ufw allow 873
sudo ufw enable
sudo ufw status
sudo ufw allow http
sudo ufw allow https

Add a whitelist for your IP (use http://icanhazip.com/ to get your IP) to ensure you won’t get kicked out of your server.

sudo ufw allow from 123.123.123.123/24 to any port 22

More help here.  Here is a  good guide on ufw commands. Info on port numbers here.

https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers

If you don’t have a server yet, get a $5 a month Digital Ocean server by clicking here or a $2.5 a month Vultr server here.

Backups

rsync is a good way to copy files to another server, or you can use Bacula.

sudo apt install bacula

Basics

Initial server setup guide (Digital Ocean).

Sudo (admin user)

Read this guide on the Linux sudo command (the equivalent of run as administrator on Windows).

Users

List users on an Ubuntu OS (or compgen -u)

cut -d: -f1 /etc/passwd

Common output

cut -d: -f1 /etc/passwd
root
daemon
bin
sys
sync
games
man
lp
mail
news
uucp
proxy
www-data
backup
list
irc
gnats
nobody
systemd-timesync
systemd-network
systemd-resolve
systemd-bus-proxy
syslog
_apt
lxd
messagebus
uuidd
dnsmasq
sshd
pollinate
ntp
mysql
clamav

Add User

sudo adduser new_username

e.g

sudo adduser bob
Adding user `bob' ...
Adding new group `bob' (1000) ...
Adding new user `bob' (1000) with group `bob' ...
Creating home directory `/home/bob' ...
etc..

Add user to a group

sudo usermod -a -G MyGroup bob

Show users in a group

getent group MyGroup | awk -F: '{print $4}'

This will show users in a group

Remove a user

sudo userdel username
sudo rm -r /home/username

Rename user

usermod -l new_username old_username

Change user password

sudo passwd username

Groups

Show all groups

compgen -g

Common output

compgen -g
root
daemon
bin
sys
adm
tty
disk
lp
mail
proxy
sudo
www-data
backup
irc
etc

You can create your own groups, but first you must be aware of the existing group IDs.

cat /etc/group

Then you can see your system’s groups and IDs.

Create a group

groupadd -g 999 MyGroup

Permissions

Read this https://help.ubuntu.com/community/FilePermissions

How to list users on Ubuntu.

Read more on setting permissions here.

Chmod help can be found here.

Install Fail2Ban

I used this guide on installing Fail2Ban.

sudo apt-get install fail2ban

Check Fail2Ban often and add blocks to the firewall of known bad IPs

fail2ban-client status
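You can also drill into a single jail and manually unban an address you have locked out by mistake (the sshd jail is the one configured further below).

sudo fail2ban-client status sshd
sudo fail2ban-client set sshd unbanip 123.123.123.123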

Best practices

Ubuntu has a guide on basic security setup here.

Startup Processes

It is a good idea to review startup processes from time to time.

sudo apt-get install rcconf
sudo rcconf

Accounts

  • Read up on the concept of least privilege access for apps and services here.
  • Read up on chmod permissions.

Updates

Do update your operating system often.

sudo apt-get update
sudo apt-get upgrade

Minimal software

Only install what software you need

Exploits and Keeping up to date

Do keep up to date with exploits and vulnerabilities

  • Follow 0xDUDE on twitter.
  • Read the GDI.Foundation page.
  • Visit the Exploit Database
  • Vulnerability & Exploit Database
  • Subscribe to the Security Now podcast.

Secure your applications

  • NodeJS: Enable logging in applications you install or develop.

Ban repeat login attempts with Fail2Ban

Fail2Ban config

sudo nano /etc/fail2ban/jail.conf
[sshd]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3

Hosts File Hardening

sudo nano /etc/host.conf

Add

order bind,hosts
nospoof on

Add a whitelist with your IP in /etc/fail2ban/jail.conf (see this)

[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not                          
# ban a host which matches an address in this list. Several addresses can be                             
# defined using space separator.
                                                                         
ignoreip = 127.0.0.1 192.168.1.0/24 8.8.8.8

Restart the service

sudo service fail2ban restart
sudo service fail2ban status

Intrusion detection (logging) systems

Tiger and Tripwire will not block or prevent intrusions, but they will log and give you a heads up on risks and things of concern.

Install Tiger and Tripwire.

sudo apt-get install tiger tripwire

Running Tiger

sudo tiger

This will scan your system for issues of note

sudo tiger
Tiger UN*X security checking system
   Developed by Texas A&M University, 1994
   Updated by the Advanced Research Corporation, 1999-2002
   Further updated by Javier Fernandez-Sanguino, 2001-2015
   Contributions by Francisco Manuel Garcia Claramonte, 2009-2010
   Covered by the GNU General Public License (GPL)

Configuring...

Will try to check using config for 'x86_64' running Linux 4.4.0-89-generic...
--CONFIG-- [con005c] Using configuration files for Linux 4.4.0-89-generic. Using
           configuration files for generic Linux 4.
Tiger security scripts *** 3.2.3, 2008.09.10.09.30 ***
20:42> Beginning security report for simon.
20:42> Starting file systems scans in background...
20:42> Checking password files...
20:42> Checking group files...
20:42> Checking user accounts...
20:42> Checking .rhosts files...
20:42> Checking .netrc files...
20:42> Checking ttytab, securetty, and login configuration files...
20:42> Checking PATH settings...
20:42> Checking anonymous ftp setup...
20:42> Checking mail aliases...
20:42> Checking cron entries...
20:42> Checking 'services' configuration...
20:42> Checking NFS export entries...
20:42> Checking permissions and ownership of system files...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Checking for indications of break-in...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Performing rootkit checks...
20:42> Performing system specific checks...
20:46> Performing root directory checks...
20:46> Checking for secure backup devices...
20:46> Checking for the presence of log files...
20:46> Checking for the setting of user's umask...
20:46> Checking for listening processes...
20:46> Checking SSHD's configuration...
20:46> Checking the printers control file...
20:46> Checking ftpusers configuration...
20:46> Checking NTP configuration...
20:46> Waiting for filesystems scans to complete...
20:46> Filesystems scans completed...
20:46> Performing check of embedded pathnames...
20:47> Security report completed for simon.
Security report is in `/var/log/tiger/security.report.simon.170809-20:42'.

My Output.

sudo nano /var/log/tiger/security.report.username.170809-18:42

Security scripts *** 3.2.3, 2008.09.10.09.30 ***
Wed Aug  9 18:42:24 AEST 2017
20:42> Beginning security report for username (x86_64 Linux 4.4.0-89-generic).

# Performing check of passwd files...
# Checking entries from /etc/passwd.
--WARN-- [pass014w] Login (bob) is disabled, but has a valid shell.
--WARN-- [pass014w] Login (root) is disabled, but has a valid shell.
--WARN-- [pass015w] Login ID sync does not have a valid shell (/bin/sync).
--WARN-- [pass012w] Home directory /nonexistent exists multiple times (3) in
         /etc/passwd.
--WARN-- [pass012w] Home directory /run/systemd exists multiple times (2) in
         /etc/passwd.
--WARN-- [pass006w] Integrity of password files questionable (/usr/sbin/pwck
         -r).

# Performing check of group files...

# Performing check of user accounts...
# Checking accounts from /etc/passwd.
--WARN-- [acc021w] Login ID dnsmasq appears to be a dormant account.
--WARN-- [acc022w] Login ID nobody home directory (/nonexistent) is not
         accessible.

# Performing check of /etc/hosts.equiv and .rhosts files...

# Checking accounts from /etc/passwd...

# Performing check of .netrc files...

# Checking accounts from /etc/passwd...

# Performing common access checks for root (in /etc/default/login, /securetty, and /etc/ttytab...
--WARN-- [root001w] Remote root login allowed in /etc/ssh/sshd_config

# Performing check of PATH components...
--WARN-- [path009w] /etc/profile does not export an initial setting for PATH.
# Only checking user 'root'

# Performing check of anonymous FTP...

# Performing checks of mail aliases...
# Checking aliases from /etc/aliases.

# Performing check of `cron' entries...
--WARN-- [cron005w] Use of cron is not restricted

# Performing check of 'services' ...
# Checking services from /etc/services.
--WARN-- [inet003w] The port for service ssmtp is also assigned to service
         urd.
--WARN-- [inet003w] The port for service pipe-server is also assigned to
         service search.

# Performing NFS exports check...

# Performing check of system file permissions...
--ALERT-- [perm023a] /bin/su is setuid to `root'.
--ALERT-- [perm023a] /usr/bin/at is setuid to `daemon'.
--ALERT-- [perm024a] /usr/bin/at is setgid to `daemon'.
--WARN-- [perm001w] The owner of /usr/bin/at should be root (owned by daemon).
--WARN-- [perm002w] The group owner of /usr/bin/at should be root.
--ALERT-- [perm023a] /usr/bin/passwd is setuid to `root'.
--ALERT-- [perm024a] /usr/bin/wall is setgid to `tty'.

# Checking for known intrusion signs...
# Testing for promiscuous interfaces with /bin/ip
# Testing for backdoors in inetd.conf

# Performing check of files in system mail spool...

# Performing check for rookits...
# Running chkrootkit (/usr/sbin/chkrootkit) to perform further checks...
--WARN-- [rootkit004w] Chkrootkit has detected a possible rootkit installation
Possible Linux/Ebury - Operation Windigo installetd

# Performing system specific checks...
# Performing checks for Linux/4...

# Checking boot loader file permissions...
--WARN-- [boot02] The configuration file /boot/grub/menu.lst has group
         permissions. Should be 0600
--FAIL-- [boot02] The configuration file /boot/grub/menu.lst has world
         permissions. Should be 0600
--WARN-- [boot06] The Grub bootloader does not have a password configured.

# Checking for vulnerabilities in inittab configuration...

# Checking for correct umask settings for init scripts...
--WARN-- [misc021w] There are no umask entries in /etc/init.d/rcS

# Checking Logins not used on the system ...

# Checking network configuration
--FAIL-- [lin013f] The system is not protected against Syn flooding attacks
--WARN-- [lin017w] The system is not configured to log suspicious (martian)
         packets

# Verifying system specific password checks...

# Checking OS release...
--WARN-- [osv004w] Unreleased Debian GNU/Linux version `stretch/sid'

# Checking installed packages vs Debian Security Advisories...

# Checking md5sums of installed files

# Checking installed files against packages...
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/udev/hwdb.bin' does not belong to any package.

# Performing check of root directory...

# Checking device permissions...
--WARN-- [dev003w] The directory /dev/block resides in a device directory.
--WARN-- [dev003w] The directory /dev/char resides in a device directory.
--WARN-- [dev003w] The directory /dev/cpu resides in a device directory.
--FAIL-- [dev002f] /dev/fuse has world permissions
--WARN-- [dev003w] The directory /dev/hugepages resides in a device directory.
--FAIL-- [dev002f] /dev/kmsg has world permissions
--WARN-- [dev003w] The directory /dev/lightnvm resides in a device directory.
--WARN-- [dev003w] The directory /dev/mqueue resides in a device directory.
--FAIL-- [dev002f] /dev/rfkill has world permissions
--WARN-- [dev003w] The directory /dev/vfio resides in a device directory.

# Checking for existence of log files...
--FAIL-- [logf005f] Log file /var/log/btmp permission should be 660
--FAIL-- [logf007f] Log file /var/log/messages does not exist

# Checking for correct umask settings for user login shells...
--WARN-- [misc021w] There is no umask definition for the dash shell
--WARN-- [misc021w] There is no umask definition for the bash shell

# Checking symbolic links...

# Performing check of embedded pathnames...
20:47> Security report completed for username.
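The Syn flooding --FAIL-- in the report above can usually be cleared by turning on TCP SYN cookies (a kernel setting; check it suits your workload first).

sudo sysctl -w net.ipv4.tcp_syncookies=1
echo 'net.ipv4.tcp_syncookies=1' | sudo tee -a /etc/sysctl.conf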

More on Tripwire here.

Hardening PHP

Hardening the PHP config (and backing the PHP config up): first create an info.php file in your website root folder with this in it

<?php
phpinfo()
?>

Now look for which PHP config file is loading.

PHP Config

Back up that PHP config file.

TIP: Delete the file with phpinfo() in it as it is a security risk to leave it there.

TIP: Read the OWASP cheat sheet on using PHP securely here and securing php.ini here.

Some common security changes

file_uploads = On
expose_php = Off
error_reporting = E_ALL
display_errors          = Off
display_startup_errors  = Off
log_errors              = On
error_log = /php_errors.log
ignore_repeated_errors  = Off
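After editing php.ini, restart PHP-FPM so the changes take effect (this guide uses PHP 7.0 FPM).

sudo systemctl restart php7.0-fpm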

Don’t forget to review logs, more config changes here.

Antivirus

Yes, it is a good idea to run antivirus on Ubuntu; here is a good list of antivirus software.

I am installing ClamAV as it can be installed on the command line and is open source.

sudo apt-get install clamav

ClamAV help here.

Scan a folder

sudo clamscan --max-filesize=3999M --max-scansize=3999M --exclude-dir=/www/* -i -r /

Setup auto-update antivirus definitions

sudo dpkg-reconfigure clamav-freshclam

I set auto updates 24 times a day (every hour) via daemon updates.

Tip: Download manual antivirus definition updates. If you only have a 512MB server your update may fail, and you may want to stop freshclam/PHP/NGINX and MySQL before you update to ensure the antivirus definitions update. You can move this to a cron job and set it to update at set times instead of via the daemon to ensure updates happen.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v
Current working dir is /var/lib/clamav
Max retries == 5
ClamAV update process started at Tue Aug  8 22:22:02 2017
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1152
Software version from DNS: 0.99.2
Retrieving http://db.au.clamav.net/main.cvd
Trying to download http://db.au.clamav.net/main.cvd (IP: 193.1.193.64)
Downloading main.cvd [100%]
Loading signatures from main.cvd
Properly loaded 4566249 signatures from new main.cvd
main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Querying main.58.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/daily.cvd
Trying to download http://db.au.clamav.net/daily.cvd (IP: 193.1.193.64)
Downloading daily.cvd [100%]
Loading signatures from daily.cvd
Properly loaded 1742284 signatures from new daily.cvd
daily.cvd updated (version: 23644, sigs: 1742284, f-level: 63, builder: neo)
Querying daily.23644.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/bytecode.cvd
Trying to download http://db.au.clamav.net/bytecode.cvd (IP: 193.1.193.64)
Downloading bytecode.cvd [100%]
Loading signatures from bytecode.cvd
Properly loaded 66 signatures from new bytecode.cvd
bytecode.cvd updated (version: 308, sigs: 66, f-level: 63, builder: anvilleg)
Querying bytecode.308.82.1.0.C101C140.ping.clamav.net
Database updated (6308599 signatures) from db.au.clamav.net (IP: 193.1.193.64)

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart 

sudo /etc/init.d/clamav-freshclam start

Manual scan with a bash script

Create a bash script

sudo mkdir /scripts
sudo nano /scripts/updateandscanav.sh

# Include contents below.
# Save and quit

sudo chmod +x /scripts/updateandscanav.sh

Bash script contents to update antivirus definitions.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart

sudo /etc/init.d/clamav-freshclam start

sudo clamscan --max-filesize=3999M --max-scansize=3999M -v -r /

Edit the crontab to run the script every hour

crontab -e
1 * * * * /bin/bash /scripts/updateandscanav.sh > /dev/null 2>&1

Uninstalling ClamAV

You may need to uninstall ClamAV if you don’t have a lot of memory or find the updates are too big.

sudo apt-get remove --auto-remove clamav
sudo apt-get purge --auto-remove clamav

Setup Unattended Ubuntu Security updates

sudo apt-get install unattended-upgrades
sudo unattended-upgrades -d

At login, you should receive

0 updates are security updates.
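If you want the upgrades to run daily without prompting, a minimal sketch (assuming the stock Ubuntu 16.04 apt paths) is to create /etc/apt/apt.conf.d/20auto-upgrades with the two periodic settings below.

sudo nano /etc/apt/apt.conf.d/20auto-upgrades

# file contents (runs apt update and unattended-upgrade once a day)
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";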

Other

  • Read this awesome guide.
  • Install Fail2Ban (a minimal install sketch is below this list).
  • Do check your log files if you suspect suspicious activity.
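A minimal Fail2Ban install sketch, assuming the default Ubuntu package and its standard sshd jail (tune /etc/fail2ban/jail.local to suit your setup):

sudo apt-get install fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# enable the sshd jail in /etc/fail2ban/jail.local ([sshd] section: enabled = true)
sudo service fail2ban restart
sudo fail2ban-client status sshd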

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server


v1.92 added hardening a linux server link

Filed Under: Ads, Advice, Analitics, Analytics, Android, API, App, Apple, Atlassian, AWS, Backup, BitBucket, Blog, Business, Cache, Cloud, Community, Computer, CoronaLabs, Cost, CPI, DB, Development, Digital Ocean, DNS, Domain, Email, Feedback, Firewall, Free, Git, GitHub, GUI, Hosting, Investor, IoT, JIRA, LetsEncrypt, Linux, Malware, Marketing, mobile app, Monatization, Monetization, MongoDB, MySQL, Networking, NGINX, NodeJS, NoSQL, OS, Planning, Project, Project Management, Psychology, push notifications, Raspberry Pi, Redis, Route53, Ruby, Scalability, Scalable, Security, SEO, Server, Share, Software, ssl, Status, Strength, Tech Advice, Terminal, Transfer, Trello, Twitter, Ubuntu, Uncategorized, Video Editing, VLOG, VM, Vultr, Weakness, Web Design, Website, Wordpress Tagged With: antivirus, brute force, Firewall

Moving a CPanel domain with email to a self managed VPS and Gmail

August 3, 2017 by Simon

Below is my guide for moving away from NetRegistry CPanel domain to a self-managed server and GSuite email.

I have had www.fearby.com since 1999 on three CPanel hosts (superwerbhost in the US, Jumba in Australia, Uber in Australia (NetRegistry have acquired Uber and performance at the time of writing is terrible)). I was never informed by Uber of the sale, but my admin portal was moved from one host to another and each time performance degraded. I tried to speed up WordPress by optimizing images and installing cache plugins but nothing worked; pages were loading in around 24 seconds on https://www.webpagetest.org.

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

I had issues with a CPanel domain on the hosts (Uber/NetRegistry) as they were migrating domains and the NetRegistry chat rep said I needed to phone Uber for support. No thanks, I’m going self-managed and saving a dollar.

I decided to take ownership of my slow domain, set up my own VM, direct web traffic to it and redirect email to Gmail (I have done this before). I have set up Digital Ocean VMs (Ubuntu and CentOS), Vultr VMs and AWS VMs.

I have also had enough of Resource Limit Reached messages with CPanel and I can’t wait to…

  • not have a slow WordPress.
  • setup my own server (not a slow hosts server).
  • spend $5 less (we currently pay $25 for a CPanel website with 20GB storage total)
  • get a faster website (sub 24 seconds load time).
  • larger email mailboxes (30GB each).
  • Generate my own “SSL Labs A+ rated” certificate for $10 a year instead of $150 a year for an “SSL Labs C rated” SSL certificate from my existing hosts.

Backup

I have about 10 email accounts on my CPanel domain (using 14GB) and 2x WordPress sites.  I want to backup my emails with (Outlook Export and Thunderbird Profile backup) and backup my domain file(s) a few times before I do anything.  Once DNS is set in motion no server waits.

The Plan

Once everything is backed up I intend to set up a $5 a month Vultr VM and redirect all mail to Google G Suite (I have redirected mail before).

I will set up a Vultr web server in Sydney (following my guide here), buy an SSL certificate from Namecheap and move my WordPress sites.

Rough Plan

  • Reduce email accounts from 10x to 3x.
  • Backup emails (twice, with Thunderbird and Outlook).
  • Setup an Ubuntu VM on Vultr.
  • Signup for a Google G Suite trial.
  • Transfer my domain to Namecheap.
  • Link the domain DNS to Vultr.
  • Link the domain MX records to Google email.
  • Transfer the website.
  • Setup email accounts on Google.
  • Restore WordPress.
  • Go live.
  • Downgrade to personal G Suite before the trial expires.
  • Close down the old server.

Signing up for Google G Suite

I visited https://gsuite.google.com/ and started creating an account.

Get 20% off your first year by signing up for Google G Suite using this link: https://goo.gl/6vpuMm

Screenshots of Google G Suite setup

I created a link between G Suite and an existing GMail account.

More screenshots of Google G suite setup

Now I can create the admin account.

Picture of G Suite asking how I will log in

Tip: Don’t use any emails that are linked as secondary emails with any Google services (this won’t be allowed). It’s a well-known issue that you cannot add users’ emails that are linked to Google services (even as backup emails for Gmail; detach the email before adding it). Read more here.

Google G suite did not like my email provided

Final setup steps.

Final G suite setup screenshots.

Now I can add email accounts to G Suite.

G Suite said I’m all ready to add users

Adding email users to G Suite.

G Suite adding users

The next thing I had to do was upload a file to my domain to verify I own the domain (DNS verification is also an option).

I must say the setup and verify steps are quite easy to follow on G Suite.

Time to backup our existing CPanel site.

Screenshot of Cpanel users

Backup Step 1 (hopefully I won’t need this)

I decided to grab a complete copy of my CPanel domain with domains, databases and email accounts. This took 24 hours.

CPanel backup screenshot

Backup Step 2 (hopefully I won’t need this)

I downloaded all mail via IMAP in Outlook and Mozilla Thunderbird and exported it (Outlook Export and Thunderbird Profile backup). Google have IMAP instructions here.

DNS Changes at Namecheap

I obtained my domain EPP code from my CPanel hosts and transferred the domain name to Namecheap.

Namecheap was even nice enough to set my DNS point to my existing domain so I did not have to rush a move before DNS propagation.

P.S. The Namecheap chat staff and the Namecheap mobile app are awesome.

NameCheap DNS

Having backed up everything, I logged into Namecheap, set my DNS to “NameCheap BasicDNS”, then went to “Advanced DNS” and set the appropriate DNS records for my domain. This assumes you have set up a VM with IPV4 and IPV6 (follow my guide here).

  • A Record @ IPV4_OF_MY_VULTR_SERVER
  • A Record www IPV4_OF_MY_VULTR_SERVER
  • A Record ftp IPV4_OF_MY_VULTR_SERVER
  • AAAA Record @ IPV6_OF_MY_VULTR_SERVER
  • AAAA Record www IPV6_OF_MY_VULTR_SERVER
  • AAAA Record ftp IPV6_OF_MY_VULTR_SERVER
  • C Name www fearby.com

The Google G Suite also asked me to add these following MX records to the DNS records.

  • MX Record @ ASPMX.L.GOOGLE.COM. 1
  • MX Record @ ASPMX1.L.GOOGLE.COM. 5
  • MX Record @ ASPMX2.L.GOOGLE.COM. 5
  • MX Record @ ASPMX3.L.GOOGLE.COM. 10
  • MX Record @ ASPMX4.L.GOOGLE.COM. 10

Then it was a matter of telling Google DNS changes were made (once DNS has replicated across the US).

My advice is to set DNS changes before bed as it can take 12 hours.

Sites like https://www.whatsmydns.net/ are great for keeping track of DNS replication.
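You can also check propagation from the command line with dig; a quick sketch (yourdomain.com is a placeholder):

sudo apt-get install dnsutils             # if dig is not already installed
dig +short A yourdomain.com @8.8.8.8      # IPV4 record as seen by Google DNS
dig +short AAAA yourdomain.com @8.8.8.8   # IPV6 record
dig +short MX yourdomain.com @8.8.8.8     # the Google MX records should be listed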

Transferring WordPress

I logged into the CPanel and exported my WordPress Database (34MB SQL file).

I had to make the following php.ini changes to allow larger restore uploads with the Adminer utility (the default is 2MB). I could not get the server-side adminer.sql.gz option to restore the database.

post_max_size = 50M
upload_max_filesize = 50M

# do change back to 2MB after you restore the files to prevent DOS attacks.

I had to make the following changes to nginx.conf (to prevent 404 errors on the database upload)

client_max_body_size 50M;
# client_max_body_size 2M; Reset when done

I also had to make these changes to NGINX (sites-available/default) to allow WordPress to work

# Add index.php to the list if you are using PHP
	index index.php index.html index.htm;

location / {
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?q=$uri&$args;
        index index.php index.html index.htm;
        proxy_set_header Proxy "";
}

I had a working MySQL (I followed my guide here).

Adminer is the best PHP MySQL management utility (beats PhpMyAdmin hands down).

Restart NGINX and PHP

nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart

I had an error on database import: a nondescript error at line 1 of the script (error hint here).

A simple search and replace in the SQL fixed it.
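A hedged sketch of that kind of fix with sed; the strings to replace are hypothetical and depend on the error Adminer reports, so always work on a copy of the dump:

cp backup.sql backup-fixed.sql
sed -i 's/OLD_STRING/NEW_STRING/g' backup-fixed.sql   # replace OLD_STRING/NEW_STRING with the values from the error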

Once I had increased the PHP and NGINX upload limits to 50M I was able to upload my database backup with Adminer (just remember to import into the database that matches wp-config.php). Also, ensure your WordPress content is in place too.

The only other problem I had was WordPress gave an “Error 500”, so I moved a few plugins and all was good.

Importing Old Email

I was able to use the Google G Suite tools to import my old Mail (CPanel IMAP to Google IMAP).

Import IMAP mail to GMail

I love root access on my own server now, goodbye CPanel “Usage Limit Exceeded” errors (I only had light traffic on my site).

My self-hosted WordPress is a lot snappier now, and my server has plenty of space (and only costs $0.007 an hour for 1x CPU, 1GB RAM, 25GB SSD storage and a 1000GB data transfer quota). I use the htop command to view system processor and memory usage.

I can now have more space for content and not be restricted by tight host disk quotas or slow shared servers.  I use the pydf command to view disk space.

pydf
Filesystem  Size   Used  Avail   Use%                                               Mounted on
/dev/vda1    25G  3289M    20G   13.1 [######..........................................] /

I use ncdu to view folder usage.

Installing ncdu

sudo apt-get install ncdu
Reading package lists... Done
Building dependency tree
Reading state information... Done
ncdu is already the newest version (1.11-1build1).
0 upgraded, 0 newly installed, 0 to remove and 58 not upgraded.

Type ncdu in the folder you want to browse under.

ncdu

You can arrow up and down folder structures and view folder/file usage.

SSL Certificate

I am setting up a new multi-year SSL cert now; I will update this guide later. I had to read my SSL guide with Digital Ocean here.

I generated some certificate on my server

cd ~/
mkdir sslcsrmaster4096
cd sslcsrmaster4096/
openssl req -new -newkey rsa:4096 -nodes -keyout domain.key -out domain.csr

Sample output for  a new certificate

openssl req -new -newkey rsa:4096 -nodes -keyout dummy.key -out dummy.csr
Generating a 4096 bit RSA private key
.................................................................................................++
......++
writing new private key to 'dummy.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]: AU
State or Province Name (full name) [Some-State]: NSW
Locality Name (eg, city) []:Tamworth
Organization Name (eg, company) [Internet Widgits Pty Ltd]: Dummy Org
Organizational Unit Name (eg, section) []: Dummy Org Dept
Common Name (e.g. server FQDN or YOUR name) []: DummyOrg
Email Address []: [email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: password
An optional company name []: DummyCO
[email protected]:~/sslcsrmaster4096# cat dummy.csr
-----BEGIN CERTIFICATE REQUEST-----
MIIFAjCCAuoCAQAwgYsxCzAJBgNVBAYTAkFVMQwwCgYDVQQIDANOU1cxETAPBgNV
BAcMCFRhbXdvcnRoMRIwEAYDVQQKDAlEdW1teSBPcmcxFzAVBgNVBAsMDkR1bW15
IE9yZyBEZXB0MREwDwYDVQQDDAhEdW1teU9yZzEbMBkGCSqGSIb3DQEJARYMbWVA
ZHVtbXkub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA6PUtWkRl
+gL0Hx354YuJ5Sul2Xh+ljILSlFoHAxktKlE+OJDJAtUtVQpo3/F2rGTJWmmtef+
shortenedoutput
swrUzpBv8hjGziPoVdd8qdAA2Gh/Y5LsehQgyXV1zGgjsi2GN4A=
-----END CERTIFICATE REQUEST-----

I then uploaded the certificate to Namecheap for an SSL cert registration.

I selected DNS C Name record as a way to verify I own my domain.

I am now waiting for Namecheap to verify my domain

End of the Google G Suite Business Trial

Before the end of the 14-day trial, you will need to add billing details to keep the email working.

At this stage, you can downgrade from a $10/m business account per user to a $5/m per user account if you wish. The only loss would be storage and google app access.

Get 20% off your first year by signing up for Google G Suite using this link: https://goo.gl/6vpuMm

Before your trial ends, add your payment details and downgrade from $10/user a month business pricing to $5/user a month individual if needed.

G Suite Troubleshooting

I was able to access the new G Suite email account via gmail.com but not via Outlook 2015. I reset the password, followed the Google troubleshooting guide and used the official incoming and outgoing settings, but nothing worked.

troubleshooting 1

Google phone support suggested I enable less secure connection settings as Google firewall may be blocking Outlook. I know the IMAP RFC is many years old but I doubt Microsoft are talking to G Suite in a lazy manner.

Now I can view my messages and I can see one email that said I was blocked by the firewall. Google phone support and FAQs don’t say why Outlook 2015 SSL-based IMAP was blocked.

past email

Conclusion

Thanks to my wife who put up with my continual updates over the entire domain move. Voicing the progress helped me a lot.


V1.8 added ad link

Filed Under: Advice, DNS, MySQL, OS, Server, Ubuntu, VM, Vultr, Website, Wordpress Tagged With: C Name, DNS, gmail, mx, server, ubuntu, vm, VPS, Vulty

Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute

July 29, 2017 by Simon

I visited https://letsencrypt.org/ where it said Let’s Encrypt is a free, automated, and open SSL Certificate Authority. That sounds great, time to check them out. This may not take 1 minute on your server but it did on mine (a self-managed Ubuntu 16.04/NGINX server). If you are not sure why you need an SSL cert, read Life Is About to Get a Whole Lot Harder for Websites Without HTTPS from Troy Hunt.

FYI you can set up an Ubuntu Vultr VM here (my guide here) for as low as $2.50 a month or a Digital Ocean VM server here (my guide here) for $5 a month; billing is charged by the hour and is cheap as chips.

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

But for the best performing server read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here). Also read my recent post on setting up Lets Encrypt on sub domains.

I clicked Get Started and read the Getting Started guide. I was redirected to https://certbot.eff.org/ where it said: “Automatically enable HTTPS on your website with EFF’s Certbot, deploying Let’s Encrypt certificates.“ I was asked what web server and OS I use.

I confirmed my Linux version

lsb_release -a

Ensure your NGINX is set up (read my Vultr guide here) and you have a “server_name” specified in the “/etc/nginx/sites-available/default” file.

e.g

server_name yourdomain.com www.yourdomain.com;

I also like to set “root” to “/www” in the NGINX configuration.

e.g

root /www;

Tip: Ensure the www folder is set up first and has ownership.

mkdir /www
sudo chown -R www-data:www-data /www

Also, make and verify the contents of a /www/index.html file.

echo "Hello World..." > /www/index.html && cat /www/index.html

I then selected my environment on the site (NGINX and Ubuntu 16.04) and was redirected to the setup instructions.

FYI: I will remove mention of my real domain and substitute with thesubdomain.thedomain.com for security in the output below.

I was asked to run these commands

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx

Detailed instructions here.

Obtaining an SSL Certificate

I then ran the following command to automatically obtain and install (configure NGINX) an SSL certificate.

sudo certbot --nginx

Output

sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log

Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel):Invalid email address: .
Enter email address (used for urgent renewal and security notices)  If you
really want to skip this, you can run the client with
--register-unsafely-without-email but make sure you then backup your account key
from /etc/letsencrypt/accounts   (Enter 'c' to cancel): [email protected]

-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf. You must agree
in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel: A

-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about EFF and
our work to encrypt the web, protect its users and defend digital rights.
-------------------------------------------------------------------------------
(Y)es/(N)o: Y

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: thesubdomain.thedomain.com
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):1
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for thesubdomain.thedomain.com
Waiting for verification...
Cleaning up challenges
Deployed Certificate to VirtualHost /etc/nginx/sites-enabled/default for set(['thesubdomain.thedomain.com', 'localhost'])
Please choose whether HTTPS access is required or optional.
-------------------------------------------------------------------------------
1: Easy - Allow both HTTP and HTTPS access to these sites
2: Secure - Make all requests redirect to secure HTTPS access
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/default

-------------------------------------------------------------------------------
Congratulations! You have successfully enabled https://thesubdomain.thedomain.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=thesubdomain.thedomain.com
-------------------------------------------------------------------------------

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem. Your cert will expire on 2017-10-27. To obtain a new or tweaked version
   of this certificate in the future, simply run certbot again with
   the "certonly" option. To non-interactively renew *all* of your
   certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

That was the easiest SSL cert generation in history.

SSL Certificate Renewal (dry run)

sudo certbot renew --dry-run

Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/thesubdomain.thedomain.com.conf
-------------------------------------------------------------------------------
Cert not due for renewal, but simulating renewal for dry run
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for thesubdomain.thedomain.com
Waiting for verification...
Cleaning up challenges

-------------------------------------------------------------------------------
new certificate deployed with reload of nginx server; fullchain is
/etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem
-------------------------------------------------------------------------------
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)

IMPORTANT NOTES:
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.

SSL Certificate Renewal (Live)

certbot renew

The Lets Encrypt SSL certificate is only a 90-day certificate.

Again: The Lets Encrypt SSL certificate is only a 90-day certificate.

I’ll run “certbot renew” again in 2 months’ time to manually renew the certificate (and configure my higher security configuration (see below)).
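To see exactly when the certificate expires (so the manual renewal isn’t missed), here is a quick check with openssl; thesubdomain.thedomain.com is the placeholder used above.

echo | openssl s_client -connect thesubdomain.thedomain.com:443 -servername thesubdomain.thedomain.com 2>/dev/null | openssl x509 -noout -dates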

Certbot NGINX Config Changes (what did it do?)

It’s nice to see forced HTTPS added to the configuration.

if ($scheme != "https") {
   return 301 https://$host$request_uri;
} # managed by Certbot

Cert stuff added

    listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/thesubdomain.thedomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

Contents of /etc/letsencrypt/options-ssl-nginx.conf

ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";

This contains too many legacy cyphers for my liking.

I changed /etc/letsencrypt/options-ssl-nginx.conf to tighten ciphers and add TLS 1.3 (as my NGINX Supports it).

ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;

ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

Enabling OCSP Stapling and Strict Transport Security in NGINX

I added the following to /etc/nginx/sites-available/default

# OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

Restart NGINX.

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart
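To confirm the stapled OCSP response is actually being served, a hedged check with openssl (placeholder domain again):

echo | openssl s_client -connect thesubdomain.thedomain.com:443 -servername thesubdomain.thedomain.com -status 2>/dev/null | grep -i "OCSP"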

SSL Labs SSL Score

I am happy with this.

Read my guide on Beyond SSL with Content Security Policy, Public Key Pinning etc

Automatic SSL Certificate Renewal

There are ways to auto renew the SSL certs floating around YouTube but I’ll stick to manual issue and renewals of SSL certificates.

SSL Checker Reports

I checked the certificate with other SSL checking sites.

NameCheap SSL Checker – https://decoder.link/sslchecker/ (Passed). I did notice that the certificate will expire in 89 days (I was not aware of that). I guess a free 90-day certificate for a noncritical server is OK (as long as I renew it in time).

CertLogik – https://certlogik.com/ssl-checker/ (OK)

Comodo – https://sslanalyzer.comodoca.com (OK)

Lets Encrypt SSL Certificate Pros

  • Free.
  • Secure.
  • Easy to install.
  • Easy to renew.
  • Good for local, test or development environments.
  • It auto-detected my domain name (even a subdomain)

Lets Encrypt SSL Certificate Cons

  • The auto install process does not setup OCSP Stapling (I configured NGINX but the certificate does not support it, maybe to limit the Certificate Authority resources handling certificate revocation checks).
  • The auto install process does not setup HSTS. (I enabled it in NGINX manually).
  • The auto install process does not setup HPKP. More on enabling Public Key Pinning in NGINX here.
  • Too many cyphers installed by default.
  • No TLS 1.3 is set up by default by the certbot secure auto install in my NGINX config (even though my NGINX supports it). More on enabling TLS 1.3 in NGINX here.

Read my guide on Beyond SSL with Content Security Policy, Public Key Pinning etc

I’d recommend you follow these Twitter security users

http://twitter.com/GibsonResearch

https://twitter.com/troyhunt

https://twitter.com/0xDUDE

Troubleshooting

I had one server where certbot failed to verify the SSL and said I needed a publicly routable IP (it was) and that the firewall needed to be disabled (it was). I checked the contents of “/etc/nginx/sites-available/default” and it appeared no additional SSL values were added (not even listening on port 443).

Certbot Error

I am viewing: /var/log/letsencrypt/letsencrypt.log

Forcing Certificate Renewal 

Run the following command to force a certificate to renew outside the crontab renewal window.

certbot renew --force-renew

Conclusion

Free is free, but I’d still use paid certs from Namecheap for important stuff/sites; no OCSP stapling support on the CA and the 90-day certs are a deal breaker for me. The Lets Encrypt certificate is only a 90-day certificate (I’d prefer a 3-year certificate).

A big thank you to the Electronic Frontier Foundation for making this possible and providing a free service (please donate to them).

Lets Encrypt does recommend you renew certs every 60 days or use auto-renew tools but rate limits are in force and Lets Encrypt admit their service is young (will they stick around)? Even Symantec SSL certs are at risk.

Happy SSL’ing.

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server

fyi, I followed this guide setting up Let’s Encrypt on Ubuntu 18.04.

Read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).


v1.8 Force Renew Command

v1.7 Ubuntu 18.04 info

V1.62 added hardening Linux server link

Filed Under: AWS, Cloud, Cost, Digital Ocean, LetsEncrypt, ssl, Ubuntu, VM, Vultr Tagged With: free, lets encrypt, ssl certificate

Setting up a Vultr VM and configuring it

July 29, 2017 by Simon

Below is my guide on setting up a Vultr VM and configuring it with a static IP, NGINX, MySQL, PHP and an SSL certificate.

I have blogged about setting up CentOS and Ubuntu servers on Digital Ocean before.  Digital Ocean does not have data centres in Australia and this kills scalability.  AWS is good but 4x the price of Vultr. I have also blogged about setting up an AWS server here. I tried to check out Alibaba Cloud but the verification process was broken, so I decided to check out Vultr.

Update (June 2018): I don’t use Vultr anymore, I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

UpCloud is way faster.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Setting up a  Vultr Server

1) Go to http://www.vultr.com/?ref=7192231 and create your own server today.

2) Create an account at Vultr.

Vultr signup

3) Add a  Credit Card

Vultr add cc

4) Verify your email address, Check https://my.vultr.com/promo/ for promos.

5) Create your first instance (for me it was an Ubuntu 16.04 x64 Server,  2 CPU, 4Gb RAM, 60Gb SSD, 3,000MB Transfer server in Sydney for $20 a month). I enabled IPV6, Private Networking, and  Sydney as the server location. Digital Ocean would only have offered 2GB ram and 40GB SSD at this price.  AWS would have charged $80/w.

Vultr deploy vm

2 Cores and 4GB ram is what I am after (I will use it for NGINX, MySQL, PHP, MongoDB, OpCache and Redis etc).

Vultr 20 month

6) I followed this guide, generated a local SSH key and added it to Vultr.

snip

cd ~/.ssh
ls -al
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa): vultr_rsa    
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in vultr_rsa.
Your public key has been saved in vultr_rsa.pub.
cat vultr_rsa.pub 
ssh-rsa AAAAremovedoutput

Vultr add ssh key

7) I was a bit confused by the UI when adding the SSH key on the in-progress deploy server screen (the SSH key was added but was not highlighted, so I recreated the server to deploy and the SSH key now appears).

Vultr ass ssh key 2

Now time to deploy the server.

Vultr deploy now

Deploying now.

Vultr my servers

My Vultr server is now deployed.

Vultr server information

I connected to it with my SSH program on my Mac.

Vultr ssh

Now it is time to redirect my domain (purchased through Namecheap) to the new Vultr server IP.

DNS: @ A Name record at Namecheap

Vultr namecheap

Update: I forgot to add an A Name for www.

Vultr namecheap 2

DNS: Vultr (added the Same @ and www A Name records (fyi “@” was replaced with “”)).

Vultr dns

I waited 60 minutes and DNS propagation happened. I used the site https://www.whatsmydns.net to see where the DNS replication was and I was receiving an error.

Setting the Server’s Time and Timezone (Ubuntu)

I checked the time on the server but it was wrong (20 hours behind)

sudo hwclock --show
Tue 25 Jul 2017 01:29:58 PM UTC  .420323 seconds

I manually set the timezone to Sydney Australia.

dpkg-reconfigure tzdata

I installed the NTP time syncing service

sudo apt-get install ntp

I configured the NTP service to use Australian servers (read this guide).

sudo nano /etc/ntp.conf

# added
server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org

I checked the time after restarting NTP.

sudo service ntp restart
sudo hwclock --show

The time is correct 🙂

Installing the NGINX Web Server (Ubuntu)

More on the differences between Apache and nginx web servers.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

Installing NodeJS  (Ubuntu)

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

Installing MySQL  (Ubuntu)

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
>mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper
sudo mysql_secure_installation
>Y (Validate plugin)
>2 (Strong passwords)
>N (Don't change root password)
>Y (Remove anon accounts)
>Y (No remote root login)
>Y (Remove test DB)
>Y (Reload)
service mysql status
> mysql.service - MySQL Community Serve

Install PHP 7.x and PHP7.0-FPM  (Ubuntu)

sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm

php.ini

sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
> edit: upload_max_filesize = 8M
> edit: max_input_vars = 1000
> edit: memory_limit = 128M
# medium server: memory_limit = 256M
# large server: memory_limit = 512M

Restart PHP

sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

sudo apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

Initial NGINX Configuring – Pre SSL and Security (Ubuntu)

Here is a good guide on setting up NGINX for performance.

mkdir /www

Edit the NGINX configuration

sudo nano /etc/nginx/nginx.conf

File Contents: /etc/nginx/nginx.conf

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
        worker_connections 4000;
        use epoll;
        multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel faster then read() + write()
        sendfile on;

        # send headers in one peace, its better then sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;


        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session (defines the zone used by limit_req above)
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

       # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # headerbuffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;


        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache informations about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

       # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;


        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File Contents: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
 
server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6
 
        access_log /var/log/nginx/myservername.com.log;
 
        root /usr/share/nginx/www;
        index index.php index.html index.htm;
 
        server_name www.myservername.com myservername.com localhost;
 
        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
 
        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;
 
 
        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://myservername.com/$1 permanent;
 
        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 512M";
 
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

I talked to Dmitriy Kovtun (SSL CS) on the Namecheap Chat to resolve a  privacy error (I stuffed up and I am getting the error “Your connection is not private” and “NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN”).

Vultr chrome privacy

SSL checker says everything is fine.

Vultr ssl checker

I checked the certificate strength with SSL Labs (OK).

Vultr ssl labs

Test and Reload NGINX (Ubuntu)

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Create a test PHP file

<?php
phpinfo();
?>

It Works.

Install Utils (Ubuntu)

Install an interactive folder size program

sudo apt-get install ncdu
sudo ncdu /

Vultr ncdu

Install a better disk check utility

sudo apt-get install pydf
pydf

Vultr pydf

Display startup processes

sudo apt-get install rcconf
sudo rcconf

Install JSON helper

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Increase the console history (add these to ~/.bashrc)

HISTSIZE=10000
HISTCONTROL=ignoredups

I rebooted to see if PHP started up.

sudo reboot

OpenSSL Info (Ubuntu)

Read about updating OpenSSL here.

Update Ubuntu

sudo apt-get update
sudo apt-get dist-upgrade

Vultr Firewall

I configured the server firewall at Vultr and ensured it was setup by clicking my server, then settings then firewall.

Vultr firewall

I then checked open ports with https://mxtoolbox.com/SuperTool.aspx
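You can also check the common ports yourself from another machine with nmap; a quick sketch (www.myservername.com is the placeholder used in this guide):

sudo apt-get install nmap
nmap -Pn -p 22,80,443 www.myservername.com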

Assign a Domain (Vultr)

I assigned a  domain with my VM at https://my.vultr.com/dns/add/

Vultr add domain

Charts

I reviewed the server information at Vultr (nice).

Vultr charts

Static IP’s

You should also setup a static IP in /etc/network/interfaces as mentioned in the settings for your server https://my.vultr.com/subs/netconfig/?SUBID=XXXXXX

Hello,

Thank you for contacting us.

Please try setting your OS's network interface configuration for static IP assignments in this case. The blue "network configuration examples" link on the "Settings" tab includes the necessary file paths and configurations. This configuration change can be made via the provided web console.

Setting your instance's IP to static will prevent any issues that your chosen OS might have with DHCP lease failure. Any instance with additional IPs or private networking enabled will require static addresses on all interfaces as well. 

--
xxxxx xxxxxx
Systems Administrator
Vultr LLC

Backup your existing Ubuntu 16.04 DHCP Network Configuration

cp /etc/network/interfaces /interfaces.bak

I would recommend you log a Vultr support ticket and get the right IPV4/IPV6 details to paste into /etc/network/interfaces while you can access your IP.

It is near impossible to configure the static IP when the server is refusing a DHCP IP address (this happened to me after 2 months).

If you don’t have time to set up a static IP you can roll with automatic DHCP IP assignment, and when your server fails to get an IP you can manually run the following command (changing the network adapter to your network adapter) from the web root console.

dhclient -1 -v ens3 

I logged a ticket for each of my other servers to get the contents of /etc/network/interfaces.

Support Ticket Contents:

What should the contents of /etc/network/interfaces be for xx.xx.xx.xx (Ubuntu: 16.04, Static)

Q1) What do I need to add to the /etc/network/interfaces file to set a static IP for server www.myservername.com/xx.xx.xx.xx/SUBID=XXXXXX

The server's IPV4 IP is: XX.XX.XX.XX
The server's IPV6 IP is: xx:xx:xx:xx:xx:xx:xx:xx (Network: xx:xx:xx:xx::, CIRD: 64, Recursive DNS: None)
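While waiting for the reply, here is a hedged sketch of what a static IPV4 stanza in /etc/network/interfaces generally looks like on Ubuntu 16.04 (ifupdown). The adapter name (ens3) and every address below are placeholders; use the exact values Vultr support provides.

auto ens3
iface ens3 inet static
    address XX.XX.XX.XX
    netmask 255.255.255.0
    gateway XX.XX.XX.1
    dns-nameservers 8.8.8.8 8.8.4.4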

Install an FTP Server (Ubuntu)

I decided on Pure-FTPd based on this advice.  I did try vsftpd but it failed. I used this guide to set up FTP and a user. I was able to login via FTP but decided to set up C9 instead, so I stopped the FTP service.

Connected to my Vultr domain with C9.io
I logged in and created a new remote SSH connection to my Vultr server, copied the SSH key and added it to my Vultr authorized keys file.
sudo nano authorized_keys
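An equivalent non-interactive way to add the key, assuming the C9 public key was copied to the server as ~/c9_key.pub (a hypothetical file name):

cat ~/c9_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys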

I opened the site with C9 and it setup my environment.

I do love C9.io

Vultr c9

Add an  SSL certificate (Reissue existing SSL cert at NameCheap)

I had a chat with Konstantin Detinich (SSL CS) on Namecheap’s chat and he helped me through reissuing my certificate.

I have a three-year certificate so I reissued it.  I will follow the Namecheap reissue guide here.

I recreated certificates

cd /etc/nginx/
mkdir ssl
cd ssl
sudo openssl req -newkey rsa:2048 -nodes -keyout mydomain_com.key -out mydomain_com.csr
cat mydomain_com.csr

I posted the CSR into Name Cheap Reissue Certificate Form.

Vultr ssl cert

Tip: Make sure your certificate is using the same name as the old certificate.

I continued the Namecheap prompts and specified HTTP domain control verification.

Namecheap Output: When you submit your info for this step, the activation process will begin and your SSL certificate will be available from your Domain List. To finalize the activation, you’ll need to complete the Domain Control Validation process. Please follow the instructions below.

Now I will wait for the verification instructions.

Update: I waited a few hours and the instructions never came, so I logged in to the NameCheap portal, downloaded the HTTP domain verification file and uploaded it to my domain.

Vultr ssl cert 2

I forgot to add the text file to the NGINX allowed index files list.

I added the following file:  /etc/nginx/sites-available/default

index index.php index.html index.htm 53guidremovedE5.txt;

I reloaded and restarted NGINX

sudo nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart

The file now loaded over port 80. I then checked Namecheap chat (Alexandra Belyaninova) to speed up the HTTP Domain verification and they said the text file needs to be placed in /.well-known/pki-validation/ folder (not specified in the earlier steps).

http://mydomain.com/.well-known/pki-validation/53gudremovedE5.txt and http://www.mydoamin.com/.well-known/pki-validation/53guidremovedE5.txt

The certificate reissue was all approved and available for download.

Comodo

I uploaded all files related to the ssl cert to /etc/nginx/ssl/ and read my guide here to refresh myself on what is next.

I ran this command in the folder /etc/nginx/ssl/ to generate a DH prime rather than downloading a nice new one from here.

openssl dhparam -out dhparams4096.pem 4096

This Namecheap guide will tell you how to activate a new certificate and how to generate a CSR file. Note: the guide to the left will generate a 2048 bit key, which will cap your SSL certificate’s security to a B at https://www.ssllabs.com/ssltest/, so I recommend you generate a 4096 bit CSR key and a 4096 bit Diffie-Hellman key.
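A sketch of generating the 4096 bit key and CSR instead (same file names as the reissue commands above):

cd /etc/nginx/ssl
sudo openssl req -newkey rsa:4096 -nodes -keyout mydomain_com.key -out mydomain_com.csr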

I used https://certificatechain.io/ to generate a valid certificate chain.

My SSL /etc/nginx/ssl/sites-available/default config

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
	listen 80 default_server;
	listen [::]:80 default_server;

        error_log /www-error-log.txt;
        access_log /www-access-log.txt;
	
	listen 443 ssl;

	limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

	root /www;
        index index.php index.html index.htm;

	server_name www.thedomain.com thedomain.com localhost;

        # ssl on; this causes too many http redirects
        ssl_certificate /etc/nginx/ssl/trust-chain.crt;
        ssl_certificate_key /etc/nginx/ssl/thedomain_com.key;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        server_tokens off;
        ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        
        # Set SSL caching and storage/timeout values:
        ssl_session_timeout 4h;
        ssl_session_tickets off; # Requires nginx >= 1.5.9
        
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx => 1.3.7
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

	add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        add_header X-Content-Type-Options nosniff;
  
        # Use Google DNS
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://thedomain.com/$1 permanent;

	location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        location ~ /\.ht {
               deny all;
        }
	
}

My /etc/nginx/nginx.conf Config

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
	worker_connections 4000;
	use epoll;
	multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel faster then read() + write()
        sendfile on;

        # send headers in one peace, its better then sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;

        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP 
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

        # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # headerbuffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;

        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache informations about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s; 
        open_file_cache_valid 30s; 
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

        # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
	types_hash_max_size 2048;
	# server_tokens off;
	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_prefer_server_ciphers on;

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	
	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

Namecheap support checked my certificate with https://decoder.link/sslchecker/ (no errors). Other SSL checkers are https://certlogik.com/ssl-checker/ and https://sslanalyzer.comodoca.com/

I was given a new certificate to try by Namecheap.

Namecheap Chat (Dmitriy) also recommended I clear my google cache as they did not see errors on their side (this worked).

SSL Security

Read my past guide on adding SSL to a Digital Ocean server.

I am checking my site with https://www.ssllabs.com/ssltest/ (OK).

My site came up clean with shodan.io

Securing Ubuntu in the Cloud

Read my guide here.

OpenSSL Version

I checked the OpenSSL version to see if it was up to date.

openssl version
OpenSSL 1.1.0f  25 May 2017

Yep, all up to date (see https://www.openssl.org/).

I will check often.

Install MySQL GUI

I installed the Adminer MySQL GUI tool (uploaded it to the web server).

Don’t forget to check your server’s IP with www.shodan.io to ensure there are no back doors.

I had to increase PHP’s upload_max_filesize setting temporarily to allow me to restore a database backup. I edited /etc/php/7.0/fpm/php.ini and then reloaded PHP.
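For example, these are the php.ini values to bump (64M is just an example; set it to suit the size of your backup):

upload_max_filesize = 64M
post_max_size = 64M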

sudo service php7.0-fpm restart

I used Adminer to restore a database.

Support

I found Vultr’s email support was great; I had an email reply in minutes. The Namecheap chat was awesome too. I did have an unplanned reboot on a Vultr node that one of my servers was on (let’s hope the server survives).

The Vultr service status page is located here.

Conclusion

I now have a secure server with MySQL and other web resources ready to go. I will now add some remote monitoring and restore a website along with NodeJS and MongoDB.


Definitely give Vultr a go (they even have data centers in Sydney). Sign up with this link http://www.vultr.com/?ref=7192231

Namecheap is great for certificates and support.


Vultr API

Vultr has a great API that you can use to automate status pages or obtain information about your VM instances.

API Location: https://www.vultr.com/api/

First, you will need to activate API access and allow your IP addresses (IPv4 and IPv6) in Vultr. At first, I only allowed IPv4 addresses, but it looks as if Vultr uses IPv6 internally, so add your IPv6 address too (if you are hitting the API from a Vultr server). Beware that the JSON returned from the https://api.vultr.com/v1/server/list API has URLs (and tokens) to your virtual console and root passwords, so ensure your API key is secured.

Here is some working PHP code to query the API

<?php
// Query the Vultr API for a list of your servers
$ch = curl_init();
$headers = [
     'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);

// Raw JSON response
print $server_output;

// Decode into an associative array for further use
$array = json_decode($server_output, true);
print_r($array);
?>

Your server will need curl installed and you will need to enable URL opening in your php.ini file.

allow_url_fopen = On
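If the PHP curl extension is missing you can install it (the package name below assumes PHP 7.0 on Ubuntu, as used earlier in this guide):

sudo apt-get install php7.0-curl
sudo service php7.0-fpm restart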

Once you have curl (and the API) working via PHP, this code will return data from the API for a nominated server (replace ‘123456’ with the id from your server at https://my.vultr.com/).

$ch = curl_init();
$headers = [
'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

// Replace 123456 with the ID from your server at https://my.vultr.com/

//Get Server Location
$vultr_location = $array['123456']['location'];
echo "Location: $vultr_location <br/>";

//Get Server CPU Count
$vultr_cpu = $array['123456']['vcpu_count'];
echo "CPUs: $vultr_cpu <br/>";

//Get Server OS
$vultr_os = $array['123456']['os'];
echo "OS: $vultr_os<br />";

//Get Server RAM
$vultr_ram = $array['123456']['ram'];
echo "Ram: $vultr_ram<br />";

//Get Server Disk
$vultr_disk = $array['123456']['disk'];
echo "Disk: $vultr_disk<br />";

//Get Server Allowed Bandwidth
$vultr_bandwidth_allowed = $array['123456']['allowed_bandwidth_gb'];

//Get Server Used Bandwidth
$vultr_bandwidth_used = $array['123456']['current_bandwidth_gb'];

echo "Bandwidth: $vultr_bandwidth_used GB of $vultr_bandwidth_allowed GB<br />";

//Get Server Power Status
$vultr_power = $array['123456']['power_status'];
echo "Power State: $vultr_power<br />";

//Get Server State
$vultr_state = $array['123456']['server_state'];
echo "Server State: $vultr_state<br />";

A raw response from https://api.vultr.com/v1/server/list looks like this:

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 30 Jul 2017 12:02:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close
X-User: [email protected]
Expires: Sun, 30 Jul 2017 12:02:33 GMT
Cache-Control: no-cache
X-Frame-Options: DENY
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff

{"123456":{"SUBID":"123456","os":"Ubuntu 16.04 x64","ram":"4096 MB","disk":"Virtual 60 GB","main_ip":"###.###.###.###","vcpu_count":"2","location":"Sydney","DCID":"##","default_password":"removed","date_created":"2017-01-01 09:00:00","pending_charges":"0.01","status":"active","cost_per_month":"20.00","current_bandwidth_gb":0.001,"allowed_bandwidth_gb":"3000","netmask_v4":"255.255.254.0","gateway_v4":"###.###.###.#,"power_status":"running","server_state":"ok","VPSPLANID":"###","v6_main_ip":"####:####:####:###:####:####:####:####","v6_network_size":"##","v6_network":"####:####:####:###:","v6_networks":[{"v6_main_ip":"####:####:####:###:####:####::####","v6_network_size":"##","v6_network":"####:####:####:###::"}],"label":"####","internal_ip":"###.###.###.##","kvm_url":"removed","auto_backups":"no","tag":"Server01","OSID":"###","APPID":"#","FIREWALLGROUPID":"########"}}

I recommend the Paw software for any API testing locally on OSX.

Bonus: Converting Vultr Network totals from the Vultr API with PHP

Add the following as a global PHP function in your PHP file. I found the number formatting solution here.

<?php
// Found at https://stackoverflow.com/questions/2510434/format-bytes-to-kilobytes-megabytes-gigabytes 

function swissConverter($value, $format = true, $precision = 2) {
    // Below converts value into bytes depending on input (specify mb, for 
    // example)
    $bytes = preg_replace_callback('/^\s*(\d+)\s*(?:([kmgt]?)b?)?\s*$/i', 
    function ($m) {
        switch (strtolower($m[2])) {
          case 't': $m[1] *= 1024;
          case 'g': $m[1] *= 1024;
          case 'm': $m[1] *= 1024;
          case 'k': $m[1] *= 1024;
        }
        return $m[1];
        }, $value);
    if(is_numeric($bytes)) {
        if($format === true) {
            //Below converts bytes into proper formatting (human readable 
            //basically)
            $base = log($bytes, 1024);
            $suffixes = array('', 'KB', 'MB', 'GB', 'TB');   

            return round(pow(1024, $base - floor($base)), $precision) .' '. 
                     $suffixes[floor($base)];
        } else {
            return $bytes;
        }
    } else {
        return NULL; //Change to preferred response
    }
}
?>
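For example (the byte count below is just a sample value):

<?php
// 123456789 bytes formats to roughly "117.74 MB"
echo swissConverter(123456789);
?>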

Now you can query the https://api.vultr.com/v1/server/bandwidth?SUBID=123456 API and get bandwidth information related to your server (replace 123456 with your server’s ID).

<h4>Network Stats:</h4><br />
<?php

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Change 123456 to your server ID

curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/bandwidth?SUBID=123456');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

//Get 123456 Incoming Bytes Day Before Yesterday
$vultr123456_incoming_day_before = $array['incoming_bytes'][0][1];
echo " &nbsp; &nbsp; Incoming Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_incoming_day_before, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Yesterday
$vultr123456_incoming_yesterday = $array['incoming_bytes'][1][1];
echo " &nbsp; &nbsp; Incoming Data Total Yesterday: <strong>" . swissConverter($vultr123456_incoming_yesterday, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Today
$vultr123456_incoming_today = $array['incoming_bytes'][2][1];
echo " &nbsp; &nbsp; Incoming Data Total Today: <strong>" . swissConverter($vultr123456_incoming_today, true) . "</strong><br/><br/>";

//Get 123456 Outgoing Bytes Day Before Yesterday
$vultr123456_outgoing_day_before = $array['outgoing_bytes'][0][1];
echo " &nbsp; &nbsp; Outgoing Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_outgoing_day_before, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Yesterday
$vultr123456_outgoing_yesterday = $array['outgoing_bytes'][1][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr123456_outgoing_yesterday, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Today
$vultr123456_outgoing_today = $array['outgoing_bytes'][2][1];
echo " &nbsp; &nbsp; Outgoing Data Total Today: <strong>" . swissConverter($vultr123456_outgoing_today, true) . "</strong><br/>";

echo "<br />";
?>

Bonus: Pinging a Vultr server from the Vultr API with PHP’s fsockopen function

Paste the ping function globally

<?php
function pingfsockopen($host, $port = 443, $timeout = 3)
{
        // Suppress the PHP warning on failure; fsockopen returns FALSE if the host/port is unreachable
        $fsock = @fsockopen($host, $port, $errno, $errstr, $timeout);
        if ( ! $fsock )
        {
                return FALSE;
        }
        else
        {
                // Close the socket we just opened, then report the host as up
                fclose($fsock);
                return TRUE;
        }
}
?>

Now you can grab the server’s IP from https://api.vultr.com/v1/server/list and then ping it (on SSL port 443).

//Get Server 123456 IP
$vultr_mainip = $array['123456']['main_ip'];
$up = pingfsockopen($vultr_mainip);
if( $up ) {
        echo " &nbsp; &nbsp; Server is UP.<br />";
}
else {
        echo " &nbsp; &nbsp; Server is DOWN<br />";
}

Setup Google DNS

sudo nano /etc/network/interfaces

Add this line under your interface definition:

dns-nameservers 8.8.8.8 8.8.4.4
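For context, the dns-nameservers option sits inside the interface stanza; a minimal /etc/network/interfaces sketch (the interface name ens3 is an assumption, yours may be eth0 or similar):

auto ens3
iface ens3 inet dhcp
    dns-nameservers 8.8.8.8 8.8.4.4

Restart networking (or reboot) for the change to take effect.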

What have I missed?

Read my blog post on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

Read my blog post on securing your Ubuntu server in the cloud.

Read my blog post on running an Ubuntu system scan with Lynis.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.993 added log info

Filed Under: Cloud, Development, DNS, Hosting, MySQL, NodeJS, OS, Server, ssl, Ubuntu, VM Tagged With: server, ubuntu, vultr

Setting up a Vultr VM and configuring it with Runcloud.io

July 27, 2017 by Simon

I have set up a Vultr VM manually (guide here). I decided to set up a VM software stack with Runcloud to see if it is as secure and fast (and saves me time).

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

I deployed an Ubuntu server in NY for $2.50 a month on Vultr (I chose NY over Sydney as the Sydney $2.50 servers were sold out).

I then signed up for Runcloud.

I opened up ports 80, 443 and 34210 as recommended by Runcloud.
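If you manage the firewall yourself with ufw, the equivalent rules would look something like this:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 34210/tcp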

I then connected to my Vultr server with Runcloud.

Then Runcloud asked that I run a script on the server as root.

Tip: Don’t run the script on a different IP like I did.

It appears I accidentally ran the Runcloud install command on the wrong IP. What did it install/change? I looked to see if Runcloud offers an uninstallation command. Nope.

A snippet from Runcloud’s documentation:

Deleting Your Server

Deleting your server from RunCloud is permanent. You can’t reconnect it back after you have deleted your server. The only way to reconnect is to format your server and run the RunCloud’s installation again.

To completely uninstall RunCloud, you must reformat your server. We don’t have any uninstallation script (yet).

No uninstall?

Time to check out the RunCloud dashboard at https://manage.runcloud.io to see what it looks like.

View Server Configuration

I was able to start an NGINX installation/web server within a few clicks (it installed to /etc/nginx-rc).

Runcloud.io Pros

  • Setup was quick.
  • Dashboard looks pretty

Runcloud.io Cons

  • My root access is no longer working (what happened?). I did notice that Fail2Ban was blocking loads of IPs.
  • I can’t seem to edit website files with runcloud.io?
  • Limited by the UI (I could create a database and a database user but not set database user permissions or assign database users to databases; there is a guide but no GUI for adding a user to a DB).
  • I have to trust that Runcloud have set things up securely.
  • The CPanel UI has more options than Runcloud IMHO.
  • Other free server monitoring tools exist, like https://www.phpservermonitor.org
  • RunCloud.io calls its documentation “RTFM”? Seriously (are customers really that bad)? https://runcloud.io/rtfm/

Domain

I linked a domain to the IP; now I just need to wait (for DNS to propagate) before continuing.

End of Guide

I have been locked out of my Runcloud-managed domain, so I will stick to manually set up servers.

Being locked out is good for security I guess 🙁

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

V1.4 added tags and fixed typos.

v0.31 Domain linked

Filed Under: Cloud, Runcloud, Ubuntu Tagged With: digital ocean, MySQL, runcloud, UpCloud, vultr

How to upgrade an AWS free tier EC2 t2.micro instance to an on demand t2.medium server

July 9, 2017 by Simon

Amazon Web Services have a great free tier where you can develop with very little cost. The free tier Linux server is a t2.micro instance (1 CPU, low to moderate IO, 1 GB memory and 750 hours of usage a month). The free tier limits are listed here. Before you upgrade, can you optimize or cache content to limit usage?

When you are ready to upgrade resources you can use this cost calculator to set your region (I am in the Sydney region) and estimate your new costs.


You can also check out the ec2instances.info website for regional prices.

Current Server Usage

I have an NGINX server with a NodeJS back end powering an API that talks to a local MySQL database and a remote MongoDB cluster (on AWS via http://www.mongodb.com/cloud/). The MongoDB cluster was going to cost about $120 a month (too high for testing an app before launch). The free tier AWS instance is running below the 20% usage limit so this costs nothing (on the AWS free tier).

You can monitor your instance usage and credit in the Amazon Management Console; keep an eye on the CPU Usage, CPU Credit Usage and CPU Credit Balance metrics. If the CPU usage grows and the credit balance drops you may need to upgrade your server to prevent usage charges.


I prefer the optional htop command in Ubuntu to keep track of memory and process usage.
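htop is not installed by default on Ubuntu, but it is a quick install:

sudo apt-get install htop
htop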

Basic information from the AWS t2.micro (idle):

Older htop screenshot of a dual CPU VM being stressed:

Future Requirements

I plan on moving a non-clustered MongoDB database onto an upgraded AWS free tier instance to develop and test there; when (and if) a scalable cluster is needed, I can move the database back to http://www.mongodb.com/cloud/.

First I will need to upgrade my Free tier EC2 instance to be able to install MongoDB 3.2 and power more local processing. Upgrading will also give me more processing power to run local scripts (instead of hosting them in Singapore on Digital Ocean).

How to upgrade a t2.micro to t2.medium

The t2.medium server has 2 CPUs, 4 GB of memory and low to moderate IO.

  1. Backup your server in AWS (and take a manual backup).
  2. Shutdown the server with the command sudo shutdown now
  3. Login to your AWS Management Console and click Instances (note: your instance will say it is running but it has shut down; test with SSH and the connection will be refused).
  4. Click Actions, Image then Create Image.
  5. Name the image (select any volumes you have created) and click Create Image.
  6. You can follow the progress of the snapshot creation under the Snapshots menu.
  7. When the volumes have been snapshotted you can stop the instance.
  8. Now you can change the instance type by right clicking on the stopped instance and selecting Instance Settings then Change Instance Type (an optional AWS CLI sketch follows this list).
  9. Choose the new desired instance type and click Apply (double check your cost estimates).
  10. You can now start your instance.
  11. After a few minutes, you can log in to your server and confirm the cores/memory have increased.
  12. Note: Your server’s CPU credits will be reset and your server will start with a lower CPU credit balance. Depending on your server you will receive credits every hour up to your maximum cap.
  13. Note: Don’t forget to configure your software to use more RAM or cores if required.
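If you prefer the command line, the same resize can be scripted with the AWS CLI (a rough sketch; the instance ID below is a placeholder and you should still take the image/snapshot backup first):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t2.medium\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0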

Misc

Confirm processors via the command line:

grep ^processor /proc/cpuinfo | wc -l 
>2

My NGINX configuration changes.

worker_processes 2;
worker_cpu_affinity auto;
...
events {
	worker_connections 4096;
	...
}
...

Find the open file limit of your system (for NGINX worker_connections maximum)

ulimit -n
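If ulimit reports a number lower than your worker_connections value, NGINX can raise its own file descriptor limit with the worker_rlimit_nofile directive at the top level of nginx.conf (the value below is only an example):

worker_rlimit_nofile 8192;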

That’s all for now.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.0 DRAFT Initial Post

Filed Under: Advice, AWS, Cloud, Computer, Server, Ubuntu, VM Tagged With: AWS, instance, upgrade

Ubuntu Desktop OS for Developers

June 25, 2017 by Simon

Did you know you can download and install a free operating system (free Windows Alternative) from https://www.ubuntu.com/ and use it on your own computer or as a virtual machine?

Ubuntu is a common operating system on cloud providers such as AWS and Digital Ocean, so installing it locally is a good idea if you are a developer.

Go to https://www.ubuntu.com/ and click the Desktop for Developers menu item.

Then click the Download Button next to Ubuntu 16.04.2 LTS.

Choose your donation amount (set nothing if you have donated before or cannot afford it).

Click the “take me to the download” link.

Wait for the download to start or click download now.

The download is 1.4 GB in size and may take a while. The file is in ISO format (an ISO is an image of a CD; burn it with your favourite CD-burning package). Burnt ISO CDs are bootable.

You can either boot and install Ubuntu alongside your existing operating system, or run it in a virtual environment (e.g. Parallels on Mac OS or VirtualBox on Windows). Warning: you can accidentally delete your existing operating system and files if you are not sure what you are doing.

I decided to run Ubuntu on my Mac inside Parallels as a virtual machine (this used 5 GB of disk space, 1 GB of memory and 2 CPUs).

Once I set up Ubuntu, it booted up and I was presented with a login screen.

I had a link to a File Manager and Control Panel on the left. Help for Ubuntu can be found here: https://help.ubuntu.com/stable/ubuntu-help/

The Ubuntu desktop has a Word Processor, Spreadsheet and Presentation package.

Installing NodeJS and other development software (skip if you are not a developer).

I installed NodeJS by following the instructions here:

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs

You can test the development tools by typing

python --version
perl --version
nodejs -v

You can install other development software (NGINX, MySQL etc) by reading my guide here.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]


Version 1.0 Instal Blog Post

Filed Under: Free, OS, Ubuntu Tagged With: alternative, free, windows

