
Adding two sub domains (one pointing to a new UpCloud VM and the other pointing to an NGINX subsite) on Ubuntu 18.04

June 27, 2018 by Simon

Here is how I added two subdomains (one pointing to a new UpCloud VM and the other pointing to an NGINX subsite) on Ubuntu 18.04.

In case you have not read my previous posts, I have now moved my blog to the awesome UpCloud host (sign up using this link to get $25 free credit). I compared Digital Ocean, Vultr and UpCloud disk IO and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

UpCloud performance is great.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Goal(s)

Set up two subdomains on https://fearby.com:

– Subdomain #1: https://test.fearby.com (pointing to a dedicated UpCloud VM in Singapore for testing).

– Subdomain #2: https://audit.fearby.com (pointing to a sub-website on the NGINX/VM that runs https://fearby.com ).

Let’s set up the first subdomain (dedicated VM) and SSL

Backup

Do back up your server first.

VM

I created a second server ($5/month or about $0.007/hour: 1,024MB memory, 25GB disk, 1,024GB/month data transfer) at UpCloud. If you don’t already have an account at UpCloud use this link to sign up and get $25 free credit ( https://www.upcloud.com/register/?promo=D84793 ). Read my blog post on why UpCloud is awesome and how I moved my domain to UpCloud.

Once I spun up a second server I obtained the IPv4 and IPv6 IP addresses of the new “test” VM from the UpCloud dashboard.

IPV4 IP: 94.237.65.54
IPV6 IP: 2a04:3543:1000:2310:24b7:7cff:fe92:468c

DNS

These DNS records were already in place with my DNS provider (Cloudflare).

A fearby.com 209.50.48.88
AAAA fearby.com 2605:7380:1000:1310:24b7:7cff:fe92:0d64

I added these DNS records for the subdomains.

I added new A and AAAA records for the new shared NGINX subdomain (for https://audit.fearby.com); this subdomain will be a sub-website running off the same server as https://fearby.com.

A audit 209.50.48.88
AAAA audit 2605:7380:1000:1310:24b7:7cff:fe92:0d64

I added another set of records for the new dedicated VM subdomain (for https://test.fearby.com).

A test 94.237.65.54
AAAA test 2a04:3543:1000:2310:24b7:7cff:fe92:468c

I waited for DNS to replicate around the globe by watching https://www.whatsmydns.net/
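
To check resolution locally before moving on, dig (from the dnsutils package) gives a quick answer; a sketch:

dig +short A test.fearby.com
dig +short AAAA test.fearby.com
dig +short A audit.fearby.com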

Setup a Firewall

On the new dedicated https://test.fearby.com VM, I installed the ufw firewall.

sudo apt-get install ufw

I configured the firewall to allow a minimum set of ports (whitelisting my IP for port 22 and allowing the UpCloud DNS servers). I will lock this down some more later.

TIP: If your ISP does not offer a dedicated IP try a VPN. I use https://cyberghostvpn.com on OSX and Android.

Firewall rules.

sudo ufw status numbered

     To                         Action      From
     --                         ------      ----
[ 1] 22                         ALLOW IN    x.x.x.x
[ 2] 80                         ALLOW IN    Anywhere
[ 3] 443                        ALLOW IN    Anywhere
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 25                         DENY IN     Anywhere
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 53                         ALLOW IN    2a04:3540:53::1
[10] 53                         ALLOW IN    2a04:3544:53::1
[11] 22                         ALLOW IN    x.x.x.x.x.x.x.x.x
[12] 25 (v6)                    DENY IN     Anywhere (v6)
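
For reference, a minimal sketch of the ufw commands that produce rules like those above (substitute your own whitelisted IP for x.x.x.x; the port 53 entries use the DNS server IPs from the status output):

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from x.x.x.x to any port 22
sudo ufw allow 80
sudo ufw allow 443
sudo ufw allow from 93.237.127.9 to any port 53
sudo ufw allow from 93.237.40.9 to any port 53
sudo ufw allow from 2a04:3540:53::1 to any port 53
sudo ufw allow from 2a04:3544:53::1 to any port 53
sudo ufw deny 25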

I enabled the firewall.

sudo ufw enable

Install NGINX (on https://test.fearby.com)

On the new dedicated https://test.fearby.com VM I…

Created a new www root

mkdir /www-root

Set permissions

sudo chown -R www-data:www-data /www-root

Installed NGINX

sudo apt-get update
sudo apt-get install nginx

I created a placeholder webpage

sudo nano /www-root/index.html
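
If you prefer a one-liner, something like this writes a minimal placeholder (the content here is just an example):

sudo tee /www-root/index.html > /dev/null <<'EOF'
<html>
<head><title>test.fearby.com</title></head>
<body><p>Placeholder page for test.fearby.com</p></body>
</html>
EOF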

Configured the root value in /etc/nginx/sites-available/default
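
The only change needed in the default site file at this point is the root directive (pointing at the /www-root created above):

root /www-root/;
index index.html;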

Created a symbolic link to the NGINX config

sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default

Let’s Encrypt SSL

I had previously set up Let’s Encrypt on Ubuntu 16.04 but not 18.04. Certbot had instructions for setting up Let’s Encrypt on Ubuntu 14.x, 16.x and 17.x but not 18.x.

Full credit for the SSL steps goes to @Linuxize (tips on setting up Let’s Encrypt on Ubuntu 18.04). Check out https://linuxize.com/

I installed the Let’s Encrypt certbot client

sudo apt update
sudo apt install certbot

I created a new Diffie–Hellman key

mkdir -p /etc/ssl/certs/
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Map requests to http://test.fearby.com/.well-known/acme-challenge to /var/lib/letsencrypt/.well-known ( Read the linuxize post for detailed steps ).

mkdir -p /var/lib/letsencrypt/.well-known
chgrp www-data /var/lib/letsencrypt
chmod g+s /var/lib/letsencrypt

Create a /etc/nginx/snippets/letsencrypt.conf on the test.fearby.com VM to serve the ACME challenge files.

location ^~ /.well-known/acme-challenge/ {
  allow all;
  root /var/lib/letsencrypt/;
  default_type "text/plain";
  try_files $uri =404;
}

Create a /etc/nginx/snippets/ssl.conf file on http://test.fearby.com

ssl_dhparam /etc/ssl/certs/dhparam.pem;

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 30s;

add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;

Let’s get a certificate

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d test.fearby.com

Certificates have been created 🙂

ls -al /etc/letsencrypt/live/test.fearby.com/
total 12
drwxr-xr-x 2 user user 4096 Jun 26 11:30 .
drwx------ 3 user user 4096 Jun 26 11:30 ..
-rw-r--r-- 1 user user  543 Jun 26 11:30 README
lrwxrwxrwx 1 user user   39 Jun 26 11:30 cert.pem -> ../../archive/test.fearby.com/cert1.pem
lrwxrwxrwx 1 user user   40 Jun 26 11:30 chain.pem -> ../../archive/test.fearby.com/chain1.pem
lrwxrwxrwx 1 user user   44 Jun 26 11:30 fullchain.pem -> ../../archive/test.fearby.com/fullchain1.pem
lrwxrwxrwx 1 user user   42 Jun 26 11:30 privkey.pem -> ../../archive/test.fearby.com/privkey1.pem
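
Before wiring the certificate into NGINX, you can sanity-check its subject and validity dates with openssl:

sudo openssl x509 -in /etc/letsencrypt/live/test.fearby.com/cert.pem -noout -subject -dates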

Now let’s edit /etc/nginx/sites-available/default on the test.fearby.com VM and add the cert paths.

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        ssl_certificate /etc/letsencrypt/live/test.fearby.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/test.fearby.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/test.fearby.com/chain.pem;

        include snippets/ssl.conf;

        #ssl_stapling on; # Requires nginx >= 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        root /www-root/;

        include snippets/letsencrypt.conf;

        index index.html;

        server_name test.fearby.com;

        location / {
                try_files $uri $uri/ =404;
        }
}

Reload NGINX

sudo systemctl reload nginx

or test the config first, then reload:

sudo nginx -t
sudo nginx -s reload

Now let’s set up the second subdomain (subsite off https://fearby.com) and SSL

VM

NGINX is already installed on the https://fearby.com VM; I just need to set up a second site on it.

DNS

We have already set up the DNS records for https://audit.fearby.com (above).

Firewall

Already configured at https://fearby.com

SSL

Because I had an existing Comodo certificate on https://fearby.com, I am going to repeat the steps above to generate a new certificate, but save the NGINX config to /etc/nginx/sites-available/audit.fearby.com (linking this file into sites-enabled later activates the second site).

TIP: Follow the Linuxize guide here (for creating the ssl.conf, letsencrypt.conf etc config files). Do a backup and restore if need be.

I created a new Diffie–Hellman key for the audit site (the NGINX config below references it as auditdhparam.pem).

mkdir -p /etc/ssl/certs/
sudo openssl dhparam -out /etc/ssl/certs/auditdhparam.pem 2048

Let’s get a certificate

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d audit.fearby.com

Configure NGINX

Map requests to http://audit.fearby.com/.well-known/acme-challenge to /var/lib/letsencrypt/.well-known ( Read the linuxize post for detailed steps ).

mkdir -p /var/lib/letsencrypt/.well-known
chgrp www-data /var/lib/letsencrypt
chmod g+s /var/lib/letsencrypt
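
The audit sub-website also needs its own web root (the NGINX config below uses /www-audit-root); assuming the same layout as the test VM:

sudo mkdir /www-audit-root
sudo chown -R www-data:www-data /www-audit-root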

I created a new NGINX site ( /etc/nginx/sites-available/audit.fearby.com )

#proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;#
server {
        root /www-audit-root;

        # Listen Ports
        listen 80;
        listen [::]:80;
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name audit.fearby.com;

        include snippets/letsencrypt.conf;

        location / {
                try_files $uri $uri/ =404;
        }

        ssl_certificate /etc/letsencrypt/live/audit.fearby.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/audit.fearby.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/audit.fearby.com/chain.pem;

        ssl_dhparam /etc/ssl/certs/auditdhparam.pem;

        ssl_session_timeout 1d;
        #ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';

        ssl_prefer_server_ciphers on;

        ssl_stapling on;
        ssl_stapling_verify on;

        #resolver 8.8.8.8 8.8.4.4 valid=300s;
        #resolver_timeout 30s;

        add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;

        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }
}

I created a symbolic link of the config file

sudo ln -s /etc/nginx/sites-available/audit.fearby.com /etc/nginx/sites-enabled/audit.fearby.com

Reload NGINX

sudo systemctl reload nginx

or test the config first, then reload:

sudo nginx -t
sudo nginx -s reload

How to test the certificate renewal

sudo certbot renew --dry-run

Automate the renewal in crontab (every 12 hours, with a random delay of up to an hour to spread load on the Let’s Encrypt servers)

I set this crontab entry up on https://fearby.com and https://test.fearby.com

crontab -e
0 */12 * * * test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "systemctl reload nginx"
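
fyi: the certbot package from apt usually ships its own renewal schedule (a cron job and/or systemd timer), so check you are not renewing twice:

systemctl list-timers | grep certbot
cat /etc/cron.d/certbot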

Conclusion

Yes, I now have two subdomains (one on a dedicated VM and the other a sub-website off an existing server), both with SSL certificates.

Ping Results

ping -c 4 fearby.com
PING fearby.com (209.50.48.88): 56 data bytes
64 bytes from 209.50.48.88: icmp_seq=0 ttl=44 time=220.000 ms
64 bytes from 209.50.48.88: icmp_seq=1 ttl=44 time=290.602 ms
64 bytes from 209.50.48.88: icmp_seq=2 ttl=44 time=311.938 ms
64 bytes from 209.50.48.88: icmp_seq=3 ttl=44 time=330.841 ms

--- fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 220.000/288.345/330.841/41.948 ms

ping -c 4 test.fearby.com
PING test.fearby.com (94.237.65.54): 56 data bytes
64 bytes from 94.237.65.54: icmp_seq=0 ttl=44 time=333.590 ms
64 bytes from 94.237.65.54: icmp_seq=1 ttl=44 time=252.433 ms
64 bytes from 94.237.65.54: icmp_seq=2 ttl=44 time=271.153 ms
64 bytes from 94.237.65.54: icmp_seq=3 ttl=44 time=292.685 ms

--- test.fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 252.433/287.465/333.590/30.200 ms

ping -c 4 audit.fearby.com
PING audit.fearby.com (209.50.48.88): 56 data bytes
64 bytes from 209.50.48.88: icmp_seq=0 ttl=44 time=281.662 ms
64 bytes from 209.50.48.88: icmp_seq=1 ttl=44 time=307.676 ms
64 bytes from 209.50.48.88: icmp_seq=2 ttl=44 time=227.985 ms
64 bytes from 209.50.48.88: icmp_seq=3 ttl=44 time=215.566 ms

--- audit.fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 215.566/258.222/307.676/37.845 ms

Webpage Results

Screenshot showing the main site and 2 subdomains in a web browser

Troubleshooting

If you are having trouble generating the initial certificate, check that you have not blocked port 80 and do not have “Strict-Transport-Security” headers enabled.

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d yoursubdomain.domain.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for yoursubdomain.domain.com
Using the webroot path /var/lib/letsencrypt for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. yoursubdomain.domain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://yoursubdomain.domain.com/.well-known/acme-challenge/_QA3jblEydx5mE8I8OdRsd2EdHIj4R-przLlmg_w6Tc: q%!(EXTRA string=<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>)

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: yoursubdomain.domain.com
   Type:   unauthorized
   Detail: Invalid response from
   http://yoursubdomain.domain.com/.well-known/acme-challenge/_QA3jblEydx5mE8I8OdRsd2EdHIj4R-przLlmg_w6Tc:
   q%!(EXTRA string=<html>
   <head><title>404 Not Found</title></head>
   <body bgcolor="white">
   <center><h1>404 Not Found</h1></center>
   <hr><center>)

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A record(s) for that domain
   contain(s) the right IP address.

I re-ran the certbot command but pointed it at the real /www-root (not /var/lib/letsencrypt/).

I created the .well-known challenge directories under the real web root and re-ran certbot.

mkdir /www-root/.well-known/
mkdir /www-root/.well-known/acme-challenge/
sudo certbot certonly --agree-tos --email [email protected] --webroot -w /www-root -d yoursubdomain.domain.com

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793


Revision History

v1.1 Troubleshooting

v1.0 Initial Post

Filed Under: Linux, NGINX, ssl, Subdomain, Ubuntu, UpCloud, VM, Website

Enabling TLS 1.3 SSL on a NGINX Website (Ubuntu 16.04 server) that is using Cloudflare

April 5, 2018 by Simon

This guide will show you how to enable the latest Transport Layer Security (TLS) 1.3 protocol (the successor to Secure Sockets Layer (SSL)) with NGINX and OpenSSL for better website security on an Ubuntu 16.04 server.

I have a number of guides on moving hosting away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, and installing and managing WordPress from the command line. Making sure your server is up to date and running the latest SSL software is important. I have updated OpenSSL before and blogged about it here. Do back up your server before changing settings, and if you use Cloudflare (if you don’t, set it up now) enable Development Mode (and disable caching until the changes are made).

For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

TLS 1.3 is the latest SSL security protocol that can be used between clients and servers to encrypt connections on the web.

TLS 1.3 uptake is only 60% according to https://caniuse.com/#search=TLS%201.3

TLS 1.3

Read why TLS 1.3 is important; news on TLS 1.3 can be found here: https://www.openssl.org/blog/blog/2018/02/08/tlsv1.3/

The Good and Bad

Don’t be like this commercial site with very poor security (tested with SSL Labs and asafaweb)

Bad SSL

Here is what the top 1 million sites do

Here it is!! Alexa Top 1 Million Analysis – February 2018 https://t.co/TjBHNX7zTi

— Scott Helme (@Scott_Helme) February 26, 2018

Installing Open SSL on Ubuntu

Connect to your Ubuntu 16.04 server via SSH (I connected to my Vultr server)

Check what version of OpenSSL you have. My OpenSSL was out of date.

# openssl version
OpenSSL 1.1.0g  2 Nov 2017

Tip: What Ciphers does your Open SSL Support?

openssl ciphers -s -v
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(256) Mac=AEAD
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(256) Mac=AEAD
DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(256) Mac=AEAD
ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH     Au=RSA  Enc=CHACHA20/POLY1305(256) Mac=AEAD
DHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=DH       Au=RSA  Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(128) Mac=AEAD
ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(128) Mac=AEAD
DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(128) Mac=AEAD
ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA384
ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA384
DHE-RSA-AES256-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA256
ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA256
ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA256
DHE-RSA-AES128-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA256
ECDHE-ECDSA-AES256-SHA  TLSv1 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA1
ECDHE-RSA-AES256-SHA    TLSv1 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-RSA-AES256-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA1
ECDHE-ECDSA-AES128-SHA  TLSv1 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA1
ECDHE-RSA-AES128-SHA    TLSv1 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA1
DHE-RSA-AES128-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA1
AES256-GCM-SHA384       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(256) Mac=AEAD
AES128-GCM-SHA256       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(128) Mac=AEAD
AES256-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA256
AES128-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA256
AES256-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA1
AES128-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA1

Time to update Open SSL

OpenSSL 1.1.1 beta is available and supports TLS 1.3, but it is in BETA form. The OpenSSL code is available here.

I did the following to download and build the latest version of OpenSSL.

mkdir /openssltemp
cd /openssltemp
sudo git clone git://git.openssl.org/openssl.git
cd openssl/
./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl -Wl,-rpath,/usr/local/ssl/lib
make
sudo make install

I tried to check the OpenSSL version but had an error.

openssl version 
openssl: /usr/lib/x86_64-linux-gnu/libssl.so.1.1: version `OPENSSL_1_1_1' not found (required by openssl)
openssl: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1: version `OPENSSL_1_1_1' not found (required by openssl)

A quick GitHub ticket revealed I needed to set a path variable.

export LD_LIBRARY_PATH=/usr/local/lib
echo "export LD_LIBRARY_PATH=/usr/local/lib" >> ~/.bashrc

OpenSSL now reports its version.

openssl version
OpenSSL 1.1.1-pre3 (beta) 20 Mar 2018

What version of NGINX do you have? (1.13+ supports TLS 1.3, read here.)

# nginx -v
nginx version: nginx/1.13.9

Backup your NGINX

Do back up your server files and take a snapshot if need be. I am not responsible for a broken server.

sudo cp -R /etc/nginx/ /nginx-backup-26thMar-2018

Edit NGINX Configuration

Update NGINX configuration: /etc/nginx/sites-available/default

ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ecdh_curve secp384r1;

tip: Review other NGINX hardening settings here. Also consider removing TLSv1.0 from the ssl_protocols line.
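
For example, once you are confident your clients support TLS 1.2 and above, a stricter protocols line would be:

ssl_protocols TLSv1.2 TLSv1.3;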

I tested my NGINX config, reloaded it and restarted NGINX.

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Check the status of NGINX

# /etc/init.d/nginx status

[ ok ] Restarting nginx (via systemctl): nginx.service.
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) 
     Docs: man:nginx(8)
  Process: 15154 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
  Process: 15162 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
  Process: 15159 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
 Main PID: 15166 (nginx)
    Tasks: 4
   Memory: 2.3M
      CPU: 27ms
   CGroup: /system.slice/nginx.service
           ├─15166 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
           ├─15170 nginx: worker process
           ├─15171 nginx: cache manager process
           └─15172 nginx: cache loader process

If you have configured Cloudflare, log in and enable TLS 1.3 support.

Cloudflare TLS Settings

Enable TLS 1.3 in Chrome by visiting chrome://flags/#tls13-variant. This should be automatic in later versions of Chrome and other browsers.

Enable TLS in Chrome

Verify TLS

I used the developer tools in Chrome to confirm the page was served over TLS 1.3.

Verify TLS

Updated to 1.1.1-pre6-dev

mkdir /temp
cd /temp
sudo git clone https://github.com/openssl/openssl.git
cd openssl/
./config --prefix=/usr/local --openssldir=/usr/local -Wl,-rpath,/usr/local
make
sudo make install
openssl
OpenSSL> version
OpenSSL 1.1.1-pre6-dev  xx XXX xxxx
OpenSSL> exit

Don’t forget to test your SSL strength with https://dev.ssllabs.com/ssltest/

SSL Test 2018
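
You can also verify TLS 1.3 from the command line with the freshly built OpenSSL (the -tls1_3 flag only exists in 1.1.1+); a sketch:

openssl s_client -connect fearby.com:443 -tls1_3 < /dev/null | grep -E 'Protocol|Cipher'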

I hope this guide helps someone.


Revision History

v1.4 fixed typo

v1.3 added bad ssl cert.

v1.2 ssl test

v1.1 updated to 1.1.1-pre6-dev

v1.0 Initial post

Filed Under: ssl

Using OWASP ZAP GUI to scan your Applications for security issues

March 17, 2018 by Simon

OWASP is a non-profit that lists the Top Ten Most Critical Web Application Security Risks; they also have a Java GUI tool called OWASP ZAP that you can use to check your apps for security issues.

I have a number of guides on moving hosting away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, and installing and managing WordPress from the command line. It is important that you always update your sites and software and test them for vulnerabilities. ZAP is free and completely open source.

Disclaimer: I am not an expert (this ZAP post and my past Kali Linux guide will be updated as I learn more).

OWASP Top 10

OWASP has a top 10 list of things to review.

OWASP Top 10

Download the OWASP Top 10 Application Security Risks PDF from here.

Using the free OWASP Zap Tool

Snip from https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project

“The OWASP Zed Attack Proxy (ZAP) is one of the world’s most popular free security tools and is actively maintained by hundreds of international volunteers*. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your applications. It’s also a great tool for experienced pentesters to use for manual security testing.”

Zap Overview

Here is a quick demo of Zap in action.

Do check out the official Zap videos on youtube: https://www.youtube.com/user/OWASPGLOBAL/videos if you want to learn more.

Installing Zap

Download Zap from here.

Download Zap

Download Options

Download

Download contents

Run Install

Copy the app to the OSX Applications folder

Installing

App Installed

App Installed

Open OSX’s Privacy and Security screen and click Open Anyway

Open Anyway

OWASP Zap is now Installed

Installed

Ready for a Scan

Blind Scan

But before we do let’s check out the Options

Options

OWASP Zap allows you to label reports to and from anyone you want.

Report Label Options

Now let’s update the program and plugins. Click Manage Add-ons.

Manage Add-ons

Click Update All to update the add-ons

Updates

I clicked Update All

Plugins

Installed some plugins

Marketplace

Zap is Ready

Zap

Add a site, then right-click on it to perform an active scan or port scan.

Right click Zap

First Scan (https failed)

https failed

I enabled unsafe SSL/TLS Renegotiation.

Allow Unsafe HTTPS

This did not work, and this guide said I needed to install the “Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files” from here.

Cryptography Files OSX

Then extract the files to /Library/Java/JavaVirtualMachines/%your_jdk%/Contents/Home/jre/lib/security

Extract

I restarted OWASP Zap and tried to scan my site, but it appears Cloudflare (which I had recently set up) was blocking my scans and reported error 403. I decided to scan another site of mine that was not behind Cloudflare but had the same Let’s Encrypt style SSL cert.

fyi: I own and set up the site I queried below.

Zap Results

The OWASP Zap scan performed over 800 requests and tried traversal exploits and many other checks. Do repair any major failures you find.

Zap Scan

Generating a Report

To generate a report, click Report then the appropriate generation option of choice.

Generate Report
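
If you want to script scans rather than use the GUI, ZAP also has a headless quick-scan mode (a sketch; the app path below assumes the default OSX install location, and run it only against a site you own):

/Applications/OWASP\ ZAP.app/Contents/Java/zap.sh -cmd -quickurl https://yoursite.com -quickout /tmp/zap-report.html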

FYI: The High Priority Alert is a false positive with an HTML item being mistaken for a CC number.

I hope this guide helps someone. Happy software/server hardening and good luck.

More Reading

Check out my Kali Linux guide.


Revision History

V1.3 fixed hasting typo.

v1.2 False Positive

v1.1 updated main features

v1.0 Initial post

Filed Under: Cloud, Cloudflare, Code, DNS, Exploit, Firewall, LetsEncrypt, MySQL, owasp, Secure, Security, ssl, Ubuntu

Run an Ubuntu VM system audit with Lynis

September 11, 2017 by Simon

Following on from my Securing Ubuntu in the cloud blog post, I have installed the Lynis open source security audit tool to check the security of my server in the cloud.

Lynis is an open source security auditing tool. Used by system administrators, security professionals, and auditors, to evaluate the security defences of their Linux and Unix-based systems. It runs on the host itself, so it performs more extensive security scans than vulnerability scanners. https://cisofy.com/lynis and https://github.com/CISOfy/lynis.

It is easy to set up a server in the cloud (create a server on Vultr or Digital Ocean here). Guides on setting up servers exist (set up a Vultr VM and configure it, or a Digital Ocean server) but how about securing them? You can install a Let’s Encrypt SSL certificate in minutes or set up Content Security Policy and Public Key Pinning, but don’t forget to get an external in-depth review of the security of your server(s).

Lynis Security Auditing Tool

Preparing install location (for Lynis)

cd /
mkdir utils
cd utils/

Install Lynis

sudo git clone https://www.github.com/CISOfy/lynis
Cloning into 'lynis'...
remote: Counting objects: 8357, done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 8357 (delta 28), reused 42 (delta 17), pack-reused 8295
Receiving objects: 100% (8357/8357), 3.94 MiB | 967.00 KiB/s, done.
Resolving deltas: 100% (6121/6121), done.
Checking connectivity... done.

Running a Lynis system scan

./lynis audit system -Q

Lynis Results 1/3 Output (removed sensitive output)

[ Lynis 2.5.5 ]

################################################################################
  Lynis comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
  welcome to redistribute it under the terms of the GNU General Public License.
  See the LICENSE file for details about using this software.

  2007-2017, CISOfy - https://cisofy.com/lynis/
  Enterprise support available (compliance, plugins, interface and tools)
################################################################################


[+] Initializing program
------------------------------------
- Detecting OS...  [ DONE ]
- Checking profiles... [ DONE ]

  ---------------------------------------------------
  Program version:           2.5.5
  Operating system:          Linux
  Operating system name:     Ubuntu Linux
  Operating system version:  16.04
  Kernel version:            4.4.0
  Hardware platform:         x86_64
  Hostname:                  yourservername
  ---------------------------------------------------
  Profiles:                  /linis/lynis/default.prf
  Log file:                  /var/log/lynis.log
  Report file:               /var/log/lynis-report.dat
  Report version:            1.0
  Plugin directory:          ./plugins
  ---------------------------------------------------
  Auditor:                   [Not Specified]
  Test category:             all
  Test group:                all
  ---------------------------------------------------
- Program update status...  [ NO UPDATE ]

[+] System Tools
------------------------------------
- Scanning available tools...
- Checking system binaries...

[+] Plugins (phase 1)
------------------------------------
: plugins have more extensive tests and may take several minutes to complete - Plugin pam
    [..]
- Plugin systemd
    [................]

[+] Boot and services
------------------------------------
- Service Manager [ systemd ]
- Checking UEFI boot [ DISABLED ]
- Checking presence GRUB [ OK ]
- Checking presence GRUB2 [ FOUND ]
- Checking for password protection [ OK ]
- Check running services (systemctl) [ DONE ]
: found 24 running services
- Check enabled services at boot (systemctl) [ DONE ]
: found 30 enabled services
- Check startup files (permissions) [ OK ]

[+] Kernel
------------------------------------
- Checking default run level [ RUNLEVEL 5 ]
- Checking CPU support (NX/PAE)
 support: PAE and/or NoeXecute supported [ FOUND ]
- Checking kernel version and release [ DONE ]
- Checking kernel type [ DONE ]
- Checking loaded kernel modules [ DONE ]
active modules
- Checking Linux kernel configuration file [ FOUND ]
- Checking default I/O kernel scheduler [ FOUND ]
- Checking for available kernel update [ OK ]
- Checking core dumps configuration [ DISABLED ]
- Checking setuid core dumps configuration [ PROTECTED ]
- Check if reboot is needed [ NO ]

[+] Memory and Processes
------------------------------------
- Checking /proc/meminfo [ FOUND ]
- Searching for dead/zombie processes [ OK ]
- Searching for IO waiting processes [ OK ]

[+] Users, Groups and Authentication
------------------------------------
- Administrator accounts [ OK ]
- Unique UIDs [ OK ]
- Consistency of group files (grpck) [ OK ]
- Unique group IDs [ OK ]
- Unique group names [ OK ]
- Password file consistency [ OK ]
- Query system users (non daemons) [ DONE ]
- NIS+ authentication support [ NOT ENABLED ]
- NIS authentication support [ NOT ENABLED ]
- sudoers file [ FOUND ]
- Check sudoers file permissions [ OK ]
- PAM password strength tools [ OK ]
- PAM configuration files (pam.conf) [ FOUND ]
- PAM configuration files (pam.d) [ FOUND ]
- PAM modules [ FOUND ]
- LDAP module in PAM [ NOT FOUND ]
- Accounts without expire date [ OK ]
- Accounts without password [ OK ]
- Checking user password aging (minimum) [ DISABLED ]
- User password aging (maximum) [ DISABLED ]
- Checking expired passwords [ OK ]
- Checking Linux single user mode authentication [ OK ]
- Determining default umask
- umask (/etc/profile) [ NOT FOUND ]
- umask (/etc/login.defs) [ SUGGESTION ]
- umask (/etc/init.d/rc) [ SUGGESTION ]
- LDAP authentication support [ NOT ENABLED ]
- Logging failed login attempts [ ENABLED ]

[+] Shells
------------------------------------
- Checking shells from /etc/shells
: found 6 shells (valid shells: 6).
- Session timeout settings/tools [ NONE ]
- Checking default umask values
- Checking default umask in /etc/bash.bashrc [ NONE ]
- Checking default umask in /etc/profile [ NONE ]

[+] File systems
------------------------------------
- Checking mount points
- Checking /home mount point [ SUGGESTION ]
- Checking /tmp mount point [ SUGGESTION ]
- Checking /var mount point [ SUGGESTION ]
- Query swap partitions (fstab) [ NONE ]
- Testing swap partitions [ OK ]
- Testing /proc mount (hidepid) [ SUGGESTION ]
- Checking for old files in /tmp [ OK ]
- Checking /tmp sticky bit [ OK ]
- ACL support root file system [ ENABLED ]
- Mount options of / [ NON DEFAULT ]
- Checking Locate database [ FOUND ]
- Disable kernel support of some filesystems
- Discovered kernel modules: cramfs freevxfs hfs hfsplus jffs2 udf 

[+] Storage
------------------------------------
- Checking usb-storage driver (modprobe config) [ NOT DISABLED ]
- Checking USB devices authorization [ ENABLED ]
- Checking firewire ohci driver (modprobe config) [ DISABLED ]

[+] NFS
------------------------------------
- Check running NFS daemon [ NOT FOUND ]

[+] Name services
------------------------------------
- Searching DNS domain name [ UNKNOWN ]
- Checking /etc/hosts
- Checking /etc/hosts (duplicates) [ OK ]
- Checking /etc/hosts (hostname) [ OK ]
- Checking /etc/hosts (localhost) [ SUGGESTION ]
- Checking /etc/hosts (localhost to IP) [ OK ]

[+] Ports and packages
------------------------------------
- Searching package managers
- Searching dpkg package manager [ FOUND ]
- Querying package manager
- Query unpurged packages [ NONE ]
- Checking security repository in sources.list file [ OK ]
- Checking APT package database [ OK ]
- Checking vulnerable packages [ OK ]
- Checking upgradeable packages [ SKIPPED ]
- Checking package audit tool [ INSTALLED ]

[+] Networking
------------------------------------
- Checking IPv6 configuration [ ENABLED ]
 method [ AUTO ]
 only [ NO ]
- Checking configured nameservers
- Testing nameservers
: 108.xx.xx.xx [ OK ]
: 2001:xxx:xxx:xxx::6 [ OK ]
- Minimal of 2 responsive nameservers [ OK ]
- Checking default gateway [ DONE ]
- Getting listening ports (TCP/UDP) [ DONE ]
* Found 18 ports
- Checking promiscuous interfaces [ OK ]
- Checking waiting connections [ OK ]
- Checking status DHCP client [ NOT ACTIVE ]
- Checking for ARP monitoring software [ NOT FOUND ]

[+] Printers and Spools
------------------------------------
- Checking cups daemon [ NOT FOUND ]
- Checking lp daemon [ NOT RUNNING ]

[+] Software: e-mail and messaging
------------------------------------
- Sendmail status [ RUNNING ]

[+] Software: firewalls
------------------------------------
- Checking iptables kernel module [ FOUND ]
- Checking iptables policies of chains [ FOUND ]
- Checking for empty ruleset [ OK ]
- Checking for unused rules [ FOUND ]
- Checking host based firewall [ ACTIVE ]

[+] Software: webserver
------------------------------------
- Checking Apache (binary /usr/sbin/apache2) [ FOUND ]
: No virtual hosts found
* Loadable modules [ FOUND (106) ]
- Found 106 loadable modules 
- anti-DoS/brute force [ OK ]
- web application firewall [ OK ]
- Checking nginx [ FOUND ]
- Searching nginx configuration file [ FOUND ]
- Found nginx includes [ 2 FOUND ]
- Parsing configuration options
- /etc/nginx/nginx.conf
- /etc/nginx/sites-enabled/default
- SSL configured [ YES ]
- Ciphers configured [ YES ]
- Prefer server ciphers [ YES ]
- Protocols configured [ YES ]
- Insecure protocols found [ NO ]
- Checking log file configuration
- Missing log files (access_log) [ NO ]
- Disabled access logging [ NO ]
- Missing log files (error_log) [ NO ]
- Debugging mode on error_log [ NO ]

[+] SSH Support
------------------------------------
- Checking running SSH daemon [ FOUND ]
- Searching SSH configuration [ FOUND ]
- SSH option: AllowTcpForwarding [ SUGGESTION ]
- SSH option: ClientAliveCountMax [ SUGGESTION ]
- SSH option: ClientAliveInterval [ OK ]
- SSH option: Compression [ SUGGESTION ]
- SSH option: FingerprintHash [ OK ]
- SSH option: GatewayPorts [ OK ]
- SSH option: IgnoreRhosts [ OK ]
- SSH option: LoginGraceTime [ OK ]
- SSH option: LogLevel [ SUGGESTION ]
- SSH option: MaxAuthTries [ SUGGESTION ]
- SSH option: MaxSessions [ SUGGESTION ]
- SSH option: PermitRootLogin [ SUGGESTION ]
- SSH option: PermitUserEnvironment [ OK ]
- SSH option: PermitTunnel [ OK ]
- SSH option: Port [ SUGGESTION ]
- SSH option: PrintLastLog [ OK ]
- SSH option: Protocol [ OK ]
- SSH option: StrictModes [ OK ]
- SSH option: TCPKeepAlive [ SUGGESTION ]
- SSH option: UseDNS [ OK ]
- SSH option: VerifyReverseMapping [ NOT FOUND ]
- SSH option: X11Forwarding [ SUGGESTION ]
- SSH option: AllowAgentForwarding [ SUGGESTION ]
- SSH option: AllowUsers [ NOT FOUND ]
- SSH option: AllowGroups [ NOT FOUND ]

[+] SNMP Support
------------------------------------
- Checking running SNMP daemon [ NOT FOUND ]

[+] Databases
------------------------------------
- MySQL process status [FOUND ]

[+] LDAP Services
------------------------------------
- Checking OpenLDAP instance [ NOT FOUND ]

[+] PHP
------------------------------------
- Checking PHP [ FOUND ]
- Checking PHP disabled functions [ FOUND ]
- Checking expose_php option [ OFF ]
- Checking enable_dl option [ OFF ]
- Checking allow_url_fopen option [ ON ]
- Checking allow_url_include option [ OFF ]
- Checking PHP suhosin extension status [ OK ]
- Suhosin simulation mode status [ OK ]

[+] Squid Support
------------------------------------
- Checking running Squid daemon [ NOT FOUND ]

[+] Logging and files
------------------------------------
- Checking for a running log daemon [ OK ]
- Checking Syslog-NG status [ NOT FOUND ]
- Checking systemd journal status [ FOUND ]
- Checking Metalog status [ NOT FOUND ]
- Checking RSyslog status [ FOUND ]
- Checking RFC 3195 daemon status [ NOT FOUND ]
- Checking minilogd instances [ NOT FOUND ]
- Checking logrotate presence [ OK ]
- Checking log directories (static list) [ DONE ]
- Checking open log files [ DONE ]
- Checking deleted files in use [ FILES FOUND ]

[+] Insecure services
------------------------------------
- Checking inetd status [ NOT ACTIVE ]

[+] Banners and identification
------------------------------------
- /etc/issue [ FOUND ]
- /etc/issue contents [ OK ]
- /etc/issue.net [ FOUND ]
- /etc/issue.net contents [ OK ]

[+] Scheduled tasks
------------------------------------
- Checking crontab/cronjob [ DONE ]
- Checking atd status [ RUNNING ]
- Checking at users [ DONE ]
- Checking at jobs [ NONE ]

[+] Accounting
------------------------------------
- Checking accounting information [ NOT FOUND ]
- Checking sysstat accounting data [ NOT FOUND ]
- Checking auditd [ NOT FOUND ]

[+] Time and Synchronization
------------------------------------
- NTP daemon found: ntpd [ FOUND ]
- NTP daemon found: systemd (timesyncd) [ FOUND ]
- Checking for a running NTP daemon or client [ OK ]
- Checking valid association ID's [ FOUND ]
- Checking high stratum ntp peers [ OK ]
- Checking unreliable ntp peers [ FOUND ]
- Checking selected time source [ OK ]
- Checking time source candidates [ OK ]
- Checking falsetickers [ OK ]
- Checking NTP version [ FOUND ]

[+] Cryptography
------------------------------------
- Checking for expired SSL certificates [0/1] [ NONE ]

[+] Virtualization
------------------------------------

[+] Containers
------------------------------------

[+] Security frameworks
------------------------------------
- Checking presence AppArmor [ FOUND ]
- Checking AppArmor status [ ENABLED ]
- Checking presence SELinux [ NOT FOUND ]
- Checking presence grsecurity [ NOT FOUND ]
- Checking for implemented MAC framework [ OK ]

[+] Software: file integrity
------------------------------------
- Checking file integrity tools
- Checking presence integrity tool [ NOT FOUND ]

[+] Software: System tooling
------------------------------------
- Checking automation tooling
- Automation tooling [ NOT FOUND ]
- Checking presence of Fail2ban [ FOUND ]
- Checking Fail2ban jails [ ENABLED ]
- Checking for IDS/IPS tooling [ FOUND ]

[+] Software: Malware
------------------------------------

[+] File Permissions
------------------------------------
- Starting file permissions check
/root/.ssh [ OK ]

[+] Home directories
------------------------------------
- Checking shell history files [ OK ]

[+] Kernel Hardening
------------------------------------
- Comparing sysctl key pairs with scan profile
- fs.protected_hardlinks (exp: 1) [ OK ]
- fs.protected_symlinks (exp: 1) [ OK ]
- fs.suid_dumpable (exp: 0) [ DIFFERENT ]
- kernel.core_uses_pid (exp: 1) [ DIFFERENT ]
- kernel.ctrl-alt-del (exp: 0) [ OK ]
- kernel.dmesg_restrict (exp: 1) [ DIFFERENT ]
- kernel.kptr_restrict (exp: 2) [ DIFFERENT ]
- kernel.randomize_va_space (exp: 2) [ OK ]
- kernel.sysrq (exp: 0) [ DIFFERENT ]
- net.ipv4.conf.all.accept_redirects (exp: 0) [ OK ]
- net.ipv4.conf.all.accept_source_route (exp: 0) [ OK ]
- net.ipv4.conf.all.bootp_relay (exp: 0) [ OK ]
- net.ipv4.conf.all.forwarding (exp: 0) [ OK ]
- net.ipv4.conf.all.log_martians (exp: 1) [ DIFFERENT ]
- net.ipv4.conf.all.mc_forwarding (exp: 0) [ OK ]
- net.ipv4.conf.all.proxy_arp (exp: 0) [ OK ]
- net.ipv4.conf.all.rp_filter (exp: 1) [ OK ]
- net.ipv4.conf.all.send_redirects (exp: 0) [ DIFFERENT ]
- net.ipv4.conf.default.accept_redirects (exp: 0) [ OK ]
- net.ipv4.conf.default.accept_source_route (exp: 0) [ OK ]
- net.ipv4.conf.default.log_martians (exp: 1) [ DIFFERENT ]
- net.ipv4.icmp_echo_ignore_broadcasts (exp: 1) [ OK ]
- net.ipv4.icmp_ignore_bogus_error_responses (exp: 1) [ OK ]
- net.ipv4.tcp_syncookies (exp: 1) [ DIFFERENT ]
- net.ipv4.tcp_timestamps (exp: 0) [ DIFFERENT ]
- net.ipv6.conf.all.accept_redirects (exp: 0) [ OK ]
- net.ipv6.conf.all.accept_source_route (exp: 0) [ OK ]
- net.ipv6.conf.default.accept_redirects (exp: 0) [ OK ]
- net.ipv6.conf.default.accept_source_route (exp: 0) [ OK ]

[+] Hardening
------------------------------------
- Installed compiler(s) [ FOUND ]
- Installed malware scanner [ NOT FOUND ]

[+] Custom Tests
------------------------------------
- Running custom tests...  [ NONE ]

[+] Plugins (phase 2)
------------------------------------
- Plugins (phase 2) [ DONE ]

================================================================================

...

Lynis Results 2/3 – Warnings

  Warnings (1):
  ----------------------------
  ! Found one or more vulnerable packages. [REMOVED-FIXED] 
      https://cisofy.com/controls/REMOVED-FIXED/
...

I resolved the only warning by typing

apt-get update
apt-get upgrade
shutdown -r now

After updating the system I re-ran the Lynis scan and got

 -[ Lynis 2.5.5 Results ]-

  Great, no warnings

Lynis Results 3/3 – Suggestions

  Suggestions (44):
  ----------------------------
  * Set a password on GRUB bootloader to prevent altering boot configuration (e.g. boot in single user mode without password) [BOOT-5122] 
      https://cisofy.com/controls/BOOT-5122/

  * Configure minimum password age in /etc/login.defs [AUTH-9286] 
      https://cisofy.com/controls/AUTH-9286/

  * Configure maximum password age in /etc/login.defs [AUTH-9286] 
      https://cisofy.com/controls/AUTH-9286/

  * Default umask in /etc/login.defs could be more strict like 027 [AUTH-9328] 
      https://cisofy.com/controls/AUTH-9328/

  * Default umask in /etc/init.d/rc could be more strict like 027 [AUTH-9328] 
      https://cisofy.com/controls/AUTH-9328/

  * To decrease the impact of a full /home file system, place /home on a separated partition [FILE-6310] 
      https://cisofy.com/controls/FILE-6310/

  * To decrease the impact of a full /tmp file system, place /tmp on a separated partition [FILE-6310] 
      https://cisofy.com/controls/FILE-6310/

  * To decrease the impact of a full /var file system, place /var on a separated partition [FILE-6310] 
      https://cisofy.com/controls/FILE-6310/

  * Disable drivers like USB storage when not used, to prevent unauthorized storage or data theft [STRG-1840] 
      https://cisofy.com/controls/STRG-1840/

  * Check DNS configuration for the dns domain name [NAME-4028] 
      https://cisofy.com/controls/NAME-4028/

  * Split resolving between localhost and the hostname of the system [NAME-4406] 
      https://cisofy.com/controls/NAME-4406/

  * Install debsums utility for the verification of packages with known good database. [PKGS-7370] 
      https://cisofy.com/controls/PKGS-7370/

  * Update your system with apt-get update, apt-get upgrade, apt-get dist-upgrade and/or unattended-upgrades [PKGS-7392] 
      https://cisofy.com/controls/PKGS-7392/

  * Install package apt-show-versions for patch management purposes [PKGS-7394] 
      https://cisofy.com/controls/PKGS-7394/

  * Consider running ARP monitoring software (arpwatch,arpon) [NETW-3032] 
      https://cisofy.com/controls/NETW-3032/

  * Check iptables rules to see which rules are currently not used [FIRE-4513] 
      https://cisofy.com/controls/FIRE-4513/

  * Install Apache mod_evasive to guard webserver against DoS/brute force attempts [HTTP-6640] 
      https://cisofy.com/controls/HTTP-6640/

  * Install Apache modsecurity to guard webserver against web application attacks [HTTP-6643] 
      https://cisofy.com/controls/HTTP-6643/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : AllowTcpForwarding (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : ClientAliveCountMax (3 --> 2)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : Compression (DELAYED --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : LogLevel (INFO --> VERBOSE)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : MaxAuthTries (2 --> 1)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : MaxSessions (10 --> 2)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : PermitRootLogin (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : Port (22 --> )
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : TCPKeepAlive (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : X11Forwarding (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : AllowAgentForwarding (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Change the allow_url_fopen line to: allow_url_fopen = Off, to disable downloads via PHP [PHP-2376] 
      https://cisofy.com/controls/PHP-2376

  * Check what deleted files are still in use and why. [LOGG-2190] 
      https://cisofy.com/controls/LOGG-2190/

  * Add a legal banner to /etc/issue, to warn unauthorized users [BANN-7126] 
      https://cisofy.com/controls/BANN-7126/

  * Add legal banner to /etc/issue.net, to warn unauthorized users [BANN-7130] 
      https://cisofy.com/controls/BANN-7130/

  * Enable process accounting [ACCT-9622] 
      https://cisofy.com/controls/ACCT-9622/

  * Enable sysstat to collect accounting (no results) [ACCT-9626] 
      https://cisofy.com/controls/ACCT-9626/

  * Enable auditd to collect audit information [ACCT-9628] 
      https://cisofy.com/controls/ACCT-9628/

  * Check ntpq peers output for unreliable ntp peers and correct/replace them [TIME-3120] 
      https://cisofy.com/controls/TIME-3120/

  * Install a file integrity tool to monitor changes to critical and sensitive files [FINT-4350] 
      https://cisofy.com/controls/FINT-4350/

  * Determine if automation tools are present for system management [TOOL-5002] 
      https://cisofy.com/controls/TOOL-5002/

  * One or more sysctl values differ from the scan profile and could be tweaked [KRNL-6000] 
      https://cisofy.com/controls/KRNL-6000/

  * Harden compilers like restricting access to root user only [HRDN-7222] 
      https://cisofy.com/controls/HRDN-7222/

  * Harden the system by installing at least one malware scanner, to perform periodic file system scans [HRDN-7230] 
    - Solution : Install a tool like rkhunter, chkrootkit, OSSEC
      https://cisofy.com/controls/HRDN-7230/

  Follow-up
  ----------------------------
  - Show details of a test (lynis show details TEST-ID)
  - Check the logfile for all details (less /var/log/lynis.log)
  - Read security controls texts (https://cisofy.com)
  - Use --upload to upload data to central system (Lynis Enterprise users)

================================================================================

  Lynis security scan details

  Hardening index : 64 [############        ]
  Tests performed : 255
  Plugins enabled : 2

  Components
  - Firewall               [V]
  - Malware scanner        [X]

  Lynis Modules
  - Compliance Status      [?]
  - Security Audit         [V]
  - Vulnerability Scan     [V]

  Files
  - Test and debug information      : /var/log/lynis.log
  - Report data                     : /var/log/lynis-report.dat

================================================================================

  Lynis 2.5.5

  Auditing, system hardening, and compliance for UNIX-based systems
  (Linux, macOS, BSD, and others)

  2007-2017, CISOfy - https://cisofy.com/lynis/
  Enterprise support available (compliance, plugins, interface and tools)

================================================================================

  [TIP] Enhance Lynis audits by adding your settings to custom.prf (see /linis/lynis/default.prf for all settings)
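
Based on the SSH-7408 suggestions in the report above, the matching /etc/ssh/sshd_config changes would look something like this (a sketch only; keep a second SSH session open while testing so you do not lock yourself out):

# /etc/ssh/sshd_config (apply with: sudo systemctl restart ssh)
AllowTcpForwarding no
ClientAliveCountMax 2
Compression no
LogLevel VERBOSE
MaxAuthTries 1
MaxSessions 2
PermitRootLogin no
TCPKeepAlive no
X11Forwarding no
AllowAgentForwarding no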

Installing a Malware Scanner

Install ClamAV

sudo apt-get install clamav

Download virus and malware definitions (this takes about 30 min)

sudo freshclam

Output:

sudo freshclam
> ClamAV Update process started at Wed Nov 15th 20:44:55 2017
> Downloading main.cvd [10%]

On some boxes I had an issue with ClamAV reporting that freshclam could not run

sudo freshclam
ERROR: /var/log/clamav/freshclam.log is locked by another process
ERROR: Problem with internal logger (UpdateLogFile = /var/log/clamav/freshclam.log).

This was fixed by typing

rm -rf /var/log/clamav/freshclam.log
sudo freshclam

Troubleshooting clamav

ClamAV does not like low-RAM boxes and may produce this error

Downloading main.cvd [100%]
ERROR: Database load killed by signal 9
ERROR: Failed to load new database

It looks like the solution is to increase your total RAM.

fyi: Scan with ClamAV

sudo clamscan --max-filesize=3999M --max-scansize=3999M --exclude-dir=/www/* -i -r /

Re-running Lynis gave me the following malware status

- Malware scanner        [V]

Lynis Security rating

Hardening index : 69 [##############      ]

Installed

sudo apt-get install apt-show-versions
sudo apt-get install arpwatch
sudo apt-get install arpon

After re-running the test I got this Lynis security rating score (an improvement of 1)

Hardening index : 70 [#############       ]

Installed and configured debsums and auditd

sudo apt-get install debsums
sudo apt-get install auditd

Now I get the following Lynis security rating score.

Hardening index : 71 [##############      ]

Conclusion

Lynis is great at performing an audit and recommending areas of work to allow you to harden your system (brute force protection, firewall, etc)

Security Don’ts

  • Never think you are done securing a system.

Security Do’s

  • Update software (and remove software you do not use).
  • Check Lynis suggestions and try to resolve them.
  • Security is an ongoing process: do install a firewall, do ban bad IPs, do whitelist good IPs, do review logs,
  • and do limit port access, make backups and keep on securing.

I will keep on securing and try to resolve all remaining issues.

Read my past post on Securing Ubuntu in the cloud.

Scheduling automatic system updates alone is not enough in Ubuntu (and it is not generally recommended, as the administrator should make update decisions, not a scheduled job).

apt-get update
apt-get upgrade

fyi: CISOfy/Lynis do offer paid subscriptions for external scans of your servers: https://cisofy.com/pricing (why upgrade?).

Lynis Plans

I will look into this feature soon.

Updating Lynis

I checked the official documentation and ran an update check

./lynis --check-update
This option is deprecated
Use: lynis update info

./lynis update info

 == Lynis ==

  Version            : 2.5.5
  Status             : Outdated
  Installed version  : 255
  Latest version     : 257
  Release date       : 2017-09-07
  Update location    : https://cisofy.com/lynis/


2007-2017, CISOfy - https://cisofy.com/lynis/

Not sure how to update?

./lynis update
Error: Need a target for update

Examples:
lynis update check
lynis update info

./lynis update check
status=outdated

I opened an issue about updating v2.5.5 here. I asked Twitter for help.

Twitter

Official Response: https://packages.cisofy.com/community/#debian-ubuntu

Git Response

Waiting..

I ended up deleting Lynis 2.5.5

ls -al
rm -R *
rm -rf *
rm -rf .git
rm -rf .gitignore
rm -rf .travis.yml
cd ..
rm -R lynis/
ls -al

Updated

./lynis update check
status=up-to-date

And reinstalled to v2.5.8

sudo git clone https://www.github.com/CISOfy/lynis

Output:

sudo git clone https://www.github.com/CISOfy/lynis
Cloning into 'lynis'...
remote: Counting objects: 8538, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 8538 (delta 0), reused 0 (delta 0), pack-reused 8534
Receiving objects: 100% (8538/8538), 3.96 MiB | 2.01 MiB/s, done.
Resolving deltas: 100% (6265/6265), done.
Checking connectivity... done.

More actions post upgrade to 2.5.8

  • Added a legal notice to the “/etc/issue” and “/etc/issue.net” files.

Installing Lynis via apt-get instead of git clone

The official steps can be located here: https://packages.cisofy.com/community/#debian-ubuntu

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C80E383C3DE9F082E01391A0366C67DE91CA5D5F
apt install apt-transport-https
echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/99disable-translations
echo "deb https://packages.cisofy.com/community/lynis/deb/xenial main" > /etc/apt/sources.list.d/cisofy-lynis.list
apt update
apt install lynis
lynis show version

Unfortunately, I had an error with “apt update”

Error:

E: Malformed entry 1 in list file /etc/apt/sources.list.d/cisofy-lynis.list (Component)
E: The list of sources could not be read.

Complete install output

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C80E383C3DE9F082E01391A0366C67DE91CA5D5F
Executing: /tmp/tmp.Dz9g9nKV6i/gpg.1.sh --keyserver
keyserver.ubuntu.com
--recv-keys
C80E383C3DE9F082E01391A0366C67DE91CA5D5F
gpg: requesting key 91CA5D5F from hkp server keyserver.ubuntu.com
gpg: key 91CA5D5F: public key "CISOfy Software (signed software packages) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

# apt install apt-transport-https
Reading package lists... Done
Building dependency tree
Reading state information... Done
apt-transport-https is already the newest version (1.2.24).
The following packages were automatically installed and are no longer required:
  gamin libfile-copy-recursive-perl libgamin0 libglade2-0 libpango1.0-0 libpangox-1.0-0 openbsd-inetd pure-ftpd-common update-inetd
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.

# echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/99disable-translations

# echo "deb https://packages.cisofy.com/community/lynis/deb/ xenial main" > /etc/apt/sources.list.d/cisofy-lynis.list

# apt update
E: Malformed entry 1 in list file /etc/apt/sources.list.d/cisofy-lynis.list (Component)
E: The list of sources could not be read.

I reopened GitHub issue 491. A quick reply revealed that I did not put a space before “xenial” (oops).

fyi: I removed the dead key from apt’s keyring and the broken source list by typing…

apt-key list
apt-key del 91CA5D5F
rm -rf /etc/apt/sources.list.d/cisofy-lynis.list

I can now install and update other packages with apt and not have the following error

E: Malformed entry 1 in list file /etc/apt/sources.list.d/cisofy-lynis.list (Component)
E: The list of sources could not be read.

I will remove the git clone and re-run the apt version later, and add more steps to get to a high-90s Lynis score.

More

Read the official documentation https://cisofy.com/documentation/lynis/

Next: I will investigate the enterprise version (https://cisofy.com/pricing/) soon.

Hope this helps. If I have missed something please let me know on Twitter at @FearbySoftware


v1.46 GitHub response.

Filed Under: Advice, Cloud, Computer, Firewall, OS, Security, Server, Software, ssl, Ubuntu, VM, Vultr Tagged With: Audit, Lynis, secure, security, ubuntu

How to optimize your site's Search Engine Optimization (SEO) and grow customers without paying for Ads

September 9, 2017 by Simon

How to optimize your site's Search Engine Optimization (SEO) and grow customers without paying for Ads.

This guide is a shorter post on setting up SEO (Search Engine Optimization) and driving more traffic to your site without buying ads. In a nutshell, better SEO means clearing some technical hurdles to drive more traffic to your site from search engines, understanding your customers’ needs, and making things easier for them.

I have blogged about these topics before but, on reflection, those posts are too long.

  • Setting up Google Analytics on your website
  • How to boost your site’s SEO
  • Improving the speed of WordPress
  • Digital marketing and user engagement 101
  • Add Google AdWords to your WordPress blog
  • etc

Buying Ads?

Facebook, Google, Bing and advertising agencies will recommend you set goals around growth and site traffic, then pay (usually via advertisements) to reach those goals.

Don’t get me wrong, advertising works, but it is a competitive market. Online sites can easily set up the display of ads on their site (my guide here: Add Google AdWords to your WordPress blog, https://fearby.com/article/add-google-adwords-wordpress-blog/ ). You can buy physical billboard ads on the side of roads (e.g. http://www.buythisspace.com.au/). I tried to enquire about the costs of a physical billboard but the agency’s robot verification rejected my enquiry submission, so I gave up. Advertising is buying people’s time, and people know how to avoid ads and not interact with them (7 Marketing Lessons from Eye-Tracking Studies, https://blog.kissmetrics.com/eye-tracking-studies/).

Do more of what works

Spoiler: This guide recommends you do more of what works over buying millions of ads and hoping for new, engaged customers and growth.

  • If you don’t already have Google Analytics set up on your site then do it; without it you cannot identify your customers, see what is broken, or fix it (Setting up Google Analytics on your website, https://fearby.com/article/setting-up-google-analytics-on-your-website/ ).
  • Monitor Data – Do review your logs and customer-related data (orders, customers) and try to identify what works. Software like https://www.zoho.com/one/applications/web.html will help you connect the dots.
  • Adobe Audience Cloud: http://www.adobe.com/au/experience-cloud.html is a more expensive software suite for driving decisions based on data.
  • Benchmarks – Set goals and work toward them (e.g. I want 10x more customers).

SEO Tips

This older article on How to boost your site’s SEO covers what you need to do to get better SEO.

Do run a modern great site

I am a big fan of word of mouth and free/organic traffic over paid customers via advertising (mostly because I am tight and realize advertising can be a bottomless pit). The single biggest thing you can do to earn more organic traffic from search engines is run a modern and fast website, have valuable content and make things as easy for the customer as possible. This is why I moved my site and set up an SSL certificate (link to article).

Search engines like your site to be fast, updated frequently, have sitemaps to make their jobs easier and have an SSL certificate to keep the web safe etc.

Google, Bing and other search engines will not send traffic your way if you do not satisfy them that your site is liked or has valuable content. Google makes money from Google Analytics by helping people understand their site’s visitors, then recommending you pay for ads to run on sites that have AdWords on them ( WordPress to a new self-managed server away from CPanel ).

  • How to boost your site’s SEO https://fearby.com/article/how-to-boost-your-sites-seo/
  • Your website needs to be fast, use sites like https://www.webpagetest.org to measure how fast your site is (Aim for all A’s). Read this page for information on the impact of slow websites https://www.searchenginejournal.com/mobile-page-speed-benchmarks/194511/
  • Mobile friendly – Ensure your site is mobile friendly (or risk being dropped from search engine results)
  • SSL – Do have a secure SSL certificate on your website (view mine here https://www.ssllabs.com/ssltest/analyze.html?d=www.fearby.com&s=45.63.29.217&latest).
  • Incoming links – Having incoming links to your site tells search engines that your site is popular.

Traffic Source types

  • Organic – An organic visitor to your site is one who found your site by searching something that was relevant to their search term and not by clicking on an advertisement.
  • Paid – A paid user is someone who has clicked an ad to come to your site.
  • Social – A social visitor is one who is known to come from a social media site; using social media sites like Twitter, Facebook or Instagram is a must for driving organic traffic (go where the people are).

Engagement

How engaged are your customers?  Have you asked your customers recently what they value or appreciate about your business or product? Have you asked for feedback recently?

User Engagement Levels

  • None – Do you have landing pages that quickly inform customers of your products or services?
  • Low – What do they need to know about your product or service?
  • Medium – Aware (engaged)
  • High – Can this person be an advocate for your business?
  • Gone – Did you get exit Feedback?

Ways to engage already engaged customers.

  • Set up a free MailChimp newsletter to allow willing people to be alerted to new communication https://login.mailchimp.com/signup/?source=website&pid=GAW
  • Web browser popup alerts can be a great way to engage with users when new content is added to your site (read the guide here https://documentation.onesignal.com/docs/web-push-setup )
  • Mobile apps or a mobile-friendly website are a no-brainer given 2 billion people use mobile phones ( http://www.smartinsights.com/mobile-marketing/mobile-marketing-analytics/mobile-marketing-statistics/ ).

What can you do to help understand your customer’s needs and make their purchase processes easier?

Why are your customers leaving?

Understand more about your customers’ reasons for leaving and act to prevent others from leaving.

  • Trying something new (Does your website need to be simpler?)
  • Are your products too expensive?
  • Your site (or ordering) is not convenient (Do you need to set up online ordering/subscriptions and delivery?)
  • etc

Who are your customers?

  • Personas – Do set up customer personas in order to focus on your customer segments (get a free customer persona template here https://blog.hubspot.com/blog/tabid/6307/bid/33491/everything-marketers-need-to-research-create-detailed-buyer-personas-template.aspx )
  • Does your website match these personas?

Are your customers:

  • Engaged
  • Informed
  • Advocates

Feedback

  • Do you have feedback loops (A simple feedback form can solve this)?

What do you know about your customers?

  • Product Satisfaction
  • Product Loyalty
  • Product Awareness

Paid Traffic (Ads)

  • Google Ads – Sign up here http://www.google.com.au/adwords/get-started/
  • Bing – Advertise on Bing here https://advertise.bingads.microsoft.com/
  • Facebook – Advertise on Facebook here https://www.facebook.com/business/products/ads

Free Traffic (SEO + Organic Ads)

  • Blog Posts (Sharing value/passion)
  • Social Media Posts (use hashtags)
  • Instagram (Post value/passion)

Most importantly, do what works (measure and replicate).

Focus on Business Value

Generate a SWOT Analysis (free tool here: https://xtensio.com/ )

  • What are your Strengths?
  • What are your Weaknesses?
  • What are your Opportunities?
  • What are your Threats?

Goals

Goals allow you to investigate, learn, act and measure in order to improve.

  • Investigate – Data.
  • Learn/Insight – Make Assumptions.
  • Act – Act and measure.

Read more about customer engagement here https://en.wikipedia.org/wiki/Customer_engagement

Bonus

 Do ensure your website is compliant with accessibility and technical standards

  • Test your site’s accessibility – https://achecker.ca/checker/index.php
  • Test your site’s HTML5 compliance – https://validator.w3.org
  • Run a Google PageSpeed test – https://developers.google.com/speed/pagespeed/insights/
  • Do A/B testing to determine the statistical significance of changes to your site.

Conclusion

The more you know, the better you can connect. Do set goals; as a minimum, set up Google Analytics and an SSL certificate, submit your site to search engines, then focus on a fast site that makes things simple for your customers.


v1.0 Initial version

Filed Under: Ads, Analytics, Business, LetsEncrypt, SEO, ssl, Website Tagged With: analytics, seo, ssl

Securing Ubuntu in the cloud

August 9, 2017 by Simon

It is easy to deploy servers to the cloud; within a few minutes you can have a cloud-based server that you (or others) can use. Ubuntu has a great guide on setting up basic security, but what do you actually need to do?

If you do not secure your server, expect it to be hacked into. Below are tips on securing your cloud server.

First, read more on scanning your server with a Lynis security scan.

Always use up to date software

Always use up-to-date software; malicious users can detect what software you use with sites like shodan.io (or port scan tools) and then look for weaknesses in well-published lists (e.g. WordPress, Windows, MySQL, Node, Liferay, Oracle etc). People can even use Google to search for login pages or sites with passwords in HTML (yes, it is that simple). Once a system is identified by a malicious user, they can send automated bots to break into your site (trying millions of passwords a day) or use tools to bypass existing defences (security researcher Troy Hunt found out it’s child’s play).

Portscan sites like https://mxtoolbox.com/SuperTool.aspx?action=scan are good for knowing what you have exposed.

You can also use local programs like nmap to view open ports

Install nmap

sudo apt-get install nmap

Find open ports

nmap -v -sT localhost

Starting Nmap 7.01 ( https://nmap.org ) at 2017-08-08 23:57 AEST
Initiating Connect Scan at 23:57
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 9101/tcp on 127.0.0.1
Discovered open port 9102/tcp on 127.0.0.1
Discovered open port 9103/tcp on 127.0.0.1
Completed Connect Scan at 23:57, 0.05s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00020s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
3306/tcp open  mysql
9101/tcp open  jetdirect
9102/tcp open  jetdirect
9103/tcp open  jetdirect

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

Limit ssh connections

Read more here.

Use ufw to set limits on login attempts

sudo ufw limit ssh comment 'Rate limit hit for openssh server'

Only allow known IPs access to your valuable ports

sudo ufw allow from 123.123.123.123/32 to any port 22

Delete unwanted firewall rules

sudo ufw status numbered
sudo ufw delete 8

Only allow known IPs to certain ports

sudo ufw allow from 123.123.123.123 to any port 80/tcp

Also, set outgoing traffic to known active servers and ports

sudo ufw allow out from 123.123.123.123 to any port 22

Don’t use weak/common Diffie-Hellman keys for SSL certificates; more information here.

openssl req -new -newkey rsa:4096 -nodes -keyout server.key -out server.csr
 
Generating a 4096 bit RSA private key
...

More info on generating SSL certs here and setting up Public Key Pinning here.

Intrusion Prevention Software

Do run fail2ban: Guide here https://www.linode.com/docs/security/using-fail2ban-for-security

I use iThemes Security to secure my WordPress and block repeat failed logins from certain IP addresses.

iThemes Security can even lock down your WordPress.

You can set iThemes to auto lock out users on x failed logins

Remember to use whitelists for your own IPs though (it is so easy to lock yourself out of servers).

Passwords

Do have strong passwords and change the root password provided by the hosts. https://howsecureismypassword.net/ is a good site to see how strong your password is from brute force password attempts. https://www.grc.com/passwords.htm is a good site to obtain a strong password.  Do follow Troy Hunt’s blog and twitter account to keep up to date with security issues.
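
If you prefer to generate a strong password locally instead of via a website, openssl can do it (my addition; adjust the length to taste):

openssl rand -base64 32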

Configure a Firewall: Basics

You should install a firewall on your Ubuntu server and configure it, and also configure the firewall offered by your host (e.g. AWS, Vultr, Digital Ocean).

Configure a Firewall on AWS

My AWS server setup guide is here. AWS allows you to configure the firewall here in the Amazon Console.

Type         Protocol  Port Range  Source     Comment
HTTP         TCP       80          0.0.0.0/0  Opens a web server port for later
All ICMP     ALL       N/A         0.0.0.0/0  Allows you to ping
All traffic  ALL       All         0.0.0.0/0  Not advisable long term but OK for testing today
SSH          TCP       22          0.0.0.0/0  Not advisable, try and limit this to known IPs only
HTTPS        TCP       443         0.0.0.0/0  Opens a secure web server port for later
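
If you script your AWS setup, the same rules can be added with the AWS CLI; a hypothetical sketch (the security group ID and IP below are placeholders):

# Open HTTPS to the world
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
# Limit SSH to a known IP (recommended over 0.0.0.0/0)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 123.123.123.123/32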

Configure a Firewall on Digital Ocean

Configuring a firewall on Digital Ocean (create a $5/m server here).  You can configure your Digital Ocean droplet firewall by clicking Droplet, Networking then Manage Firewall after logging into Digital Ocean.

Configure a Firewall on Vultr

Configuring a firewall on Vultr (create a $2.5/m server here).

Don’t forget to set IP rules for IPv4 and IPv6; only open the ports you need, and ensure applications have strong passwords.

Ubuntu has a firewall built in (documentation).

sudo ufw status

Enable the firewall

sudo ufw enable

Adding common ports

sudo ufw allow ssh/tcp
sudo ufw logging on
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 53
sudo ufw allow 443
sudo ufw allow 873
sudo ufw enable
sudo ufw status
sudo ufw allow http
sudo ufw allow https

Add a whitelist for your IP (use http://icanhazip.com/ to get your IP) to ensure you won’t get kicked out of your server.

sudo ufw allow from 123.123.123.123/24 to any port 22

More help here.  Here is a  good guide on ufw commands. Info on port numbers here.

https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers

If you don’t have a Digital Ocean server yet ($5 a month) click here, or grab a $2.5 a month Vultr server here.

Backups

rsync is a good way to copy files to another server; alternatively, use Bacula.

sudo apt install bacula
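
As a minimal rsync sketch (the host and paths below are placeholders), you could push your web root to a backup host over SSH:

rsync -avz -e ssh /www/ [email protected]:/backups/www/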

Basics

Initial server setup guide (Digital Ocean).

Sudo (admin user)

Read this guide on the Linux sudo command (the equivalent of Run as administrator on Windows).

Users

List users on an Ubuntu OS (or compgen -u)

cut -d: -f1 /etc/passwd

Common output

cut -d: -f1 /etc/passwd
root
daemon
bin
sys
sync
games
man
lp
mail
news
uucp
proxy
www-data
backup
list
irc
gnats
nobody
systemd-timesync
systemd-network
systemd-resolve
systemd-bus-proxy
syslog
_apt
lxd
messagebus
uuidd
dnsmasq
sshd
pollinate
ntp
mysql
clamav

Add User

sudo adduser new_username

e.g

sudo adduser bob
Adding user `bob' ...
Adding new group `bob' (1000) ...
Adding new user `bob' (1000) with group `bob' ...
Creating home directory `/home/bob' ...
etc..

Add user to a group

sudo usermod -a -G MyGroup bob

Show users in a group

getent group MyGroup | awk -F: '{print $4}'

This will show users in a group

Remove a user

sudo userdel username
sudo rm -r /home/username

Rename user

usermod -l new_username old_username

Change user password

sudo passwd username

Groups

Show all groups

compgen -g

Common output

compgen -g
root
daemon
bin
sys
adm
tty
disk
lp
mail
proxy
sudo
www-data
backup
irc
etc

You can create your own groups, but first you must be aware of existing group IDs

cat /etc/group

Then you can see your system’s groups and IDs.

Create a group

groupadd -g 999 MyGroup

Permissions

Read this https://help.ubuntu.com/community/FilePermissions

How to list users on Ubuntu.

Read more on setting permissions here.

Chmod help can be found here.

Install Fail2Ban

I used this guide on installing Fail2Ban.

apt-get install fail2ban

Check Fail2Ban often and add blocks to the firewall of known bad IPs

fail2ban-client status

Best practices

Ubuntu has a guide on basic security setup here.

Startup Processes

It is a good idea to review startup processes from time to time.

sudo apt-get install rcconf
sudo rcconf

Accounts

  • Read up on the concept of least privilege access for apps and services here.
  • Read up on chmod permissions.

Updates

Do update your operating system often.

sudo apt-get update
sudo apt-get upgrade

Minimal software

Only install what software you need
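
To review what is installed and remove what you do not use (the package name below is a placeholder):

apt list --installed
sudo apt-get remove --purge some-unused-package
sudo apt autoremove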

Exploits and Keeping up to date

Do keep up to date with exploits and vulnerabilities

  • Follow 0xDUDE on twitter.
  • Read the GDI.Foundation page.
  • Visit the Exploit Database
  • Vulnerability & Exploit Database
  • Subscribe to the Security Now podcast.

Secure your applications

  • NodeJS: Enable logging in applications you install or develop.

Ban repeat login attempts with Fail2Ban

Fail2Ban config

sudo nano /etc/fail2ban/jail.conf
[sshd]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3

Hosts File Hardening

sudo nano /etc/host.conf

Add

order bind,hosts
nospoof on

Add a whitelist with your IP in /etc/fail2ban/jail.conf (see this)

[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not                          
# ban a host which matches an address in this list. Several addresses can be                             
# defined using space separator.
                                                                         
ignoreip = 127.0.0.1 192.168.1.0/24 8.8.8.8

Restart the service

sudo service fail2ban restart
sudo service fail2ban status

Intrusion detection (logging) systems

Tripwire will not block or prevent intrusions, but it will log and give you a heads-up on risks and things of concern.

Install Tiger and Tripwire.

sudo apt-get install tiger tripwire

Running Tiger

sudo tiger

This will scan your system for issues of note

sudo tiger
Tiger UN*X security checking system
   Developed by Texas A&M University, 1994
   Updated by the Advanced Research Corporation, 1999-2002
   Further updated by Javier Fernandez-Sanguino, 2001-2015
   Contributions by Francisco Manuel Garcia Claramonte, 2009-2010
   Covered by the GNU General Public License (GPL)

Configuring...

Will try to check using config for 'x86_64' running Linux 4.4.0-89-generic...
--CONFIG-- [con005c] Using configuration files for Linux 4.4.0-89-generic. Using
           configuration files for generic Linux 4.
Tiger security scripts *** 3.2.3, 2008.09.10.09.30 ***
20:42> Beginning security report for simon.
20:42> Starting file systems scans in background...
20:42> Checking password files...
20:42> Checking group files...
20:42> Checking user accounts...
20:42> Checking .rhosts files...
20:42> Checking .netrc files...
20:42> Checking ttytab, securetty, and login configuration files...
20:42> Checking PATH settings...
20:42> Checking anonymous ftp setup...
20:42> Checking mail aliases...
20:42> Checking cron entries...
20:42> Checking 'services' configuration...
20:42> Checking NFS export entries...
20:42> Checking permissions and ownership of system files...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Checking for indications of break-in...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Performing rootkit checks...
20:42> Performing system specific checks...
20:46> Performing root directory checks...
20:46> Checking for secure backup devices...
20:46> Checking for the presence of log files...
20:46> Checking for the setting of user's umask...
20:46> Checking for listening processes...
20:46> Checking SSHD's configuration...
20:46> Checking the printers control file...
20:46> Checking ftpusers configuration...
20:46> Checking NTP configuration...
20:46> Waiting for filesystems scans to complete...
20:46> Filesystems scans completed...
20:46> Performing check of embedded pathnames...
20:47> Security report completed for simon.
Security report is in `/var/log/tiger/security.report.simon.170809-20:42'.

My Output.

sudo nano /var/log/tiger/security.report.username.170809-18:42

Security scripts *** 3.2.3, 2008.09.10.09.30 ***
Wed Aug  9 18:42:24 AEST 2017
20:42> Beginning security report for username (x86_64 Linux 4.4.0-89-generic).

# Performing check of passwd files...
# Checking entries from /etc/passwd.
--WARN-- [pass014w] Login (bob) is disabled, but has a valid shell.
--WARN-- [pass014w] Login (root) is disabled, but has a valid shell.
--WARN-- [pass015w] Login ID sync does not have a valid shell (/bin/sync).
--WARN-- [pass012w] Home directory /nonexistent exists multiple times (3) in
         /etc/passwd.
--WARN-- [pass012w] Home directory /run/systemd exists multiple times (2) in
         /etc/passwd.
--WARN-- [pass006w] Integrity of password files questionable (/usr/sbin/pwck
         -r).

# Performing check of group files...

# Performing check of user accounts...
# Checking accounts from /etc/passwd.
--WARN-- [acc021w] Login ID dnsmasq appears to be a dormant account.
--WARN-- [acc022w] Login ID nobody home directory (/nonexistent) is not
         accessible.

# Performing check of /etc/hosts.equiv and .rhosts files...

# Checking accounts from /etc/passwd...

# Performing check of .netrc files...

# Checking accounts from /etc/passwd...

# Performing common access checks for root (in /etc/default/login, /securetty, and /etc/ttytab...
--WARN-- [root001w] Remote root login allowed in /etc/ssh/sshd_config

# Performing check of PATH components...
--WARN-- [path009w] /etc/profile does not export an initial setting for PATH.
# Only checking user 'root'

# Performing check of anonymous FTP...

# Performing checks of mail aliases...
# Checking aliases from /etc/aliases.

# Performing check of `cron' entries...
--WARN-- [cron005w] Use of cron is not restricted

# Performing check of 'services' ...
# Checking services from /etc/services.
--WARN-- [inet003w] The port for service ssmtp is also assigned to service
         urd.
--WARN-- [inet003w] The port for service pipe-server is also assigned to
         service search.

# Performing NFS exports check...

# Performing check of system file permissions...
--ALERT-- [perm023a] /bin/su is setuid to `root'.
--ALERT-- [perm023a] /usr/bin/at is setuid to `daemon'.
--ALERT-- [perm024a] /usr/bin/at is setgid to `daemon'.
--WARN-- [perm001w] The owner of /usr/bin/at should be root (owned by daemon).
--WARN-- [perm002w] The group owner of /usr/bin/at should be root.
--ALERT-- [perm023a] /usr/bin/passwd is setuid to `root'.
--ALERT-- [perm024a] /usr/bin/wall is setgid to `tty'.

# Checking for known intrusion signs...
# Testing for promiscuous interfaces with /bin/ip
# Testing for backdoors in inetd.conf

# Performing check of files in system mail spool...

# Performing check for rookits...
# Running chkrootkit (/usr/sbin/chkrootkit) to perform further checks...
--WARN-- [rootkit004w] Chkrootkit has detected a possible rootkit installation
Possible Linux/Ebury - Operation Windigo installetd

# Performing system specific checks...
# Performing checks for Linux/4...

# Checking boot loader file permissions...
--WARN-- [boot02] The configuration file /boot/grub/menu.lst has group
         permissions. Should be 0600
--FAIL-- [boot02] The configuration file /boot/grub/menu.lst has world
         permissions. Should be 0600
--WARN-- [boot06] The Grub bootloader does not have a password configured.

# Checking for vulnerabilities in inittab configuration...

# Checking for correct umask settings for init scripts...
--WARN-- [misc021w] There are no umask entries in /etc/init.d/rcS

# Checking Logins not used on the system ...

# Checking network configuration
--FAIL-- [lin013f] The system is not protected against Syn flooding attacks
--WARN-- [lin017w] The system is not configured to log suspicious (martian)
         packets

# Verifying system specific password checks...

# Checking OS release...
--WARN-- [osv004w] Unreleased Debian GNU/Linux version `stretch/sid'

# Checking installed packages vs Debian Security Advisories...

# Checking md5sums of installed files

# Checking installed files against packages...
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/udev/hwdb.bin' does not belong to any package.

# Performing check of root directory...

# Checking device permissions...
--WARN-- [dev003w] The directory /dev/block resides in a device directory.
--WARN-- [dev003w] The directory /dev/char resides in a device directory.
--WARN-- [dev003w] The directory /dev/cpu resides in a device directory.
--FAIL-- [dev002f] /dev/fuse has world permissions
--WARN-- [dev003w] The directory /dev/hugepages resides in a device directory.
--FAIL-- [dev002f] /dev/kmsg has world permissions
--WARN-- [dev003w] The directory /dev/lightnvm resides in a device directory.
--WARN-- [dev003w] The directory /dev/mqueue resides in a device directory.
--FAIL-- [dev002f] /dev/rfkill has world permissions
--WARN-- [dev003w] The directory /dev/vfio resides in a device directory.

# Checking for existence of log files...
--FAIL-- [logf005f] Log file /var/log/btmp permission should be 660
--FAIL-- [logf007f] Log file /var/log/messages does not exist

# Checking for correct umask settings for user login shells...
--WARN-- [misc021w] There is no umask definition for the dash shell
--WARN-- [misc021w] There is no umask definition for the bash shell

# Checking symbolic links...

# Performing check of embedded pathnames...
20:47> Security report completed for username.

More on Tripwire here.
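
The SYN flooding and martian packet findings in the report above can be addressed with sysctl; here is a sketch of the usual fixes (my addition, verify the values against your kernel documentation):

sudo sysctl -w net.ipv4.tcp_syncookies=1
sudo sysctl -w net.ipv4.conf.all.log_martians=1
# Persist the settings across reboots
echo 'net.ipv4.tcp_syncookies = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.conf.all.log_martians = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p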

Hardening PHP

To harden the PHP config (and back it up), first create an info.php file in your website root folder with this content:

<?php
phpinfo();
?>

Now look for which PHP config file is being loaded (the “Loaded Configuration File” entry).

Back up your PHP config file.

TIP: Delete the file with phpinfo() in it as it is a security risk to leave it there.

TIP: Read the OWASP cheat sheet on using PHP securely here and securing php.ini here.

Some common security changes

file_uploads = On
expose_php = Off
error_reporting = E_ALL
display_errors          = Off
display_startup_errors  = Off
log_errors              = On
error_log = /php_errors.log
ignore_repeated_errors  = Off

Don’t forget to review logs, more config changes here.

Antivirus

Yes, it is a good idea to run antivirus in Ubuntu, here is a good list of antivirus software

I am installing ClamAV as it can be installed on the command line and is open source.

sudo apt-get install clamav

ClamAV help here.

Scan a folder

sudo clamscan --max-filesize=3999M --max-scansize=3999M --exclude-dir=/www/* -i -r /

Setup auto-update antivirus definitions

sudo dpkg-reconfigure clamav-freshclam

I set auto updates 24 times a day (every hour) via daemon updates.
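
You can confirm the frequency in the freshclam config; the Checks option is the number of definition checks per day (my addition):

grep -i '^Checks' /etc/clamav/freshclam.conf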

tip: You can download antivirus definition updates manually. If you only have a 512MB server your update may fail, and you may want to stop freshclam/php/nginx and mysql before you update to ensure the antivirus definitions update. You can move this to a cron job that updates at set times instead of the daemon to ensure updates happen.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v
Current working dir is /var/lib/clamav
Max retries == 5
ClamAV update process started at Tue Aug  8 22:22:02 2017
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1152
Software version from DNS: 0.99.2
Retrieving http://db.au.clamav.net/main.cvd
Trying to download http://db.au.clamav.net/main.cvd (IP: 193.1.193.64)
Downloading main.cvd [100%]
Loading signatures from main.cvd
Properly loaded 4566249 signatures from new main.cvd
main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Querying main.58.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/daily.cvd
Trying to download http://db.au.clamav.net/daily.cvd (IP: 193.1.193.64)
Downloading daily.cvd [100%]
Loading signatures from daily.cvd
Properly loaded 1742284 signatures from new daily.cvd
daily.cvd updated (version: 23644, sigs: 1742284, f-level: 63, builder: neo)
Querying daily.23644.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/bytecode.cvd
Trying to download http://db.au.clamav.net/bytecode.cvd (IP: 193.1.193.64)
Downloading bytecode.cvd [100%]
Loading signatures from bytecode.cvd
Properly loaded 66 signatures from new bytecode.cvd
bytecode.cvd updated (version: 308, sigs: 66, f-level: 63, builder: anvilleg)
Querying bytecode.308.82.1.0.C101C140.ping.clamav.net
Database updated (6308599 signatures) from db.au.clamav.net (IP: 193.1.193.64)

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart 

sudo /etc/init.d/clamav-freshclam start

Manual scan with a bash script

Create a bash script

mkdir /scripts
sudo nano /scripts/updateandscanav.sh

# Include contents below.
# Save and quit

chmod +x /scripts/updateandscanav.sh

Bash script contents to update antivirus definitions.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart

sudo /etc/init.d/clamav-freshclam start

sudo clamscan --max-filesize=3999M --max-scansize=3999M -v -r /

Edit the crontab to run the script every hour

crontab -e
1 * * * * /bin/bash /scripts/updateandscanav.sh > /dev/null 2>&1

Uninstalling Clam AV

You may need to uninstall ClamAV if you don’t have a lot of memory or find the updates are too big.

sudo apt-get remove --auto-remove clamav
sudo apt-get purge --auto-remove clamav

Setup Unattended Ubuntu Security updates

sudo apt-get install unattended-upgrades
sudo unattended-upgrades -d

At login, you should receive

0 updates are security updates.
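
For reference, the dpkg-reconfigure step writes an apt periodic config along these lines (my understanding of the usual file; check your own /etc/apt/apt.conf.d/20auto-upgrades):

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";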

Other

  • Read this awesome guide.
  • Install Fail2Ban
  • Do check your log files if you suspect suspicious activity.

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server


v1.92 added hardening a linux server link

Filed Under: Ads, Advice, Analitics, Analytics, Android, API, App, Apple, Atlassian, AWS, Backup, BitBucket, Blog, Business, Cache, Cloud, Community, Computer, CoronaLabs, Cost, CPI, DB, Development, Digital Ocean, DNS, Domain, Email, Feedback, Firewall, Free, Git, GitHub, GUI, Hosting, Investor, IoT, JIRA, LetsEncrypt, Linux, Malware, Marketing, mobile app, Monatization, Monetization, MongoDB, MySQL, Networking, NGINX, NodeJS, NoSQL, OS, Planning, Project, Project Management, Psychology, push notifications, Raspberry Pi, Redis, Route53, Ruby, Scalability, Scalable, Security, SEO, Server, Share, Software, ssl, Status, Strength, Tech Advice, Terminal, Transfer, Trello, Twitter, Ubuntu, Uncategorized, Video Editing, VLOG, VM, Vultr, Weakness, Web Design, Website, Wordpress Tagged With: antivirus, brute force, Firewall

Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute

July 29, 2017 by Simon

I visited https://letsencrypt.org/ where it said Let’s Encrypt is a free, automated, and open SSL Certificate Authority. That sounds great, time to check them out. This may not take 1 minute on your server but it did on mine (a self-managed Ubuntu 16.04/NGINX server). If you are not sure why you need an SSL cert read Life Is About to Get a Whole Lot Harder for Websites Without Https from Troy Hunt.

FYI you can set up an Ubuntu Vultr VM here (my guide here) for as low as $2.5 a month or a Digital Ocean VM server here (my guide here) for $5 a month; billing is charged by the hour and is cheap as chips.

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

But for the best performing server read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here). Also read my recent post on setting up Let’s Encrypt on subdomains.

I clicked Get Started and read the Getting started guide. I was redirected to https://certbot.eff.org/ where it said: “Automatically enable HTTPS on your website with EFF’s Certbot, deploying Let’s Encrypt certificates.” I was asked what web server and OS I use.

I confirmed my Linux version

lsb_release -a

Ensure your NGINX is set up (read my Vultr guide here) and you have a “server_name” specified in the “/etc/nginx/sites-available/default” file.

e.g

server_name yourdomain.com www.yourdomain.com;

I also like to set “root” to “/www” in the NGINX configuration.

e.g

root /www;

Tip: Ensure the www folder is set up first and has ownership.

mkdir /www
sudo chown -R www-data:www-data /www

Also, make and verify the contents of a /www/index.html file.

echo "Hello World..." > /www/index.html && cat /www/index.html

I then selected my environment on the site (NGINX and Ubuntu 16.04) and was redirected to the setup instructions.

FYI: I will remove mention of my real domain and substitute with thesubdomain.thedomain.com for security in the output below.

I was asked to run these commands

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx

Detailed instructions here.

Obtaining an SSL Certificate

I then ran the following command to automatically obtain and install (configure NGINX) an SSL certificate.

sudo certbot --nginx

Output

sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log

Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel):Invalid email address: .
Enter email address (used for urgent renewal and security notices)  If you
really want to skip this, you can run the client with
--register-unsafely-without-email but make sure you then backup your account key
from /etc/letsencrypt/accounts   (Enter 'c' to cancel): [email protected]

-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf. You must agree
in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel: A

-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about EFF and
our work to encrypt the web, protect its users and defend digital rights.
-------------------------------------------------------------------------------
(Y)es/(N)o: Y

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: thesubdomain.thedomain.com
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):1
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for thesubdomain.thedomain.com
Waiting for verification...
Cleaning up challenges
Deployed Certificate to VirtualHost /etc/nginx/sites-enabled/default for set(['thesubdomain.thedomain.com', 'localhost'])
Please choose whether HTTPS access is required or optional.
-------------------------------------------------------------------------------
1: Easy - Allow both HTTP and HTTPS access to these sites
2: Secure - Make all requests redirect to secure HTTPS access
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/default

-------------------------------------------------------------------------------
Congratulations! You have successfully enabled https://thesubdomain.thedomain.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=thesubdomain.thedomain.com
-------------------------------------------------------------------------------

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem. Your cert will expire on 2017-10-27. To obtain a new or tweaked version
   of this certificate in the future, simply run certbot again with
   the "certonly" option. To non-interactively renew *all* of your
   certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

That was the easiest SSL cert generation in history.

SSL Certificate Renewal (dry run)

sudo certbot renew --dry-run

Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/thesubdomain.thedomain.com.conf
-------------------------------------------------------------------------------
Cert not due for renewal, but simulating renewal for dry run
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for thesubdomain.thedomain.com
Waiting for verification...
Cleaning up challenges

-------------------------------------------------------------------------------
new certificate deployed with reload of nginx server; fullchain is
/etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem
-------------------------------------------------------------------------------
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)

IMPORTANT NOTES:
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.

SSL Certificate Renewal (Live)

certbot renew

The Lets Encrypt SSL certificate is only a 90-day certificate.

Again: The Lets Encrypt SSL certificate is only a 90-day certificate.

I’ll run “certbot renew” again in 2 months’ time to manually renew the certificate (and apply my higher-security configuration (see below)).

Certbot NGINX config changes (what did it do?)

It’s nice to see forced HTTPS added to the configuration

if ($scheme != "https") {
   return 301 https://$host$request_uri;
} # managed by Certbot

Cert stuff added

    listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/thesubdomain.thedomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

Contents of /etc/letsencrypt/options-ssl-nginx.conf

ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";

This contains too many legacy ciphers for my liking.

I changed /etc/letsencrypt/options-ssl-nginx.conf to tighten ciphers and add TLS 1.3 (as my NGINX Supports it).

ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;

ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

Enabling OCSP Stapling and Strict Transport Security in NGINX

I added the following to /etc/nginx/sites-available/default

# OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

Restart NGINX.

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart
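
To confirm OCSP stapling is actually being served, you can query the server with openssl (replace the domain; this check is my addition):

echo | openssl s_client -connect thesubdomain.thedomain.com:443 -status 2>/dev/null | grep -i -A 2 'OCSP response'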

SSL Labs SSL Score

I am happy with this.

Read my guide on Beyond SSL with Content Security Policy, Public Key Pinning etc

Automatic SSL Certificate Renewal

There are guides floating around YouTube on auto-renewing the SSL certs, but I’ll stick to manual issue and renewal of SSL certificates.
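
If you do want to automate renewals later, a typical sketch is a cron entry that runs the renew command daily (certbot only renews certificates close to expiry, so this is safe; my addition):

0 3 * * * certbot renew --quiet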

SSL Checker Reports

I checked the certificate with other SSL checking sites.

NameCheap SSL Checker – https://decoder.link/sslchecker/ (Passed). I did notice that the certificate will expire in 89 days (I was not aware of that). I guess a free 90-day certificate for a noncritical server is OK (as long as I renew it in time).

CertLogik – https://certlogik.com/ssl-checker/ (OK)

Comodo – https://sslanalyzer.comodoca.com (OK)

Lets Encrypt SSL Certificate Pros

  • Free.
  • Secure.
  • Easy to install.
  • Easy to renew.
  • Good for local, test or development environments.
  • It auto-detected my domain name (even a subdomain)

Lets Encrypt SSL Certificate Cons

  • The auto install process does not set up OCSP Stapling (I configured NGINX but the certificate does not support it, maybe to limit the Certificate Authority resources handling certificate revocation checks).
  • The auto install process does not set up HSTS (I enabled it in NGINX manually).
  • The auto install process does not set up HPKP. More on enabling Public Key Pinning in NGINX here.
  • Too many ciphers installed by default.
  • No TLS 1.3 enabled by default by the certbot secure auto install in my NGINX config (even though my NGINX supports it). More on enabling TLS 1.3 in NGINX here.

Read my guide on Beyond SSL with Content Security Policy, Public Key Pinning etc

I’d recommend you follow these Twitter security users

http://twitter.com/GibsonResearch

https://twitter.com/troyhunt

https://twitter.com/0xDUDE

Troubleshooting

I had one server where certbot failed to verify the SSL and said I needed a publicly routable IP (it was) and that the firewall needed to be disabled (it was). I checked the contents of “/etc/nginx/sites-available/default” and it appeared no additional SSL values were added (not even a listen on port 443!).

Certbot Error

I am viewing: /var/log/letsencrypt/letsencrypt.log

Forcing Certificate Renewal 

Run the following command to force a certificate to renew outside the crontab renewal window.

certbot renew --force-renew

Conclusion

Free is free, but I’d still use paid certs from Namecheap for important stuff/sites; no OCSP stapling on the CA and 90-day certs are deal breakers for me. The Lets Encrypt certificate is only a 90-day certificate (I’d prefer a 3-year certificate).

A big thank you to the Electronic Frontier Foundation for making this possible and providing a free service (please donate to them).

Lets Encrypt does recommend you renew certs every 60 days or use auto-renew tools, but rate limits are in force and Lets Encrypt admits their service is young (will they stick around?). Even Symantec SSL certs are at risk.

Happy SSL’ing.

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server

fyi, I followed this guide setting up Let’s Encrypt on Ubuntu 18.04.

Read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).


v1.8 Force Renew Command

v1.7 Ubuntu 18.04 info

V1.62 added hardening Linux server link

Filed Under: AWS, Cloud, Cost, Digital Ocean, LetsEncrypt, ssl, Ubuntu, VM, Vultr Tagged With: free, lets encrypt, ssl certificate

Setting up a Vultr VM and configuring it

July 29, 2017 by Simon

Below is my guide on setting up a Vultr VM and configuring it with a static IP, NGINX, MySQL, PHP and an SSL certificate.

I have blogged about setting up CentOS and Ubuntu servers on Digital Ocean before. Digital Ocean does not have data centres in Australia and this kills scalability. AWS is good but 4x the price of Vultr. I have also blogged about setting up an AWS server here. I tried to check out Alibaba Cloud but the verification process was broken, so I decided to check out Vultr.

Update (June 2018): I don’t use Vultr anymore, I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

UpCloud is way faster.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Setting up a  Vultr Server

1) Go to http://www.vultr.com/?ref=7192231 and create your own server today.

2) Create an account at Vultr.

Vultr signup

3) Add a  Credit Card

Vultr add cc

4) Verify your email address, Check https://my.vultr.com/promo/ for promos.

5) Create your first instance (for me it was an Ubuntu 16.04 x64 Server,  2 CPU, 4Gb RAM, 60Gb SSD, 3,000MB Transfer server in Sydney for $20 a month). I enabled IPV6, Private Networking, and  Sydney as the server location. Digital Ocean would only have offered 2GB ram and 40GB SSD at this price.  AWS would have charged $80/w.

Vultr deploy vm

2 Cores and 4GB ram is what I am after (I will use it for NGINX, MySQL, PHP, MongoDB, OpCache and Redis etc).

Vultr 20 month

6) I followed this guide: I generated a local SSH key and added it to Vultr.

snip

cd ~/.ssh
ls -al
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa): vultr_rsa    
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in vultr_rsa.
Your public key has been saved in vultr_rsa.pub.
cat vultr_rsa.pub 
ssh-rsa AAAAremovedoutput

Vultr add ssh key

7) I was a bit confused by the UI when adding the SSH key on the in-progress deploy server screen (the SSH key was added but was not highlighted, so I recreated the server to deploy and the SSH key now appears).

Vultr ass ssh key 2

Now time to deploy the server.

Vultr deploy now

Deploying now.

Vultr my servers

My Vultr server is now deployed.

Vultr server information

I connected to it with my SSH program on my Mac.

Vultr ssh

Now it is time to redirect my domain (purchased through Namecheap) to the new Vultr server IP.

DNS: @ A Name record at Namecheap

Vultr namecheap

Update: I forgot to add an A Name for www.

Vultr namecheap 2

DNS: Vultr (added the Same @ and www A Name records (fyi “@” was replaced with “”)).

Vultr dns

I waited 60 minutes and DNS propagation happened. I used the site https://www.whatsmydns.net to see how far the DNS replication had progressed while I was still receiving an error.

Setting the Server’s Time and Timezone (Ubuntu)

I checked the time on the server but it was wrong (20 hours behind)

sudo hwclock --show
Tue 25 Jul 2017 01:29:58 PM UTC  .420323 seconds

I manually set the timezone to Sydney, Australia.

sudo dpkg-reconfigure tzdata

I installed the NTP time syncing service

sudo apt-get install ntp

I configured the NTP service to use Australian servers (read this guide).

sudo nano /etc/ntp.conf

# added
server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org

I checked the time after restarting NTP.

sudo service ntp restart
sudo hwclock --show

The time is correct 🙂

Installing the NGINX Web Server (Ubuntu)

More on the differences between Apache and NGINX web servers.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

Installing NodeJS  (Ubuntu)

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

Installing MySQL  (Ubuntu)

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
>mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper
sudo mysql_secure_installation
>Y (Validate plugin)
>2 (Strong passwords)
>N (Don't change root password)
>Y (Remove anon accounts)
>Y (No remote root login)
>Y (Remove test DB)
>Y (Reload)
service mysql status
> mysql.service - MySQL Community Server
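
With MySQL secured you will probably want a database and a non-root user for your apps. A minimal sketch (mydb, mydbuser and the password are placeholders):

sudo mysql -u root -p
mysql> CREATE DATABASE mydb;
mysql> CREATE USER 'mydbuser'@'localhost' IDENTIFIED BY 'use-a-strong-password';
mysql> GRANT ALL PRIVILEGES ON mydb.* TO 'mydbuser'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;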

Install PHP 7.x and PHP7.0-FPM (Ubuntu)

sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm

php.ini

sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
> edit: upload_max_filesize = 8M
> edit: max_input_vars = 1000
> edit: memory_limit = 128M
# medium server: memory_limit = 256M
# large server: memory_limit = 512M

Restart PHP

sudo service php7.0-fpm restart	
service php7.0-fpm status
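
To confirm the new values were loaded you can grep PHP's configuration (note: the CLI reads /etc/php/7.0/cli/php.ini, so the phpinfo() test page later is the authoritative check for FPM):

php -i | grep -E 'memory_limit|upload_max_filesize|max_input_vars'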

Now install misc helper modules into PHP 7 (thanks to this guide)

sudo apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

Initial NGINX Configuration – Pre-SSL and Security (Ubuntu)

Here is a good guide on setting up NGINX for performance.

mkdir /www
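
Since NGINX runs as www-data (per the config below), you may also want to hand the web root over to that user. A small sketch, assuming /www is your web root:

sudo chown -R www-data:www-data /www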

Edit the NGINX configuration

sudo nano /etc/nginx/nginx.conf

File Contents: /etc/nginx/nginx.conf

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
        worker_connections 4000;
        use epoll;
        multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it's better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close the connection on a non-responding client, this will free up memory
        reset_timedout_connection on;


        # if the client stops responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session (this zone is referenced by limit_req above)
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

       # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # header buffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;


        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

       # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;


        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File Contents: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
 
server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6
 
        access_log /var/log/nginx/myservername.com.log;
 
        root /usr/share/nginx/www;
        index index.php index.html index.htm;
 
        server_name www.myservername.com myservername.com localhost;
 
        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
 
        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;
 
 
        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://myservername.com/$1 permanent;
 
        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 512M";
 
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

I talked to Dmitriy Kovtun (SSL CS) on the Namecheap chat to resolve a privacy error (I stuffed up and was getting the errors “Your connection is not private” and “NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN”).

Vultr chrome privacy

SSL checker says everything is fine.

Vultr ssl checker

I checked the certificate strength with SSL Labs (OK).

Vultr ssl labs

Test and Reload NGINX (Ubuntu)

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Create a test PHP file

<?php
phpinfo();
?>
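
I saved it into the web root so NGINX could serve it (I am assuming /www here; adjust the path to match your root directive):

echo '<?php phpinfo(); ?>' | sudo tee /www/index.php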

It Works.

Install Utils (Ubuntu)

Install an interactive folder size program

sudo apt-get install ncdu
sudo ncdu /

Vultr ncdu

Install a better disk check utility

sudo apt-get install pydf
pydf

Vultr pydf

Display startup processes

sudo apt-get install rcconf
sudo rcconf

Install JSON helper

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Increase the console history

HISTSIZE=10000
HISTCONTROL=ignoredups
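
These only apply to the current shell session; to make them stick, append them to ~/.bashrc (a minimal sketch):

echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
source ~/.bashrc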

I rebooted to see if PHP started up.

sudo reboot

OpenSSL Info (Ubuntu)

Read about updating OpenSSL here.

Update Ubuntu

sudo apt-get update
sudo apt-get dist-upgrade

Vultr Firewall

I configured the server firewall at Vultr and ensured it was set up by clicking my server, then Settings, then Firewall.

Vultr firewall

I then checked open ports with https://mxtoolbox.com/SuperTool.aspx

Assign a Domain (Vultr)

I assigned a domain to my VM at https://my.vultr.com/dns/add/

Vultr add domain

Charts

I reviewed the server information at Vultr (nice).

Vultr charts

Static IPs

You should also set up a static IP in /etc/network/interfaces as mentioned in the settings for your server: https://my.vultr.com/subs/netconfig/?SUBID=XXXXXX

Hello,

Thank you for contacting us.

Please try setting your OS's network interface configuration for static IP assignments in this case. The blue "network configuration examples" link on the "Settings" tab includes the necessary file paths and configurations. This configuration change can be made via the provided web console.

Setting your instance's IP to static will prevent any issues that your chosen OS might have with DHCP lease failure. Any instance with additional IPs or private networking enabled will require static addresses on all interfaces as well. 

--
xxxxx xxxxxx
Systems Administrator
Vultr LLC

Back up your existing Ubuntu 16.04 DHCP Network Configuration

cp /etc/network/interfaces /interfaces.bak

I would recommend you log a Vultr support ticket and get the right IPV4/IPV6 details to paste into /etc/network/interfaces while you can access your IP.

It is near impossible to configure the static IP when the server is refusing a DHCP IP address (this happened to me after 2 months).

If you don’t have time to set up a static IP you can roll with automatic DHCP IP assignment, and when your server fails to get an IP you can manually run the following command (changing ens3 to your network adapter) from the web console as root.

dhclient -1 -v ens3 

I logged a ticket for each of my other servers to get the contents of /etc/network/interfaces.

Support Ticket Contents:

What should the contents of /etc/network/interfaces be for xx.xx.xx.xx (Ubuntu: 16.04, Static)

Q1) What do I need to add to the /etc/network/interfaces file to set a static IP for server www.myservername.com/xx.xx.xx.xx/SUBID=XXXXXX

The server's IPV4 IP is: XX.XX.XX.XX
The server's IPV6 IP is: xx:xx:xx:xx:xx:xx:xx:xx (Network: xx:xx:xx:xx::, CIDR: 64, Recursive DNS: None)
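
For reference, a static Ubuntu 16.04 /etc/network/interfaces generally looks like the sketch below. Every value here is a placeholder; use the exact details from Vultr's network configuration examples page or the support ticket response:

# /etc/network/interfaces (static IPv4 sketch - placeholder values)
auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
        address xx.xx.xx.xx
        netmask 255.255.254.0
        gateway xx.xx.xx.1
        dns-nameservers 8.8.8.8 8.8.4.4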

Install an FTP Server (Ubuntu)

I decided on pure-ftpd based on this advice. I did try vsftpd but it failed. I used this guide to set up FTP and a user. I was able to log in via FTP but decided to set up C9 instead, so I stopped the FTP service.

Connected to my Vultr domain with C9.io
I logged in to C9.io, created a new remote SSH connection to my Vultr server, copied the SSH key and added it to my Vultr authorized_keys file:
sudo nano authorized_keys

I opened the site with C9 and it set up my environment.

I do love C9.io

Vultr c9

Add an SSL certificate (Reissue existing SSL cert at NameCheap)

I had a chat with Konstantin Detinich (SSL CS) on Namecheap’s chat and he helped me through reissuing my certificate.

I have a three-year certificate so I reissued it.  I will follow the Namecheap reissue guide here.

I recreated the certificate request:

cd /etc/nginx/
mkdir ssl
cd ssl
sudo openssl req -newkey rsa:2048 -nodes -keyout mydomain_com.key -out mydomain_com.csr
cat mydomain_com.csr
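
Before pasting the CSR anywhere, you can sanity-check what was generated (the CN should match your domain):

openssl req -noout -text -in mydomain_com.csr | grep 'Subject:'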

I posted the CSR into the Namecheap Reissue Certificate form.

Vultr ssl cert

Tip: Make sure your certificate is using the same name as the old certificate.

I continued the Namecheap prompts and specified HTTP domain control verification.

Namecheap Output: When you submit your info for this step, the activation process will begin and your SSL certificate will be available from your Domain List. To finalize the activation, you’ll need to complete the Domain Control Validation process. Please follow the instructions below.

Now I will wait for the verification instructions.

Update: I waited a few hours and the instructions never came, so I logged in to the NameCheap portal, downloaded the HTTP domain verification file and uploaded it to my domain.

Vultr ssl cert 2

I forgot to add the text file to the NGINX allowed index files list.

I added the following to the file /etc/nginx/sites-available/default:

index index.php index.html index.htm 53guidremovedE5.txt;

I reloaded and restarted NGINX

sudo nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart

The file now loaded over port 80. I then checked Namecheap chat (Alexandra Belyaninova) to speed up the HTTP domain verification and they said the text file needs to be placed in the /.well-known/pki-validation/ folder (not specified in the earlier steps).

http://mydomain.com/.well-known/pki-validation/53guidremovedE5.txt and http://www.mydomain.com/.well-known/pki-validation/53guidremovedE5.txt

The certificate reissue was all approved and available for download.

Comodo

I uploaded all files related to the ssl cert to /etc/nginx/ssl/ and read my guide here to refresh myself on what is next.

I ran this command in the folder /etc/nginx/ssl/ to generate a DH prime rather than downloading a nice new one from here.

openssl dhparam -out dhparams4096.pem 4096

This Namecheap guide will tell you how to activate a new certificate and how to generate a CSR file. Note: the guide to the left will generate a 2048-bit key and this will cap your SSL certificate's security to a B at https://www.ssllabs.com/ssltest/ so I recommend you generate a 4096-bit CSR key and a 4096-bit Diffie-Hellman key.

I used https://certificatechain.io/ to generate a valid certificate chain.
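
If you would rather build the chain by hand, it is just the certificates concatenated from your own certificate down towards the root. A sketch assuming Comodo's usual bundle file names (yours may differ):

cd /etc/nginx/ssl
cat mydomain_com.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > trust-chain.crt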

My SSL /etc/nginx/sites-available/default config

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
	listen 80 default_server;
	listen [::]:80 default_server;

        error_log /www-error-log.txt;
        access_log /www-access-log.txt;
	
	listen 443 ssl;

	limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

	root /www;
        index index.php index.html index.htm;

	server_name www.thedomain.com thedomain.com localhost;

        # ssl on; # removed: this caused too many HTTP redirects
        ssl_certificate /etc/nginx/ssl/trust-chain.crt;
        ssl_certificate_key /etc/nginx/ssl/thedomain_com.key;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        server_tokens off;
        ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        
        # Set SSL caching and storage/timeout values:
        ssl_session_timeout 4h;
        ssl_session_tickets off; # Requires nginx >= 1.5.9
        
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx => 1.3.7
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

	add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        add_header X-Content-Type-Options nosniff;
  
        # Use Google DNS
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://thedomain.com/$1 permanent;

	location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        location ~ /\.ht {
               deny all;
        }
	
}
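
Note that the server block above answers both port 80 and port 443. If you would rather force HTTPS, a common alternative (a sketch, not my live config) is a separate port-80 server that only redirects, with the two listen 80 lines removed from the SSL block:

server {
        listen 80;
        listen [::]:80;
        server_name www.thedomain.com thedomain.com;
        return 301 https://thedomain.com$request_uri;
}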

My /etc/nginx/nginx.conf Config

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
	worker_connections 4000;
	use epoll;
	multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it's better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close the connection on a non-responding client, this will free up memory
        reset_timedout_connection on;

        # if the client stops responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP 
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

        # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # header buffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;

        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s; 
        open_file_cache_valid 30s; 
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

        # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
	types_hash_max_size 2048;
	# server_tokens off;
	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_prefer_server_ciphers on;

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	
	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

Namecheap support checked my certificate with https://decoder.link/sslchecker/ (no errors). Other SSL checkers are https://certlogik.com/ssl-checker/ and https://sslanalyzer.comodoca.com/

I was given a new certificate to try by Namecheap.

Namecheap chat (Dmitriy) also recommended I clear my Google Chrome cache as they did not see errors on their side (this worked).

SSL Security

Read my past guide on adding SSL to a Digital Ocean server.

I am checking my site with https://www.ssllabs.com/ssltest/ (OK).

My site came up clean with shodan.io

Securing Ubuntu in the Cloud

Read my guide here.

OpenSSL Version

I checked the OpenSSL version to see if it was up to date.

openssl version
OpenSSL 1.1.0f  25 May 2017

Yep, all up to date https://www.openssl.org/

I will check often.

Install MySQL GUI

Installed the Adminer MySQL GUI tool (uploaded)

Don’t forget to check your server’s IP with www.shodan.io to ensure there are no back doors.

I had to increase PHP’s upload_max_filesize temporarily to allow me to restore a database backup. I edited /etc/php/7.0/fpm/php.ini and then reloaded PHP.

sudo service php7.0-fpm restart

I used Adminer to restore a database.

Support

I found the email support at Vultr was great, I had an email reply in minutes. The Namecheap chat was awesome too. I did have an unplanned reboot on a Vultr node that one of my servers was on (let’s hope the server survives).

The Vultr service status page is located here.

Conclusion

I now have a secure server with MySQL and other web resources ready to go. I will now add some remote monitoring and restore a website along with NodeJS and MongoDB.

site ready

Definitely give Vultr a go (they even have data centers in Sydney). Sign up with this link http://www.vultr.com/?ref=7192231

Namecheap is great for certificates and support.

ssl labs

Vultr API

Vultr has a great API that you can use to automate status pages or obtain information about your VM instances.

API Location: https://www.vultr.com/api/

First, you will need to activate API access and allow your IP addresses (IPv4 and IPv6) in Vultr. At first, I only allowed IPv4 addresses, but it looks as if Vultr uses IPv6 internally, so add your IPv6 IP as well (if you are hitting the API from a Vultr server). Beware that the JSON returned from the https://api.vultr.com/v1/server/list API has URLs (and tokens) to your virtual console and root passwords, so ensure your API key is secured.
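
A quick way to test that your key and allowed IPs are working is a curl call from the shell (replace the key with your own):

curl -H 'API-Key: EXAMPLEKEY' https://api.vultr.com/v1/server/list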

Here is some working PHP code to query the API

<?php

$ch = curl_init();
$headers = [
     'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);

print $server_output;

// Decode the JSON into an associative array (echoing the decoded object directly would fail)
print_r(json_decode($server_output, true));
?>

Your server will need curl installed, and you will need to enable URL opening in your php.ini file.

allow_url_fopen = On

Once you have curl (and the API) working via PHP, this code will return data from the API for a nominated server (replace ‘123456’ with the id from your server at https://my.vultr.com/).

$ch = curl_init();
$headers = [
'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

// Replace 123456 with the ID from your server at https://my.vultr.com/

//Get Server Location
$vultr_location = $array['123456']['location'];
echo "Location: $vultr_location <br/>";

//Get Server CPU Count
$vultr_cpu = $array['123456']['vcpu_count'];
echo "CPUs: $vultr_cpu <br/>";

//Get Server OS
$vultr_os = $array['123456']['os'];
echo "OS: $vultr_os<br />";

//Get Server RAM
$vultr_ram = $array['123456']['ram'];
echo "Ram: $vultr_ram<br />";

//Get Server Disk
$vultr_disk = $array['123456']['disk'];
echo "Disk: $vultr_disk<br />";

//Get Server Allowed Bandwidth
$vultr_bandwidth_allowed = $array['123456']['allowed_bandwidth_gb'];

//Get Server Used Bandwidth
$vultr_bandwidth_used = $array['123456']['current_bandwidth_gb'];

echo "Bandwidth: $vultr_bandwidth_used MB of $vultr_bandwidth_allowed MB<br />";

//Get Server Power Status
$vultr_power = $array['123456']['power_status'];
echo "Power State: $vultr_power<br />";

 //Get Server State
$vultr_state = $array['123456']['server_state'];
echo "Server State: $vultr_state<br />";

A raw response looks like this from https://api.vultr.com/v1/server/list

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 30 Jul 2017 12:02:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close
X-User: [email protected]
Expires: Sun, 30 Jul 2017 12:02:33 GMT
Cache-Control: no-cache
X-Frame-Options: DENY
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff

{"123456":{"SUBID":"123456","os":"Ubuntu 16.04 x64","ram":"4096 MB","disk":"Virtual 60 GB","main_ip":"###.###.###.###","vcpu_count":"2","location":"Sydney","DCID":"##","default_password":"removed","date_created":"2017-01-01 09:00:00","pending_charges":"0.01","status":"active","cost_per_month":"20.00","current_bandwidth_gb":0.001,"allowed_bandwidth_gb":"3000","netmask_v4":"255.255.254.0","gateway_v4":"###.###.###.#,"power_status":"running","server_state":"ok","VPSPLANID":"###","v6_main_ip":"####:####:####:###:####:####:####:####","v6_network_size":"##","v6_network":"####:####:####:###:","v6_networks":[{"v6_main_ip":"####:####:####:###:####:####::####","v6_network_size":"##","v6_network":"####:####:####:###::"}],"label":"####","internal_ip":"###.###.###.##","kvm_url":"removed","auto_backups":"no","tag":"Server01","OSID":"###","APPID":"#","FIREWALLGROUPID":"########"}}

I recommend the Paw software for any API testing locally on OSX.

Bonus: Converting Vultr Network totals from the Vultr API with PHP

Add the following as a global PHP function in your PHP file. I found the number formatting solution here.

<?php
// Found at https://stackoverflow.com/questions/2510434/format-bytes-to-kilobytes-megabytes-gigabytes 

function swissConverter($value, $format = true, $precision = 2) {
    // Below converts value into bytes depending on input (specify mb, for 
    // example)
    $bytes = preg_replace_callback('/^\s*(\d+)\s*(?:([kmgt]?)b?)?\s*$/i', 
    function ($m) {
        switch (strtolower($m[2])) {
          case 't': $m[1] *= 1024;
          case 'g': $m[1] *= 1024;
          case 'm': $m[1] *= 1024;
          case 'k': $m[1] *= 1024;
        }
        return $m[1];
        }, $value);
    if(is_numeric($bytes)) {
        if($format === true) {
            //Below converts bytes into proper formatting (human readable 
            //basically)
            $base = log($bytes, 1024);
            $suffixes = array('', 'KB', 'MB', 'GB', 'TB');   

            return round(pow(1024, $base - floor($base)), $precision) .' '. 
                     $suffixes[floor($base)];
        } else {
            return $bytes;
        }
    } else {
        return NULL; //Change to prefered response
    }
}
?>
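
A couple of hypothetical usage examples of the function above:

<?php
echo swissConverter('123456789');    // ≈ "117.74 MB" (a plain number is treated as bytes)
echo swissConverter('2 gb', false);  // returns the raw byte count: 2147483648
?>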

Now you can query the https://api.vultr.com/v1/server/bandwidth?SUBID=123456 API and get bandwidth information related to your server (replace 123456 with your servers ID).

<h4>Network Stats:</h4><br />
<?php

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Change 123456 to your server ID

curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/bandwidth?SUBID=123456');

$server_output = curl_exec ($ch);
curl_close ($ch);
//print  $server_output ;
curl_close($ch);

$array = json_decode($server_output, true);

//Get 123456 Incoming Bytes Day Before Yesterday
$vultr123456_incoming = $array['incoming_bytes'][0][1];
echo " &nbsp; &nbsp; Incoming Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_incoming, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Yesterday
$vultr123456_incoming = $array['incoming_bytes'][1][1];
echo " &nbsp; &nbsp; Incoming Data Total Yesterday: <strong>" . swissConverter($vultr123456_incoming, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Today
$vultr123456_incoming = $array['incoming_bytes'][2][1];
echo " &nbsp; &nbsp; Incoming Data Total Today: <strong>" . swissConverter($vultr123456_incoming, true) . "</strong><br/><br/>";

//Get 123456 Outgoing Bytes Day Before Yesterday
$vultr123456_outgoing = $array['outgoing_bytes'][0][1];
echo " &nbsp; &nbsp; Outgoing Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_outgoing, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Yesterday
$vultr123456_outgoing = $array['outgoing_bytes'][1][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr123456_outgoing, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Today
$vultr123456_outgoing = $array['outgoing_bytes'][2][1];
echo " &nbsp; &nbsp; Outgoing Data Total Today: <strong>" . swissConverter($vultr123456_outgoing, true) . "</strong><br/>";

echo "<br />";
?>

Bonus: Pinging a Vultr server from the Vultr API with PHP’s fsockopen function

Paste the ping function globally

<?php
function pingfsockopen($host, $port=443, $timeout=3)
{
        // Suppress the warning fsockopen raises on failure; the return value is checked instead
        $fsock = @fsockopen($host, $port, $errno, $errstr, $timeout);
        if ( ! $fsock )
        {
                return FALSE;
        }
        else
        {
                fclose($fsock);
                return TRUE;
        }
}
?>

Now you can grab the server's IP from https://api.vultr.com/v1/server/list and then ping it (on SSL port 443).

//Get Server 123456 IP
$vultr_mainip = $array['123456']['main_ip'];
$up = pingfsockopen($vultr_mainip);
if( $up ) {
        echo " &nbsp; &nbsp; Server is UP.<br />";
}
else {
        echo " &nbsp; &nbsp; Server is DOWN<br />";
}

Setup Google DNS

sudo nano /etc/network/interfaces

Add line

dns-nameservers 8.8.8.8 8.8.4.4

What have I missed?

Read my blog post on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

Read my blog post on securing your Ubuntu server in the cloud.

Read my blog post on running an Ubuntu system scan with Lynis.


v1.993 added log info

Filed Under: Cloud, Development, DNS, Hosting, MySQL, NodeJS, OS, Server, ssl, Ubuntu, VM Tagged With: server, ubuntu, vultr

Digital disruption or digital tinkering

December 20, 2016 by Simon

The biggest buzzwords used by prime ministers, presidents or management these days have been “Innovation” and “digital disruption”. As a developer or manager do you understand what goes into a new digital customer-focused service like an API or data-driven portal? How well are your business and products doing in the age of innovation and digital disruption? Do you listen to what your customers want or need?

When to Pivot

There comes a time when businesses realize they need to pivot in order to stay viable.

  • People don’t rent VHS movies; they download movies from the Internet.
  • Printing photographs, who does that anymore?
  • People learn from videos on YouTube or Khan Academy for free, or pay for courses from Pluralsight.com, Coursera.org, Udemy.com or Lynda.com.
  • Information is wanted 24/7, with a call to customer service only if information cannot be sourced online.

kodak-bankruptcy

Image source.

Chances are 90% of your customers are using mobile or tablet devices on any given day.  If you are not interacting with your customers via personalized/mobile technology prepare to be overtaken as a business.

Pivoting may require you to admit you are behind the eight ball, take a risk and set up a new customer-focused web portal, API, app or services. Make sure you know what you need before the lure of services, buzzwords and “shiny object syndrome” from innovation blog posts and consultants takes hold.

Advocates and blockers of change

Creating change in an organization that bean-counts every dollar, exterminates all risk and ignores ideas is a hard sell. How do you get support from those with power and endless rolls of red tape?

Bad reasons for saying no to innovation:

  • You can’t create a mobile app to help customers because the use of our logo won’t be approved.
  • Don’t focus on customer-focused automation, analytics and innovation because internal manual processes need attention first.
  • Possible changes in 2 years outside of our control will possibly impact anything we create.
  • Third eyelids.

Management’s support of experimentation and change is key to innovation. Harvard Business Review has a great post on this: The Why, What, and How of Management Innovation.

Does your organization value innovation? This is possibly the best video that describes how the best businesses focus on innovation and take risks. Simon Sinek: How great leaders inspire action.

Here is a great post on How To Identify The Most Dangerous Person In Your Company who blocks innovation and change.

Also a few videos on getting staff on board and motivation and productivity.

Project Perspectives
consultant_001

Project Focus

  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.

I said that three times (because it is important).

Before you begin coding, learn from those who have failed

Here are some of the best tips I have collected from start-ups who have failed.

  • We didn’t spend enough time talking with customers and we’re rolling out features that I thought were great, but we didn’t gather enough input from clients. We didn’t realize it until it was too late. It’s easy to get tricked into thinking your thing is cool. You have to pay attention to your customers and adapt to their needs.
  • The cloud is great. Outsourcing is great. Unreliable services aren’t. The bottom line is that no one cares about your data more than you do – there is no replacement for a robust due diligence process and robust thought about avoiding reliance on any one vendor.
  • Your heart doesn’t get satisfied with any levels of development. Ignore your heart. Listen to your brain.
  • You can always iterate and extrapolate later. Wet your feet asap.
  • As the product became more and more complex, the performance degraded. In my mind, speed is a feature for all web apps so this was unacceptable, especially since it was used to run live, public websites. We spent hundreds of hours trying to speed up the app with little success. This taught me that we needed to have benchmarking tools incorporated into the development cycle from the beginning due to the nature of our product.
  • It’s not about good ideas or bad ideas: it’s about ideas that make people talk. Make some aspect of your product easy and fun to talk about, and make it unique.
  • We really didn’t test the initial product enough. The team pulled the trigger on its initial launches without a significant beta period and without spending a lot of time running QA, scenario testing, task-based testing and the like. When v1.0 launched, glitches and bugs quickly began rearing their head (as they always do), making for delays and laggy user experiences aplenty — something we even mentioned in our early coverage.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • Our biggest self-realization was that we were not users of our own product. We didn’t obsess over it and we didn’t love it. We loved the idea of it. That hurt.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.
  • It may seem surprising that a seemingly successful product could fail, but it happens all the time. Although we arguably found product/market fit, we couldn’t quite crack the business side of things. Building any business is hard, but building a business with a single app offering and half of your runway is especially hard.

Buzzwords

The Innovation landscape is full of buzzwords, here are just a few you will need to know.

  • API – Application Program Interface is a method that uses a web address ( http://www.server.com/api/important/action/ ) to accept requests and deliver results. Learn more about APIs here http://www.programmableweb.com/category/all/apis
  • AR – Augmented reality is where you use a screen on a mobile, tablet or PC to overlay 3D or geospatial information.
  • Big Data – Is about taking a wider view of your business data to find insights and to predict and improve products and services.
  • BYOD – Bring your own device.
  • BYOC – Bring your own cloud.
  • Caching – Using software to deliver data from memory rather than from slower database each time.
  • Cloud – Someone else’s computer that you run software or services on.
  • CouchDB – An Apache designed Key/Value NoSQL JSON store database that focuses on eventual replication.
  • DaaS – Desktop as a service
  • DbaaS – Database as a service (hardware and database software maintained by others but your data).
  • DBMS – Database Management System – the GUI
  • HPC – High-Performance Computing.
  • IaaS – Cloud-based Servers and infrastructure (Google Cloud, Amazon AWS, Digital Ocean and Vultr and Rackspace).
  • IDaaS – Third Party Authorisation management
  • IOPS – Input/Output Operations per Second – what limitations are on the interface or software in question.
  • IoT – Internet of Things are small devices that can display, sense or update information (an internet-connected fridge or a button that orders more toilet paper).
  • iPaaS – integration Platform as a Service (software to integrate multiple XaaS)
  • JSON – A better CSV file (read more here)
  • MaaS – Monitoring as a Service (e.g Keymetrics.io)
  • CaaS – Communication as a service (e.g. http://www.twilio.com)
  • Micro-services – an existing service that is managed by another vendor (e.g Notifications, login management, email or storage), usually charged by usage.
  • MongoDB – Another Key/Value NoSQL JSON Database that has upfront Replication
  • NoSQL – A No SQL database that stores data in JSON documents instead of normalised related tables.
  • PaaS – A larger stack of SaaS that you can customise from vendors: Azure (Active Directory, Compute, Storage Blobs etc), AWS (SQS, RDS, ElastiCache, Elastic File System), Google Cloud (Compute Engine, App Engine, Datastore), Rackspace etc.
  • Rate Limiting – Ability to track and limit a user’s request to an API.
  • SaaS – A smaller software component that you can use or integrate (Google Apps, Cisco WebEx, GoToMeeting).
  • Scalable – the ability to have a website or service handle thousands to millions of hits and have a baked-in way to handle exponential growth.
  • Scale Up – Increase the CPU speed and thus workload.
  • Scale Out – Adding more servers and distributing the load instead of making servers faster.
  • SQL – A traditional relational database query language.
  • VR – Virtual Reality is where you totally immerse yourself in a 3D world with a head-mounted display.
  • XaaS – Anything as a service.

External or Online Advice

A consultant once joked to our team that their main job was to “Con” and “Insult” you ( CONinSULTant ).  Their main job is to promote what they know/sell and sow seeds of doubt about what you do. Having said that please take my advice with a grain of salt (I am just relaying what I know/prefer).

Consultants need to rapidly convert you to their way of thinking (and services); consultants gloss over what they don’t know and lead you down a path to happy part-solution nirvana (often ignoring your legacy apps or processes; any roadblocks are relished as an opportunity for more money-making). This is great if you have endless buckets of money and want to rewrite things over and over.

Having consultants design and develop a solution is not all bad, but that would make this developer-focused blog post boring.

Microsoft IIS, Apache, NGINX and Lighttpd are all good web servers, but each has a different memory footprint, performance, and features when delivering static v dynamic content, and each platform has a maximum number of concurrent users it can handle a second for a given server configuration.

You don’t need expensive solutions; read this blog post on “How I built an app with 500,000 users in 5 days on a $100 server”.

Snip: I assume my apps will be successful. There’s no point in building an app assuming it won’t be successful. I would not be able to sleep if my app gains traction and then dies due to bad tech. I bake minimum viable scalability principles into my app. It’s the difference between happiness and total panic. It’s what I think should be part of an app MVP (Minimum Viable Product).

Blind googling to find the best platform can be misleading as it is hard to compare apples to apples. Take your time and write some code and evaluate for yourself.

  • This guide highly recommends Microsoft .NET and IIS web servers: https://www.ageofascent.com/2016/02/18/asp-net-core-exeeds-1-15-million-requests-12-6-gbps/
  • This guide says G-WAN, NGINX and Apache are good http://gwan.com/benchmark

Once you start worrying about scalability you start to plan for multiple servers, load balancing, replication and caching; be prepared to open your wallet.

I prefer the free NGINX, and if I need more grunt down the track I can move to NGINX Plus as it has loads of advanced scalability and caching options https://www.nginx.com/products/.
Alternatively, you can use XaaS for everything and have other people worry about the uptime/scaling and data storage, but I find it is inevitable that you will need the flexibility of a self-managed server and FULL control of the core processes.

Golden rule = prove it is cheaper/faster/more reliable and don’t just trust someone. 

Common PaaS, SaaS and Self-Managed Server Vendors

Amazon AWS and Azure are the go-to cloud vendors who offer robust and flexible offerings.

Azure: https://azure.microsoft.com/en-us/

Amazon AWS: https://aws.amazon.com/

Google Cloud has many cloud offerings but product selection is hard. Prices are high and Google tends to kill off products that don’t make money (e.g. Google Gears).

Google Cloud:

https://cloud.google.com/

Simple Self Managed Servers

If you want a server in the cloud on the cheap Linode and Digital Ocean have you covered.

  • Digital Ocean: http://www.digitalocean.com
  • Vultr: https://www.fearby.com/article/setting-vultr-vm-configuring/
  • Linode: https://www.linode.com/

High-End Corporate vendors

  • Rackspace: https://www.rackspace.com/en-au/cloud
  • IBM Cloud: http://www.ibm.com/cloud-computing/au/#infrastructure

Other vendors

  • Engineyard: http://www.engineyard.com/
  • Heroku: https://www.heroku.com/
  • Cloud66: http://www.cloud66.com/
  • Parse: DEAD

Moving to Cloud Pros

  • Lowers Risk
  • Outsource talent
  • Scale to millions of users/hits
  • Pay for what you use
  • Granular access
  • Potential savings *
  • Lower risk *

Moving to Cloud Cons

  • Usually billed in USD
  • Limited upload/downloads or API hits a day
  • Intentional tier pain points (Limited storage, hits, CPU, data transfers, Minimum servers).
  • Cheaper multi-tenant servers v expensive dedicated servers with dedicated support
  • Limited IOPS (e.g. 30 API hits a second, then $100 per additional 10 req/sec)
  • XaaS Price changes
  • Not fully integrated (still need code)
  • Latency between Services.
  • Limited access for developers (not granular enough).
  • Security

Vendors can change their prices whenever they want. I had a cluster of MongoDB servers running on AWS (via http://www.mongodb.com/cloud/) and one day they said they needed to increase their prices because they underestimated the costs for the AWS servers. They gave me some credit but I was instantly paying more and was also tied to USD (not AUD). A fall in the Australian dollar will impact bills in a big way.

Vendor Uptime:

Not all vendors are stable, do your research on who are the most reliable: https://cloudharmony.com/status

Quick Status Pages of leading vendors.

  • AWS: https://status.aws.amazon.com/
  • Azure: https://azure.microsoft.com/en-us/status/
  • Vultr: https://www.fearby.com/article/setting-vultr-vm-configuring/
  • Digital Ocean: https://status.digitalocean.com/
  • Google: https://status.cloud.google.com/
  • Heroku: https://status.heroku.com/
  • LiNode: https://status.linode.com/
  • Cloud66: http://status.cloud66.com/

Some vendors have patchy uptime

consultant_003

Management Software and Support:

Don’t lock in a vendor(s) until you have tested their services and management interfaces and can accurately forecast your future app costs.

I found that Digital Ocean was the simplest to get started with, had capped prices and had the best documentation. However, Digital Ocean does not sell advanced services or advanced support, and they did not have servers in Australia.

Google Cloud left a lot to be desired with product selection, setup and documentation. It did not take me long to realize I would be paying a lot more on Google platforms.

Azure was quite clean and crisp but lacked controls I was looking for. Azure is designed to be simple with a professional appearance (I found the default security was not high enough for my unmanaged Ubuntu servers). Azure was 4x the cost of Digital Ocean servers and 2x the cost of AWS.

AWS management interfaces were very confusing at first but support was not far away online.  AWS seemed to have the most accurate cost estimators and developer tools to make it my default choice.

Free Trials

When searching for a cloud provider to test look for free trials and have a play before you decide what is best.

https://aws.amazon.com/free/ – 12 Month free trial.

https://azure.microsoft.com/en-us/free/ –  $200 credit.

Digital Ocean: 2 months free for new customers.

Cloudant offered $50 free a month for a single multi-tenant NoSQL database, but after the IBM acquisition the costs seem steep (financing is available, so it must be expensive). I walked away from IBM because it was going to cost me $4,000 a month for 1 dedicated Cloudant CouchDB node.

Costs

It is hard to forecast your costs if you do not know what components you will use, what the CPU activity will be and what data will be delivered.

Google and AWS have a confusing mix of base rates, CPU credits, and data costs. You can boost your credits and usage but it will cost you compared to a flat rate server cost.

Digital Ocean and Linode offer great low rates for unmanaged servers and reasonable extra charges, where other vendors will scalp you from the get-go, but they lack a global presence.

Azure is a tad more expensive than AWS and a lot more expensive than Digital Ocean.

At some point you need to spin up some servers, play around and see if you need to change to another vendor. I was tempted by the IBM Cloudant CouchDB DBaaS but it would have been $4,000 USD a month (it did come with 24/7 techs that monitored the service for me).

Databases

Relational databases like MySQL and SQL Server are solid choices but replication can be tricky. See my guide here.

  • NoSQL databases are easier to scale up and out, but more care has to be given to the software controlling the data and collisions; relational databases are harder to scale but are designed to enforce referential integrity.

Design what you need and then choose a relational, NoSQL or mix of databases. A good API will join a mix of databases and deliver the best of both worlds.

E.g. geographic data may be best served from MongoDB, but related customer data from MySQL or MS SQL Server.

Database cost will also impact your database decisions. E.g. why set up SQL Server when MySQL will do, or why set up a MongoDB cluster when a single MongoDB instance will do?

Also, when you scale out, database capabilities vary (the classic CAP theorem trade-off):

  • Availability – Each client can always read and write data.
  • Consistency – All clients have the same view of the data
  • Partition Tolerance – The System works well despite physical network partitions.

nosql-triangle

Database decisions will impact the code and complexity of your application.

Website and API Endpoint

The website will be the glue that sticks all the pieces together. An API on a web server ( e.g. https://www.myserver.com/api/v1/do/something/important ) may trigger these actions:

  1. Check the request origin (IP ban) – check the IP cache or request a new IP lookup
  2. Validate SSL status.
  3. Check the users login tokens (are they logged in) – log output
  4. Check a database (MYSQL)
  5. Check for permissions – is this action allowed to happen?
  6. Check for rate-limiting – has a threshold been exceeded?
  7. Check another database (MongoDB)
  8. Prepare data
  9. Resolve the API request – return the data.

A web server then becomes very important as it is managing a lot. If you decide to use a remote “as a service” ID management API or application endpoint, would each of these steps happen in a reasonable time-frame? Stormpath may be a great service for ID auth but I had issues with reliability early on and costs were unpredictable; Google Firebase is great at application endpoints but they can be expensive.

Carefully evaluate the pros and cons of going DIY/self-managed versus a mix of “as a service” and full “as a service”.

I find that NGINX and NodeJS are the perfect balance between cost, flexibility, scalability and risk [ link to my scalability guide ]. NodeJS is great for integrating MySQL, API or MongoDB calls into the back end in a non-blocking way. NodeJS can easily integrate caching and connection pooling to enhance throughput.

Mulesoft is a good (but expensive) API development suite https://www.mulesoft.com/platform/api

Location, Latency and Local Networks.

You will want to try and keep all of your servers and services as close as possible; don’t spin up a Digital Ocean server in Singapore if your customers are in Australia (the NETFLIX effect will see latency drop off a cliff at night). Also, having a database on one vendor and a web server on another vendor may add extra latency issues; try and use the same vendor in the same data centre.

Don’t forget SSL will add about 40ms to any request locally (and up to 200ms for overseas servers); that does impact maximum concurrent users (but you need strong SSL).

  • Application scalability on a budget (my journey)
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
  • The quickest way to setup a scalable development ide and web server
  • More here: https://fearby.com/

Also, remember the servers may have performance limitations (maximum IOPS); sometimes you need to pay for higher IOPS or performance to get better throughput.

Security

Ensure that everything is secure and logged, and that you have some sort of IP banning or rate-limiting, session tokens/expiry and/or auto log-out.

Your servers need to be patched and potential exploits monitored; don’t delay updating software like MySQL and OpenSSL when exploits are known.

Consider getting advice from a company like https://www.whitehack.com.au/ where they can review your code and perform penetration testing.

  • Beyond SSL with Content Security Policy, Public Key Pinning etc
  • Update OpenSSL on a Digital Ocean VM
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

You may want to limit the work you do on authorization management and get a third party to do it; https://www.okta.com/ or http://www.stormpath.com can help here.

You will certainly need to implement two-factor authentication, OAuth 2, session tokens, forward secrecy, rate limiting, IP logging and polymorphic data return via the API. Security is a big one.

Here is a benchmark for an API hit overseas with and without SSL

consultant_002-1

Moving my Digital Ocean Server from Singapore to AWS in Australia dropped my API requests to under 200ms (SSL, complete authorization, logging and payload delivery).

Monitoring and Benchmarking

Monitoring your website’s health (CPU, RAM and disk) along with software and database monitoring is very important to maintain a service.

https://keymetrics.io/ is a great NodeJS service and API monitoring application.

consultant_004

PM2 is a great node module that integrates Keymetrics with NodeJS.

CPU Busy

Siege is a good command-line benchmark tool, check out my guide here.

http://www.loader.io is a great service for hitting your website from across the world.

[Screenshot: AWS MongoDB load test]

End to End Analytics

You should be capturing analytics from end to end (failed logins, invalid packets, user usage etc).  Caching content and blocking bad users can then be implemented to improve performance.

Developer access

All platforms offer varied levels of access for developers to change things.  I prefer the awesome http://www.c9.io for connecting to my servers.

[Screenshot: the C9 IDE]

If you go with a high-level SaaS (Microsoft CRM, Sitecore CRM etc) you may be locked into outdated software that is hard for developers to modify and support.

Don’t forget your customers.

At this point, you will have a million thoughts on possible solutions and problems, but don’t forget to concentrate on what you are developing and whether it is viable.  Do you have validated customer needs, and will you be working to solve those problems?

Project Pre-Mortem

Don’t be afraid to research what could go wrong, are you about to spend money on adding another layer of software to improve something but not solve the problem at hand?

It is a good idea to quickly guess what could go wrong before deciding on a way forward.

  • Server scalability
  • Features not polished
  • Does not meet customer needs
  • Monetization Issues
  • Unknown usage costs
  • Bad advice from consultants
  • Vendors collapsing or being bought out.

Long game

Make sure you choose a vendor that won’t go broke.  Smaller vendors like Parse were gobbled up by Facebook, which then closed Parse’s doors, leaving customers in the lurch.  Even C9.io has been purchased by AWS and its future is uncertain.  Will Linode and Digital Ocean be able to compete against AWS and Azure? Don’t lock yourself into one solution, and always have a backup plan.

Do

  • Do know what your goal is.
  • Make a start.
  • Iterate in public.
  • Test everything.

Don’t

  • Don’t trust what you have been told.
  • Don’t develop without a goal.
  • Don’t be attracted to buzzwords, new tech and shiny objects.

Good luck and happy coding.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

Edit v1.11

Filed Under: Backup, Business, Cloud, Development, Hosting, Linux, MySQL, NodeJS, Scalability, Scalable, Security, ssl, Uncategorized Tagged With: digital disruption, Innovation

Creating and configuring a CentOS server on Digital Ocean

December 12, 2016 by Simon

Normally I prefer Ubuntu servers, but I needed a CentOS server ( info ) for some software. This is a quick guide that lists anything that is different from setting up an Ubuntu server.

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Creating a CentOS Server

If you get stuck check out my Ubuntu Digital Ocean guide here or my Ubuntu on AWS guide here.

Log in to Digital Ocean (get 2 months free by using my referral link here).

Update 2018: For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

Create and deploy a CentOS 7.2 64 bit Server at Digital Ocean.

Reset your root password (so we can log in via SSH and install NodeJS before we connect to C9.io).

Log in to the server with SSH.
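
For example (substitute your droplet’s IP address):

ssh root@your_droplet_ip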

Setting the timezone is different from Ubuntu.

# Use this command to find your nearest Timezone
timedatectl list-timezones | grep Sydney
> Australia/Sydney

# Set the timezone by running this command
sudo timedatectl set-timezone Australia/Sydney

date

Setup the NTP time sync daemon

sudo yum install ntp
sudo systemctl start ntpd
sudo systemctl enable ntpd

Setup a Swap file

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
> Setting up swapspace version 1, size = 4194300 KiB
> no label, UUID=xxxxxxx-xxxx-4axxxx63-xxxx-xxxxxxxxxxx
sudo swapon /swapfile
sudo sh -c 'echo "/swapfile none swap sw 0 0" >> /etc/fstab'
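
Verify the swap file is active:

# List active swap devices and check memory/swap totals
swapon -s
free -m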

Update Packages

yum update && yum upgrade

Install Common Packages

yum install gcc
gcc --version

yum install wget
yum install p7zip

yum install java
java -version

Monitor Open Ports:

yum install nmap

# List open ports
nmap 127.0.0.1
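
You can also list listening sockets directly on the server (ss ships with CentOS 7):

# Show listening TCP ports and the owning processes
ss -tlnp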

More info on ports and firewalls here.

Install the Links text based browser

yum install links

links www.google.com

Install Apache

yum install httpd

Install the nano text editor

yum install nano

Configure Apache to listen on port 80: edit /etc/httpd/conf/httpd.conf and add the directive below.

Listen 80

Note the location of your HTTP docs root folder (DocumentRoot: "/var/www/html").
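
After editing httpd.conf, it is worth checking the syntax and making Apache start now and at every boot (standard CentOS 7 commands; the rest of this guide assumes httpd is running):

# Check config syntax, then start Apache now and enable it at boot
apachectl configtest
sudo systemctl start httpd
sudo systemctl enable httpd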

Firewall configuration (more info here)

# Enable the firewall at boot and start it now
systemctl enable firewalld
systemctl start firewalld

# Check the firewall status
systemctl status firewalld

# Allow HTTP traffic (see the note near the end of this guide on making this rule permanent)
firewall-cmd --add-service=http

Buy a domain from NameCheap, then go to your Digital Ocean droplet list and assign the domain name to the droplet (see the guide here).

Summary:

A     @    %YOUR_DROPLET_IPV4_ADDRESS%
A     www  %YOUR_DROPLET_IPV4_ADDRESS%
AAAA  @    %YOUR_DROPLET_IPV6_ADDRESS%
CNAME *    www.yournewdomainname.com.
NS    ns1.digitalocean.com.
NS    ns2.digitalocean.com.
NS    ns3.digitalocean.com.

Go to Namecheap and point the domain to custom DNS ( ns1.digitalocean.com, ns2.digitalocean.com and ns3.digitalocean.com ), then remove the placeholder “domain just provisioned” warning.

On your server (SSH) run the following command

hostnamectl set-hostname --static www.yourdomainname.com

Configure CentOS Apache to redirect www.yourdomain.com to yourdomain.com (guide here).

Wait for your domain to become available (check that you can access it via www.yourdomainname.com and yourdomainname.com).

I added these lines to the following file (/etc/cloud/templates/hosts.redhat.tmpl) as my www. subdomain was not loading (I am not sure if it helped).

# ipv4 config area
127.0.0.1 www.mydomainname.com www

# ipv6 config area
::1 www.mydomainname.com www

Then I restarted the network:

service network restart
sudo systemctl status network.service
sudo systemctl restart httpd

I then wrote an .htaccess redirect to send all requests to www.mydomainname.com

cd /var/www/html
sudo nano .htaccess

Contents of the .htaccess file (in the Apache docs root folder):

# Redirect www to non-www
# RewriteEngine On
# RewriteBase /
# RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
# RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

# Redirect non-www to www
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
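
Note: a stock CentOS Apache ignores .htaccess files unless AllowOverride is enabled for the docs root (worth checking in /etc/httpd/conf/httpd.conf; the rules above also need mod_rewrite, which CentOS loads by default):

# /etc/httpd/conf/httpd.conf: allow .htaccess overrides in the docs root
<Directory "/var/www/html">
    AllowOverride All
</Directory>

Restart Apache (sudo systemctl restart httpd) after changing httpd.conf.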

Now I can view my website 🙂

Time to create a default page.

echo "Hello World" > /var/www/html/index.html

Installing PHP

yum install php
systemctl restart httpd
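
To quickly confirm PHP is wired into Apache, drop in a throwaway test page (remove it afterwards, as it leaks server details):

# Create a test page, then browse to http://www.yourdomainname.com/info.php
echo "<?php phpinfo(); ?>" > /var/www/html/info.php

# Remove it once confirmed
rm /var/www/html/info.php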

Install MariaDB

yum install mariadb-server mariadb
systemctl start mariadb.service
systemctl enable mariadb.service
firewall-cmd --add-service=mysql

Configure MariaDB

/usr/bin/mysql_secure_installation
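
You can confirm MariaDB is up by logging in with the root password you just set:

mysql -u root -p -e "SHOW DATABASES;"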

Install an FTP Server

yum install vsftpd

Todo: Add local user…

Install nodejs

yum install -y gcc-c++ make
curl -sL https://rpm.nodesource.com/setup_6.x | sudo -E bash -
yum install -y nodejs
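
Verify the install:

node -v
npm -v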

Connect the server to c9.io

Now the server is ready to connect to c9.io.

Security

As a precaution, check your website often on https://www.shodan.io to see if it has exposed services or is known to hackers.

Firewall was blocking connections after reboot

firewall-cmd --add-service=http --zone=public --permanent

# --permanent only writes the config; reload to apply it immediately
firewall-cmd --reload

Install SSL

todo…

Lock down the server (remove the root account etc)

Todo: Remove root logins, secure mariadb etc

todo..

Thanks to the following guides that helped me:

http://www.tecmint.com/things-to-do-after-minimal-rhel-centos-7-installation/2/

https://www.digitalocean.com/community/tutorials/initial-server-setup-with-centos-7

https://www.digitalocean.com/community/tutorials/additional-recommended-steps-for-new-centos-7-servers

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

Filed Under: Domain, Hosting, Scalability, Security, ssl
