
I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance.

December 22, 2020 by Simon

I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance. Here is what I did to set up a complete Ubuntu 18.04 system (NGINX, PHP, MySQL, WordPress etc). This is not a paid review (just me documenting my steps over 2 days).

Background (CPanel hosts)

In 1999 I hosted my first domain (www.fearby.com) on a host in Seattle (for $10 USD a month), the host used CPanel and all was good.  After a decade I was using the domain more for online development and the website was now too slow (I think I was on dial-up or ADSL 1 at the time). I moved my domain to an Australian host (for $25 a month).

After 8 years the domain host was sold and performance remained mediocre. After another year the new host was sold again and performance was terrible.

I started receiving Resource Limit Is Reached warnings (basically this was a plot by the new CPanel host to say “Pay us more and this message will go away”).

Page load times were near 30 seconds.

cPanel usage exceeded warning

The straw that broke the camel’s back was their demand of $150/year for a dodgy SSL certificate.

I needed to move to a self-managed server where I was in control.

Buying a Domain Name

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Self Managed Server

I found a good web IDE ( http://www.c9.io/ ) that allowed me to connect to a cloud VM.  C9 allowed me to open many files and terminal windows and reconnect to them later. Don’t get excited, though, as AWS has purchased C9 and it’s not the same.

C9 IDE

I spun up a Digital Ocean Server at the closest data centre in Singapore. Here was my setup guide creating a Digital Ocean VM, connecting to it with C9 and configuring it. I moved my email to G Suite and moved my WordPress to Digital Ocean (other guides here and here).

I was happy since I could now send emails via CLI/code, set up free SSL certs, add second domain email to G Suite and Secure G Suite. No more usage limit errors either.

Self-managing a server requires more work but is more rewarding (flexible, faster and cheaper). Page load times were now near 20 seconds (a 10-second improvement).

Latency Issue

Over 6 months, performance on Digital Ocean (in Singapore) from Australia started to drop (mentioned here).  I tried upgrading the memory but that did not help (latency was king).

Moved the website to Australia

I moved my domain to Vultr in Australia (guide here and here). All was good for a year until traffic growth started to increase.

Blog Growth

I tried upgrading the memory on Vultr, set up PHP child workers and added Cloudflare.

GT Metrix scores were about a “B” and Google Page Speed Scores were in the lower 40’s. Page loads were about 14 seconds (5-second improvement).

Tweaking WordPress

I set up an image compression plugin in WordPress then set up a cloud image compression and CDN Plugin from the same vendor.  Page Speed info here.

GT Metrix scores were now occasionally an “A” and Page Speed scores were in the lower 20’s. Page loads were about 3-5 seconds (10-second improvement).

A mixed bag from Vultr (more optimisation and performance improvements were needed).

This screenshot shows poor www.gtmetrix.com scores, poor Google PageSpeed index scores and the upgrade from 1GB to 2GB of memory on my server.

Google Chrome Developer Console Audit Results on Vultr hosted website were not very good (I stopped checking as nothing helped).

This is a screenshot showing poor site performance (screenshot taken in Google Dev tools audit feature)

The problem was that the Vultr server (400km away in Sydney) was offline (my issue), and everything above (adding more memory, adding 2x CDNs (EWWW and Cloudflare), adding PHP child workers etc) did not seem to help.

Enter UpCloud…

Recently, a friend sent a link to a blog article about a host called “UpCloud” who promised “Faster than SSD” performance.  This can’t be right: “Faster than SSD”? I was intrigued. I wanted to check it out as I thought nothing was faster than SSD (well, maybe RAM).

I signed up for a trial and ran a disk IO test (read the review here) and I was shocked. It’s fast. Very fast.

Summary: UpCloud was twice as fast (Disk IO and CPU) as Vultr (+ an optional $4/m firewall and $3/m for 1x backup).

This is a screenshot showing Vultr.com servers getting half the read and write disk io performance compared to upcloud.com.

fyi: Labels above are KBytes per second. iozone loops through all file sizes from 4 KB to 16,348 KB and measures the reads per second. To be honest, the meaning of the numbers doesn’t interest me; I just want to compare apples to apples.

This is an image showing the iozone results breakdown chart (KBytes per second on the vertical axis, file size on the horizontal axis and transfer size on the third axis)

(image snip from http://www.iozone.org/ which explains the numbers)

I might have to copy my website to UpCloud and see how fast it is.

Where to Deploy and Pricing

UpCloud Pricing: https://www.upcloud.com/pricing/

UpCloud Pricing

UpCloud does not have a data centre in Australia yet so why choose UpCloud?

Most of my site’s visitors are based in the US and UpCloud has disk IO twice as fast as Vultr (win-win?).  I could deploy to Chicago?

This image shows most of my visitors are in the US

My site’s traffic is growing and I need to ensure the site is fast enough in the future.

This image shows that most of my site’s visitors are hitting my site on weekdays.

Creating an UpCloud VM

I used a friend’s referral code and signed up to create my first VM.

FYI: use my Referral code and get $25 free credit.  Sign up only takes 2 minutes.

https://www.upcloud.com/register/?promo=D84793

When you click the link above you will receive $25 to try out servers for 3 days. You can exit the trial by depositing $10 into UpCloud.

Trial Limitations

The trial mode restrictions are as follows:

* Cloud servers can only be accessed using SSH, RDP, HTTP or HTTPS protocols
* Cloud servers are not allowed to send outgoing e-mails or to create outbound SSH/RDP connections
* The internet connection is restricted to 100 Mbps (compared to 500 Mbps for non-trial accounts)
* After your 72 hours free trial, your services will be deleted unless you make a one-time deposit of $10

UpCloud Links

The UpCloud support page is located here: https://www.upcloud.com/support/

  • Quick start: Introduction to UpCloud
  • How to deploy a Cloud Server
  • Deploy a cloud server with UpCloud’s API

More UpCloud links to read:

  • Two-Factor Authentication on UpCloud
  • Floating IPs on UpCloud
  • How to manage your firewall
  • Finalizing deployment

Signing up to UpCloud

Navigate to https://upcloud.com/signup and add your username, password and email address and click signup.

New UpCloud Signup Page

Add your address and payment details and click proceed (you don’t need to pay anything; $1 may be charged and instantly refunded to verify the card).

Add address and payment details

That’s it, check your email.

Signup Done

Look for the UpCloud email and click https://my.upcloud.com/

Check Email

Now login

Login to UpCloud

Now I can see a dashboard 🙂

UpCloud Dashboard

I was happy to see 24/7 support is available.

This image shows the www.upcloud.com live chat

I opted in for the new dashboard

UpCloud’s new dashboard

Deploy My First UpCloud Server

This is how I deployed a server.

Note: If you are going to deploy a server consider using my referral code and get $25 credit for free.

Under the “Deploy a server” widget I named the server and chose a location (I think I was supposed to use an FQDN, e.g., “fearby.com”, but the deployment worked anyway). I clicked continue, then more options were made available:

  1. Enter a short server description.
  2. Choose a location (Frankfurt, Helsinki, Amsterdam, Singapore, London and Chicago)
  3. Choose the number of CPUs and the amount of memory
  4. Specify disk number/names and type (MaxIOPS or HDD).
  5. Choose an Operating System
  6. Select a Timezone
  7. Define SSH Keys for access
  8. Allowed login methods
  9. Choose hardware adapter types
  10. Where to send the login password

Deploy Server

FYI: How to generate a new SSH Key (on OSX or Ubuntu)

ssh-keygen -t rsa

Output

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /temp/example_rsa
Enter passphrase (empty for no passphrase): *********************************
Enter same passphrase again:*********************************
Your identification has been saved in /temp/example_rsa.
Your public key has been saved in /temp/example_rsa.pub.
The key fingerprint is:
SHA256:########################### [email protected]
Outputted public and private key

Did the key export? (yes)

> /temp# ls /temp/ -al
> drwxr-xr-x  2 root root 4096 Jun 9 15:33 .
> drwxr-xr-x 27 root root 4096 Jun 8 14:25 ..
> -rw-------  1 user user 1766 Jun 9 15:33 example_rsa
> -rw-r--r--  1 user user  396 Jun 9 15:33 example_rsa.pub

“example_rsa” is the private key and “example_rsa.pub” is the public key.

  • The public key needs to be added to the server to allow access.
  • The private key needs to be added to any local ssh program used for remote access (see the sketch below).
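
A minimal sketch of wiring that up, using the example key path from above (the IP address and host alias here are placeholders):

# Copy the public key to the server (you will be asked for the password once)
ssh-copy-id -i /temp/example_rsa.pub root@203.0.113.10

# Tell the local SSH client which private key to use
cat >> ~/.ssh/config <<'EOF'
Host upcloud-vm
    HostName 203.0.113.10
    User root
    IdentityFile /temp/example_rsa
EOF

# Then connect with the alias
ssh upcloud-vm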

Initialisation script (after deployment)

I was pleased to see an initialization script section that calls actions after the server is deployed. I configured the initialisation script to pull down a few GB of backups from my Vultr website in Sydney (files now removed).

This was my Initialisation script:

#!/bin/bash
echo "Downloading the Vultr websites backups"
mkdir /backup
cd /backup
wget -O www-mysql-backup.sql https://fearby.com/.../www-mysql-backup.sql
wget -O www-blog-backup.zip https://fearby.com/.../www-blog-backup.zip

Confirm and Deploy

I clicked “Confirm and deploy” but I got an alert saying that trial mode can only deploy servers with up to 1024MB of memory.

This image shows I can’t deploy servers with 2GB of memory in trial mode

Exiting UpCloud Trial Mode

I opened the dashboard and clicked My Account then Billing. I could see the $25 referral credit but I guess I can’t use that in trial mode.

I exited trial mode by depositing $10 (USD).

View Billing Details

Make a manual 1-time deposit of $10 to exit trial mode.

Deposit $10 to exit the trial

FYI: Server prices are listed below (or view prices here).

UpCloud Pricing

Now I can go back and deploy the server with the same settings above (1x CPU, 2GB Memory, Ubuntu 18.04, MaxIOPS Storage etc)

Deployment takes a few minutes and, depending on the login method you specified, a password may be emailed to you.

UpCloud Server Deployed

The server is now deployed; now I can connect to it with my SSH program (vSSH).  Simply add the server’s IP, username, password and the SSH private key (generated above) to your ssh program of choice.

fyi: The public key contents start with “ssh-rsa”.

This image shows me connecting to my server via SSH

I noticed that the initialisation script downloaded my 2+GB of files already. Nice.

UpCloud Billing Breakdown

I can now see on the UpCloud billing page in my dashboard that credit is deducted daily (68c); at this rate, I have about 49 days of credit left.

Billing Breakdown

I can manually deposit funds or set up automatic payments at any time 🙂

UpCloud Backup Options

You do not need to set up backups, but in case you want to roll back (if things stuff up) it is a good idea. Backups are an additional charge.

I have set up automatic daily backups with an auto deletion after 2 days

To view backup schedules, click on your deployed server then click Backup.

List of UpCloud Backups

Note: Backups are charged at $0.056 for every GB stored – so $5.60 for every 100GB per month (half that for 50GB etc)

You can take manual backups at any time (and only be charged for the hour)

UpCloud Firewall Options

I set up a firewall at UpCloud to only allow the minimum number of ports (UpCloud DNS, HTTP, HTTPS and My IP to port 22).  The firewall feature is charged at $0.0056 an hour ($4.03 a month)

I love the ability to set firewall rules on incoming, destination and outgoing ports.

To view your firewall, click on your deployed server then click Firewall.

UpCloud firewall

Update: I modified my firewall to allow inbound ICMP (IPv4/IPv6) and UDP (IPv4/IPv6) packets.

(Note: Old firewall screenshot)

Firewall Rules Allow port 80, 443 and DNS

Because my internet provider has a dynamic IP, I set up a VPN with a static IP and whitelisted it for backdoor access.

Local Ubuntu ufw Firewall

I duplicated the rules in my local ufw (second-level) firewall and blocked outgoing mail (the equivalent ufw commands are sketched after the listing below).

sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 80                         ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] 25                         DENY OUT    Anywhere                   (out)
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 22                         ALLOW IN    REMOVED (MY WHITELISTED IP)
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 25 (v6)                    DENY OUT    Anywhere (v6)              (out)
[10] 53                         ALLOW IN    2a04:3540:53::1
[11] 53                         ALLOW IN    2a04:3544:53::1
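
For reference, here is a rough sketch of the ufw commands that would produce a rule set like the one above (the DNS resolver IPs are the ones listed; replace the whitelisted IP with your own):

sudo ufw allow 80                                        # web traffic (IPv4 and IPv6)
sudo ufw allow 443
sudo ufw deny out 25                                     # block outgoing mail
sudo ufw allow from 93.237.127.9 to any port 53          # IPv4 DNS resolvers
sudo ufw allow from 93.237.40.9 to any port 53
sudo ufw allow from 2a04:3540:53::1 to any port 53       # IPv6 DNS resolvers
sudo ufw allow from 2a04:3544:53::1 to any port 53
sudo ufw allow from YOUR.WHITELISTED.IP to any port 22   # SSH only from my whitelisted IP
sudo ufw enable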

UpCloud Download Speeds

I pulled down a 1.8GB Ubuntu 18.04 Desktop ISO 3 times from gigenet.com and each time the file downloaded in about 32 seconds (57MB/sec). Nice.

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:04-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso'

ubuntu-18.04-desktop-amd64.iso 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:02:37 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:46-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.1'

ubuntu-18.04-desktop-amd64.iso.1 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:19 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.1' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:03:23-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.2'

ubuntu-18.04-desktop-amd64.iso.2 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:56 (56.8 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.2' saved [1921843200/1921843200]

Install Common Ubuntu Packages

I installed common Ubuntu packages.

apt-get install zip htop ifstat iftop bmon tcptrack ethstatus speedometer iozone3 bonnie++ sysbench siege tree unzip jq ncdu pydf ntp rcconf ufw iperf nmap

Timezone

I checked the server’s time (I thought this was auto-set before I deployed).

$hwclock --show
2018-06-06 23:52:53.639378+0000

I reset the time to Australia/Sydney.

dpkg-reconfigure tzdata
Current default time zone: 'Australia/Sydney'
Local time is now: Thu Jun 7 06:53:20 AEST 2018.
Universal Time is now: Wed Jun 6 20:53:20 UTC 2018.

Now the timezone is set 🙂

Shell History

I increased the shell history.

HISTSIZE=10000
HISTCONTROL=ignoredups
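
A small sketch of how I could make those settings permanent (assuming the default bash shell and ~/.bashrc):

echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
source ~/.bashrc    # reload the shell configuration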

SSH Login

I created a ~/.ssh/authorized_keys file and added my SSH public key to allow password-less logins.

mkdir ~/.ssh
sudo nano ~/.ssh/authorized_keys

I added my public ssh key, then exited the ssh session and logged back in. I can now log in without a password.
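
fyi: sshd is strict about permissions on these files, so you may also need something like this before the password-less login works:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys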

Install NGINX

apt-get install nginx

nginx/1.14.0 is now installed.

A quick GT Metrix test.

This image shows awesome static nginx performance ratings of 99%

Install MySQL

Run these commands to install and secure MySQL.

apt install mysql-server
mysql_secure_installation

Securing the MySQL server deployment.
> Would you like to setup VALIDATE PASSWORD plugin?: n
> New password: **********************************************
> Re-enter new password: **********************************************
> Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
> Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
> Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
> Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
> Success.

I disabled the validate password plugin because I hate it.

MySQL Ver 14.14 Distrib 5.7.22 is now installed.

Set MySQL root login password type

Set MySQL root user to authenticate via “mysql_native_password”. Run the “mysql” command.

mysql
SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user | authentication_string | plugin | host |
+------------------+-------------------------------------------+-----------------------+-----------+
| root | | auth_socket | localhost |
| mysql.session | hiddden | mysql_native_password | localhost |
| mysql.sys | hiddden | mysql_native_password | localhost |
| debian-sys-maint | hiddden | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now let’s set the root password authentication method to “mysql_native_password”

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '*****************************************';
Query OK, 0 rows affected (0.00 sec)

Check authentication method.

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user | authentication_string | plugin | host |
+------------------+-------------------------------------------+-----------------------+-----------+
| root | ######################################### | mysql_native_password | localhost |
| mysql.session | hiddden | mysql_native_password | localhost |
| mysql.sys | hiddden | mysql_native_password | localhost |
| debian-sys-maint | hiddden | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now we need to flush permissions.

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Done.

Install PHP

Install PHP 7.2

apt-get install software-properties-common
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install -y php7.2
php -v

PHP 7.2.5, Zend Engine v3.2.0 with Zend OPcache v7.2.5-1 is now installed. Do update PHP frequently.

I made the following changes in /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> max_file_uploads = 20M
> post_max_size = 20M
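
A quick way to double-check the saved values (a simple grep of the FPM php.ini):

grep -E 'cgi.fix_pathinfo|max_input_vars|memory_limit|max_file_uploads|post_max_size' /etc/php/7.2/fpm/php.ini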

Install PHP Modules

sudo apt-get install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml

Install PHP FPM

apt-get install php7.2-fpm

Configure PHP FPM config.

Edit /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> max_file_uploads = 20M
> post_max_size = 20M

Reload the PHP-FPM service and check its status.

sudo service php7.2-fpm restart
sudo service php7.2-fpm status


Configuring NGINX

If you are not comfortable editing NGINX config files read here, here and here.

I made a new “www root” folder, set permissions and created a default html file.

mkdir /www-root
chown -R www-data:www-data /www-root
echo "Hello World" >> /www-root/index.html

I edited the “root” key in the “/etc/nginx/sites-enabled/default” file and set the root to the new location (e.g., “/www-root”)

I added these performance tweaks to /etc/nginx/nginx.conf

> worker_cpu_affinity auto;
> worker_rlimit_nofile 100000;

I added the following lines to the “http {” section in /etc/nginx/nginx.conf

client_max_body_size 10M;

gzip on;
gzip_disable "msie6";
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_types
application/atom+xml
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
font/opentype
image/bmp
image/x-icon
text/cache-manifest
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
#text/html is always compressed by gzip module

gzip_proxied any;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
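
After reloading NGINX, a quick sketch of checking that gzip is actually being applied (look for a Content-Encoding: gzip header in the response):

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" https://fearby.com | grep -i content-encoding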

Check NGINX Status

service nginx status
* nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 21:16:28 AEST; 30min ago
Docs: man:nginx(8)
Main PID: # (nginx)
Tasks: 2 (limit: 2322)
CGroup: /system.slice/nginx.service
|- # nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
`- # nginx: worker process

Install Open SSL that supports TLS 1.3

This is a work in progress. The steps work just fine for me on Ubuntu 16.04 but not (yet) on Ubuntu 18.04.

Installing Adminer MySQL GUI

I will use the PHP-based Adminer MySQL GUI to export and import my blog from one server to another. All I needed to do was install it on both servers (a simple one-file download).

cd /utils
wget -O adminer.php https://github.com/vrana/adminer/releases/download/v4.6.2/adminer-4.6.2-mysql-en.php

Use Adminer to Export My Blog (on Vultr)

On the original server open Adminer (over HTTP) and:

  1. Login with the MySQL root account
  2. Open your database
  3. Choose “Save” as the output
  4. Click on Export

This image shows the export of the wordpress adminer page

Save the “.sql” file.

I used Adminer on the UpCloud server to Import My Blog

FYI: Depending on the size of your database backup you may need to temporarily increase your upload and post size limits in PHP and NGINX before you can import your database.

Edit /etc/php/7.2/fpm/php.ini
> max_file_uploads = 100M
> post_max_size =100M

And Edit: /etc/nginx/nginx.conf
> client_max_body_size 100M;

Don’t forget to reload NGINX config and restart NGINX and PHP. Take note of the maximum allowed file size in the screenshot below. I temporarily increased my upload limits to 100MB in order to restore my 87MB blog.
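
The reload/restart commands I use for that (a short sketch):

sudo nginx -t                      # test the NGINX configuration
sudo nginx -s reload               # reload NGINX
sudo service php7.2-fpm restart    # restart PHP-FPM so the new limits apply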

Now I could open Adminer on my UpCloud server.

  1. Create a new database
  2. Click on the database and click Import
  3. Choose the SQL file
  4. Click Execute to import it

Import MySQL backup with Adminer

Don’t forget to create a user and assign permissions (as required – check your wp-config.php file).
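
A rough sketch of creating that database user in MySQL (the database name, user and password below are placeholders; match whatever your wp-config.php expects):

mysql -u root -p <<'SQL'
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'use-a-strong-password';
GRANT ALL PRIVILEGES ON wordpress_db.* TO 'wp_user'@'localhost';
FLUSH PRIVILEGES;
SQL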

Import MySQL Database

Tip: Don’t forget to lower the maximum upload file size and max post size after you import your database.

Cloudflare DNS

I use Cloudflare to manage DNS, so I need to tell it about my new server.

You can get your server’s IP details from the UpCloud dashboard.

Find IP

At Cloudflare update your DNS details to point to the server’s new IPv4 (“A Record”) and IPv6 (“AAAA Record”).

Cloudflare DNS
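
To confirm the new records were being served, a quick check with dig (a sketch; substitute your own domain):

dig +short fearby.com A       # should return the new IPv4 address
dig +short fearby.com AAAA    # should return the new IPv6 address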

Domain Error

I waited an hour and my website was suddenly unavailable. At first, I thought this was Cloudflare forcing the redirection of my domain to HTTPS (which was not yet set up).

DNS Not Replicated Yet

I chatted with UpCloud support via the chat on their webpage and they kindly helped me diagnose all the common issues (DNS values, DNS replication, Cloudflare settings), and the error was pinpointed to my NGINX installation. All NGINX config settings were OK from what we could see. I uninstalled NGINX and reinstalled it (and that fixed it). Thanks, UpCloud Support 🙂

Reinstalled NGINX

sudo apt-get purge nginx nginx-common

I reinstalled NGINX and reconfigured /etc/nginx/nginx.conf (I downloaded my SSL cert from my old server just in case).

Here is my /etc/nginx/nginx.conf file.

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log /var/log/nginx/www-nginxcriterror.log crit;

events {
        worker_connections 768;
        multi_accept on;
}

http {

        client_max_body_size 10M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        server_tokens off;

        server_names_hash_bucket_size 64;
        server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/www-access.log;
        error_log /var/log/nginx/www-error.log;

        gzip on;

        gzip_vary on;
        gzip_disable "msie6";
        gzip_min_length 256;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

Here is my /etc/nginx/sites-available/default file (fyi, I have not fully re-setup TLS 1.3 yet so I commented out the settings)

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
server {
        root /www-root;

        # Listen Ports
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name www.fearby.com fearby.com localhost;

        # HTTPS Cert
        ssl_certificate /etc/nginx/ssl-cert-path/fearby.crt;
        ssl_certificate_key /etc/nginx/ssl-cert-path/fearby.key;
        ssl_dhparam /etc/nginx/ssl-cert-path/dhparams4096.pem;

        # HTTPS Ciphers
        
        # TLS 1.2
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

        # TLS 1.3			#todo
        # ssl_ciphers 
        # ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DES-CBC3-SHA;
        # ssl_ecdh_curve secp384r1;

        # Force HTTPS
        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        # HTTPS Settings
        server_tokens off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 30m;
        ssl_session_tickets off;
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
	#ssl_stapling on; 						# Requires nginx >= 1.3.7

        # Cloudflare DNS
        resolver 1.1.1.1 1.0.0.1 valid=60s;
        resolver_timeout 1m;

        # PHP Memory 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

	# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ .php$ {
            try_files $uri =404;
            # include snippets/fastcgi-php.conf;

            fastcgi_split_path_info ^(.+.php)(/.+)$;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php7.2-fpm.sock;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # fastcgi_pass 127.0.0.1:9000;
	    }

        location / {
            # try_files $uri $uri/ =404;
            try_files $uri $uri/ /index.php?q=$uri&$args;
            index index.php index.html index.htm;
            proxy_set_header Proxy "";
        }

        # Deny Rules
        location ~ /.ht {
                deny all;
        }
        location ~ ^/.user.ini {
            deny all;
        }
        location ~ (.ini) {
            return 403;
        }

        # Headers
        location ~* .(?:ico|css|js|gif|jpe?g|png|js)$ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }

}

SSL Labs SSL Certificate Check

All good thanks to the config above.

SSL Labs

Install WP-CLI

I don’t like setting up FTP to auto-update WordPress plugins. I use the WP-CLI tool to manage WordPress installations by the command line. Read my blog here on using WP-CLI.

Download WP-CLI

mkdir /utils
cd /utils
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Move WP-CLI to the bin folder as “wp”

chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

Test wp

wp --info
OS: Linux 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:15:17 UTC 2018 x86_64
Shell: /bin/bash
PHP binary: /usr/bin/php7.2
PHP version: 7.2.5-1+ubuntu18.04.1+deb.sury.org+1
php.ini used: /etc/php/7.2/cli/php.ini
WP-CLI root dir: phar://wp-cli.phar
WP-CLI vendor dir: phar://wp-cli.phar/vendor
WP_CLI phar path: /www-root
WP-CLI packages dir:
WP-CLI global config:
WP-CLI project config:
WP-CLI version: 1.5.1

Update WordPress Plugins

Now I can run “wp plugin update” to update all WordPress plugins

wp plugin update
Enabling Maintenance mode...
Downloading update from https://downloads.wordpress.org/plugin/wordfence.7.1.7.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wp-meta-seo.3.7.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wordpress-seo.7.6.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Disabling Maintenance mode...
Success: Updated 3 of 3 plugins.
+---------------+-------------+-------------+---------+
| name | old_version | new_version | status |
+---------------+-------------+-------------+---------+
| wordfence | 7.1.6 | 7.1.7 | Updated |
| wp-meta-seo | 3.7.0 | 3.7.1 | Updated |
| wordpress-seo | 7.5.3 | 7.6.1 | Updated |
+---------------+-------------+-------------+---------+

Update WordPress Core

WordPress core file can be updated with “wp core update“

wp core update
Success: WordPress is up to date.

Troubleshooting: Use the flag “--allow-root” if wp needs higher access (though this is unsafe).

Install PHP Child Workers

I edited the following file to set up PHP child workers: /etc/php/7.2/fpm/pool.d/www.conf

Changes

> pm = dynamic
> pm.max_children = 40
> pm.start_servers = 15
> pm.min_spare_servers = 5
> pm.max_spare_servers = 15
> pm.process_idle_timeout = 30s;
> pm.max_requests = 500;
> php_admin_value[error_log] = /var/log/www-fpm-php.www.log
> php_admin_value[memory_limit] = 512M
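
Before restarting, the pool configuration can be syntax-checked with PHP-FPM's test flag (a quick sketch):

sudo php-fpm7.2 -t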

Restart PHP

sudo service php7.2-fpm restart

Test NGINX config, reload NGINX config and restart NGINX

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Output (14 workers are ready)

Check PHP Child Worker Status

sudo service php7.2-fpm status
* php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 19:32:47 AEST; 20s ago
Docs: man:php-fpm7.2(8)
Main PID: # (php-fpm7.2)
Status: "Processes active: 0, idle: 15, Requests: 2, slow: 0, Traffic: 0.1req/sec"
Tasks: 16 (limit: 2322)
CGroup: /system.slice/php7.2-fpm.service
|- # php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
- # php-fpm: pool www

Memory Tweak (set at your own risk)

sudo nano /etc/sysctl.conf

vm.swappiness = 1

Setting swappiness to 1 all but disables the swap file and tells the operating system to avoid swapping and keep data in RAM; a value of 10 is safer. Only set this if you have enough memory available (and free).

Possible swappiness settings:

> vm.swappiness = 0 Swap is disabled. In earlier versions, this meant that the kernel would swap only to avoid an out-of-memory condition when free memory fell below the vm.min_free_kbytes limit; in later versions this is achieved by setting it to 1. [2]
> vm.swappiness = 1 Kernel version 3.5 and over, as well as Red Hat kernel version 2.6.32-303 and over: minimum amount of swapping without disabling it entirely.
> vm.swappiness = 10 This value is sometimes recommended to improve performance when sufficient memory exists in a system.[3]
> vm.swappiness = 60 The default value.
> vm.swappiness = 100 The kernel will swap aggressively.
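
To apply the new swappiness value without waiting for a reboot, and confirm it took effect (a quick sketch):

sudo sysctl -p                 # reload /etc/sysctl.conf
cat /proc/sys/vm/swappiness    # should now print 1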

The “htop” tool is a handy memory-monitoring alternative to “top”.

Also, you can use the good old “watch” command to show near-live memory usage (it auto-refreshes every 2 seconds).

watch -n 2 free -m

Script to auto-clear the memory/cache

As a habit, I set up a cronjob that checks whether free memory has fallen below 100MB and, if so, automatically clears the cache (freeing memory).

Script Contents: clearcache.sh

#!/bin/bash

# Script help inspired by https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
ram_use=$(free -m)
IFS=$'\n' read -rd '' -a ram_use_arr <<< "$ram_use"
ram_use="${ram_use_arr[1]}"
ram_use=$(echo "$ram_use" | tr -s " ")
IFS=' ' read -ra ram_use_arr <<< "$ram_use"
ram_total="${ram_use_arr[1]}"
ram_used="${ram_use_arr[2]}"
ram_free="${ram_use_arr[3]}"
d=`date '+%Y-%m-%d %H:%M:%S'`
if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then
    echo "Sorry ram_free is not an integer"
else
    if [ "$ram_free" -lt "100" ]; then
        echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..."
        sync; echo 1 > /proc/sys/vm/drop_caches
        sync; echo 2 > /proc/sys/vm/drop_caches
        #sync; echo 3 > /proc/sys/vm/drop_caches #Not advised in production
        # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
        exit 1
    else
        if [ "$ram_free" -lt "256" ]; then
            echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
            exit 1
        else
            if [ "$ram_free" -lt "512" ]; then
                echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
                exit 1
            else
                # Plenty of free RAM
                echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
                exit 1
            fi
        fi
    fi
fi

I set the cronjob to run every 15 minutes by adding this to my crontab.

SHELL=/bin/bash
*/15  *  *  *  *  root /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log
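
For completeness, a rough sketch of wiring this up (the cron line above includes a user field, so it belongs in /etc/crontab rather than a per-user crontab):

sudo mkdir -p /scripts
sudo chmod +x /scripts/clearcache.sh
echo 'SHELL=/bin/bash' | sudo tee -a /etc/crontab
echo '*/15 * * * * root /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log' | sudo tee -a /etc/crontab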

Sample log output

2018-06-10 01:13:22 RAM OK (Total: 1993 MB, Used: 981 MB, Free: 387 MB)
2018-06-10 01:15:01 RAM OK (Total: 1993 MB, Used: 974 MB, Free: 394 MB)
2018-06-10 01:20:01 RAM OK (Total: 1993 MB, Used: 955 MB, Free: 412 MB)
2018-06-10 01:25:01 RAM OK (Total: 1993 MB, Used: 1002 MB, Free: 363 MB)
2018-06-10 01:30:01 RAM OK (Total: 1993 MB, Used: 970 MB, Free: 394 MB)
2018-06-10 01:35:01 RAM OK (Total: 1993 MB, Used: 963 MB, Free: 400 MB)
2018-06-10 01:40:01 RAM OK (Total: 1993 MB, Used: 976 MB, Free: 387 MB)
2018-06-10 01:45:01 RAM OK (Total: 1993 MB, Used: 985 MB, Free: 377 MB)
2018-06-10 01:50:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 379 MB)
2018-06-10 01:55:01 RAM OK (Total: 1993 MB, Used: 979 MB, Free: 382 MB)
2018-06-10 02:00:01 RAM OK (Total: 1993 MB, Used: 980 MB, Free: 380 MB)
2018-06-10 02:05:01 RAM OK (Total: 1993 MB, Used: 971 MB, Free: 389 MB)
2018-06-10 02:10:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 376 MB)
2018-06-10 02:15:01 RAM OK (Total: 1993 MB, Used: 967 MB, Free: 392 MB)

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day Ubuntu 18.04 is handling memory just fine; no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: Here is a handy guide on viewing swap file usage here. I’m not using swap files so it is only an aside.

After the system rebooted I checked if the swappiness setting was active.

sudo cat /proc/sys/vm/swappiness
1

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system

sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

Take note of your disk name (e.g vda1)

I used tune2fs to enable writing data to the disk before writing to the journal. tune2fs is a great tool for setting file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may lose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.

tune2fs -o journal_data_writeback /dev/vda1

I edited my fstab to append the “writeback,noatime,nodiratime” flags to my volume so they apply after a reboot.

Edit FS Tab:

sudo nano /etc/fstab

I added “writeback,noatime,nodiratime” flags to my disk options.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options> <dump>  <pass>
# / was on /dev/vda1 during installation
#                <device>                 <dir>           <fs>    <options>                                             <dump>  <fsck>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /               ext4    errors=remount-ro,data=writeback,noatime,nodiratime   0       1
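
After the reboot, a quick sketch of confirming the new mount options are active (vda1 is the example disk name from above):

mount | grep vda1    # the options should include data=writeback,noatime,nodiratime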

Updating Ubuntu Packages

Show updatable packages.

apt-get -s dist-upgrade | grep "^Inst"

Update Packages.

sudo apt-get update && sudo apt-get upgrade

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades

sudo apt-get install unattended-upgrades

Enable Unattended Upgrades.

sudo dpkg-reconfigure --priority=low unattended-upgrades

Now I configure what packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated, you may want to manually update these (and monitor updates).

I prefer not to auto-update critical system apps (I will do this myself).

Unattended-Upgrade::Package-Blacklist {
"nginx";
"nginx-common";
"nginx-core";
"php7.2";
"php7.2-fpm";
"mysql-server";
"mysql-server-5.7";
"mysql-server-core-5.7";
"libssl1.0.0";
"libssl1.1";
};

FYI: You can find installed packages by running this command:

apt list --installed

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end of each line (the number is how many days to wait before updating).

> APT::Periodic::Update-Package-Lists “1”;
> APT::Periodic::Download-Upgradeable-Packages “1”;
> APT::Periodic::AutocleanInterval “7”;
> APT::Periodic::Unattended-Upgrade “1”;

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.

unattended-upgrade -d

Almost done.

I Rebooted

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and getting sub-2-second scores consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; this was always bad before.

Update: I found out the longer you set cache delays in Cloudflare the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google is pushing for faster websites.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected to get such a high score. I know Google likes (and is pushing for) sub-1-second load times.

My site is loading so well it is time I restored some old features that were too slow on other servers

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat)

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows creating democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the [display-posts] shortcode.
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use the EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use the Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress emojis, removes the use of Google Fonts, allows you to define pre-configured domains, async JavaScript files etc.
  • I use BJ Lazy Load to prevent all images in a post from loading on load (and only as the user scrolls down the page).
  • The GTmetrix for WordPress and Cloudflare plugins are for information only.
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago

Looks good to me. Read IO is a little lower than UpCloud’s Singapore data centre but it is still faster than Vultr. I can’t wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Answers to Questions to support

Q1) What’s the difference between backups and snapshots (a Twitter user said Snapshots were a thing)

A1) Backups and snapshots are the same things with our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06 / GB of the disk being captured. But capture the whole disk, not just what was used. So for a 50GB drive, we charge $0.06 * 50 = $3/month. Even if 1GB were only used.

  • Support confirmed that each backup is charged (so 5 manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backup charges.
  • I guess a 25GB server will be $1.50 a month

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.

Q4) What happens if my balance hits $0?

A4) You will get notification of low account balance 2 weeks in advance based on your current daily spend. When your balance reaches zero, your servers will be shut down. But they will still be charged for. You can automatically top-up if you want to assign a payment type from your Control Panel. You deposit into your balance when you want. We use a prepaid model of payment, so you need to top up before using, not billing you after usage. We give you lots of chances to top-up.

Support Tips

  • One thing to note, when deleting servers (CPU, RAM) instances, you get the option to delete the storages separately via a pop-up window. Choose to delete permanently to delete the disk, to save credit. Any disk storage lying around even unattached to servers will be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

Check out my new guide on Nixstats for awesome monitoring

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user-defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

Disable dashicons.min.css (for unauthenticated WordPress users).

Find functions.php in the www root

sudo find . -print |grep  functions.php

Edit functions.php

sudo nano ./wp-includes/functions.php

Add the following

// Remove dashicons in frontend for unauthenticated users
add_action( 'wp_enqueue_scripts', 'bs_dequeue_dashicons' );
function bs_dequeue_dashicons() {
    if ( ! is_user_logged_in() ) {
        wp_deregister_style( 'dashicons' );
    }
}

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my server’s listen directives

server {
        root /www;

        ...
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;
        ...

I tested a http2 push page by defining this in /etc/nginx/sites-available/default 

location = /http2/push_demo.html {
        http2_push /http2/pushed.css;
        http2_push /http2/pushedimage1.jpg;
        http2_push /http2/pushedimage2.jpg;
        http2_push /http2/pushedimage3.jpg;
}
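
To verify that resources were actually being pushed, I could use the nghttp client (a sketch, assuming the nghttp2-client package; pushed resources are marked with an asterisk in the statistics output):

sudo apt-get install nghttp2-client
nghttp -ans https://fearby.com/http2/push_demo.html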

Once I had tested that push (demo here) was working, I defined two files from my server to push

location / {
        ...
        http2_push /wp-includes/js/jquery/jquery.js;
        http2_push /wp-content/themes/news-pro/images/favicon.ico;
        ...
}

I used the Autoptimize WordPress plugin to remove Google Fonts usage (this removed a number of files that were loaded with each page).

I used the WP-Optimize WordPress plugin to remove comments and disable pingbacks and trackbacks.

WordPress wp-config.php tweaks

# Memory
define('WP_MEMORY_LIMIT','1024M');
define('WP_MAX_MEMORY_LIMIT','1024M');
set_time_limit (60);

# Security
define( 'FORCE_SSL_ADMIN', true);

# Disable Updates
define( 'WP_AUTO_UPDATE_CORE', false );
define( 'AUTOMATIC_UPDATER_DISABLED', true );

# ewww.io
define( 'WP_AUTO_UPDATE_CORE', false );

Add 2FA Authentication to server logins.

I recently checked out YubiCo YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress too.

Tweaks Todo

  • Compress placeholder BJ Lazy Load Image (plugin is broken)
  • Solve 2x Google Analytics tracker redirects (done, switched to Matomo)

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that automatically handles image optimisations and pre Cloudflare/first hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

Results

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.


Revision History

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed a few typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

n' read -rd '' -a ram_use_arr <<< "$ram_use" ram_use="${ram_use_arr[1]}" ram_use=$(echo "$ram_use" | tr -s " ") IFS=' ' read -ra ram_use_arr <<< "$ram_use" ram_total="${ram_use_arr[1]}" ram_used="${ram_use_arr[2]}" ram_free="${ram_use_arr[3]}" d=`date '+%Y-%m-%d %H:%M:%S'` if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then echo "Sorry ram_free is not an integer" else if [ "$ram_free" -lt "100" ]; then echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..." sync; echo 1 > /proc/sys/vm/drop_caches sync; echo 2 > /proc/sys/vm/drop_caches #sync; echo 3 > /proc/sys/vm/drop_caches #Not advised in production # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/ exit 1 else if [ "$ram_free" -lt "256" ]; then echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 else if [ "$ram_free" -lt "512" ]; then echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 else echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 fi fi fi fi

I set the cron job to run every 15 minutes by adding this entry to my crontab.
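An entry along these lines does that (assuming the script is saved as /scripts/clearcache.sh; the log path matches the one I check below):

*/15 * * * * /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log 2>&1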

 

Sample log output

 

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: There is a handy guide on viewing swap file usage here. I'm not using swap files, so this is only an aside.

After the system rebooted I checked if the swappiness setting was active.
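A quick check (either command works):

cat /proc/sys/vm/swappiness
sysctl vm.swappiness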

 

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system
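For example, lsblk can list block devices and their file systems (df -hT works too):

lsblk -f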

 

Take note of your disk name (e.g. vda1).

I used tune2fs to enable writing data to the disk before writing to the journal. tune2fs is a great tool for setting ext file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may loose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.
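It was along these lines (assuming an ext4 volume on /dev/vda1; substitute your own disk name from the step above):

tune2fs -o journal_data_writeback /dev/vda1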

 

I edited my fstab to append the “writeback,noatime,nodiratime” flags for my volume so they persist after a reboot.

Edit FS Tab:

 

I added “writeback,noatime,nodiratime” flags to my disk options.
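A sketch of the resulting fstab entry, again assuming an ext4 root file system on /dev/vda1 (your device/UUID and existing options will differ):

/dev/vda1  /  ext4  errors=remount-ro,data=writeback,noatime,nodiratime  0  1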

 

Updating Ubuntu Packages

Show updatable packages.
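On Ubuntu 18.04 this can be done with apt:

sudo apt list --upgradable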

 

Update Packages.
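For example:

sudo apt update
sudo apt upgrade -y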

 

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades
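On Ubuntu 18.04 the package is simply:

sudo apt install unattended-upgrades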

 

Enable Unattended Upgrades.
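The usual way is the interactive reconfigure step:

sudo dpkg-reconfigure --priority=low unattended-upgrades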

 

Now I configure what packages not to auto update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated; you may want to update these manually (and monitor those updates).

I prefer not to auto-update critical system apps (I will do this myself).
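As a sketch, the blacklist section in /etc/apt/apt.conf.d/50unattended-upgrades ends up looking something like this (the package names here are only examples, not my actual list):

Unattended-Upgrade::Package-Blacklist {
    "mysql-server";
    "nginx";
};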

 

FYI: You can find installed packages by running this command:
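For example:

sudo apt list --installed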

 

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end of each line (the number is how many days to wait between runs).

> APT::Periodic::Update-Package-Lists "1";
> APT::Periodic::Download-Upgradeable-Packages "1";
> APT::Periodic::AutocleanInterval "7";
> APT::Periodic::Unattended-Upgrade "1";

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.

 

Almost done.

I rebooted.

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and getting a sub-2-second score consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; previously it was always bad.

Update: I found out that the longer you set cache expiry times in Cloudflare, the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google favours fast sites.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected such a high score. I know Google likes (and is pushing) sub-1-second load times.

My site is loading so well that it is time I restored some old features that were too slow on other servers.

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat).

Benchmarks are still good

My WordPress site is not really that small, either.

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows to create democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the shortcode
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use the EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use the Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are served from my EWWW CDN. This plugin also removes WordPress emojis, removes the use of Google Fonts, lets you define preconnect domains, asyncs JavaScript files, etc.
  • I use BJ Lazy Load to prevent all images in a post from loading on load (and only as the user scrolls down the page).
  • The GTmetrix for WordPress and Cloudflare plugins are for information only (as far as I can tell).
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago
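If you want a rough sequential-write sanity check of your own VM, dd is enough (this writes a 1GB test file, so run it somewhere with space and remove the file afterwards):

dd if=/dev/zero of=testfile bs=1M count=1024 conv=fdatasync
rm testfile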

Looks good to me. Read IO is a little lower than UpCloud’s Singapore data centre, but it is still faster than Vultr. I can’t wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Answers to Questions to support

Q1) What’s the difference between backups and snapshots (a Twitter user said Snapshots were a thing)

A1) Backups and snapshots are the same things with our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06/GB of the disk being captured, but we capture the whole disk, not just what was used. So for a 50GB drive we charge $0.06 * 50 = $3/month, even if only 1GB was used.

  • Support confirmed that each backup is charged (so 5 times manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backup charges.
  • I guess a 25GB server will be $1.50 a month

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.

Q4) What happens if my balance hits $0?

A4) You will get a notification of a low account balance 2 weeks in advance, based on your current daily spend. When your balance reaches zero, your servers will be shut down, but they will still be charged for. You can top up automatically if you assign a payment type in your Control Panel, and you can deposit into your balance whenever you want. We use a prepay model, so you need to top up before usage rather than being billed afterwards. We give you lots of chances to top up.

Support Tips

  • One thing to note: when deleting server (CPU, RAM) instances, you get the option to delete the storage separately via a pop-up window. Choose to delete it permanently to remove the disk and save credit; any disk storage lying around, even unattached to a server, will be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

Check out my new guide on Nixstats for awesome monitoring

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my listening servers and tested an http2 push page by defining this in /etc/nginx/sites-available/default.

Once I tested that push (demo here) was working, I defined two files to push that were being sent from my server (a rough sketch is below).
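A minimal sketch of what that looked like (the page and asset names /push-demo.html, /css/style.css and /js/main.js are placeholders, not my exact files):

server {
    listen 443 ssl http2;
    server_name example.com;
    # ... existing SSL and site config ...

    location = /push-demo.html {
        # push two assets alongside the page
        http2_push /css/style.css;
        http2_push /js/main.js;
    }
}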

2FA Authentication at login

I recently checked out YubiCo YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress login as well.

Performance

I used the WordPress Plugin Autoptimize to remove Google font usage (this removed a number of files being loaded when my page loads).

I used the WP-Optimize WordPress plugin to remove comments and disable pingbacks and trackbacks.

Results

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that automatically handles image optimisations and pre Cloudflare/first hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

I hope this guide helps someone.

Free Credit

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v2.2 Converting to Blocks

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed a few typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

Filed Under: CDN, Cloud, Cloudflare, Cost, CPanel, Digital Ocean, DNS, Domain, ExactDN, Firewall, Hosting, HTTPS, MySQL, MySQLGUI, NGINX, Performance, PHP, php72, Scalability, TLS, Ubuntu, UpCloud, Vultr, Wordpress Tagged With: draft, GTetrix, host, IOPS, Load Time, maxIOPS, MySQL, nginx, Page Speed Insights, Performance, php, SSD, ubuntu, UpCloud, vm

I thought my website was hacked. Here is how I hardened my Linux servers security with Lynis Enterprise

October 24, 2020 by Simon

Disclaimer

I have waited a year before posting this, and I have tried my best to hide the bank’s identity, as I never got a good explanation back from them about why they were whitelisting my website.

Background

I was casually reading Twitter one evening and found references to an awesome service (https://publicwww.com/) that allows you to find string references in CSS, JS, CSP etc files on websites.

Search engine that searches the web for the source code of the sites, not the content of them: https://t.co/G7oYQZ4Cbp

— @mikko (@mikko) March 8, 2018

https://t.co/DUyxFD4QbV is one of my new favorite search tools. Finally I can search for html/css/js and see which websites are using it. Really powerful when you think of the right searches…

— Allan Thraen (@athraen) April 26, 2019

See how people are using the publicwww service on Twitter.

I searched https://publicwww.com/ for “https://fearby.com”. I was expecting to see only resources that were loading from my site.

I was shocked to see a bank in Asia was whitelisting my website and my website’s CDN (hosted via ewww.io) in its Content Security Policy.

Screenshot of publicwww.com scan of "fearby.com"

I was not hosting content for a bank, so why were they whitelisting my site?

Were they hacked? Was I hacked and delivering malware to their customers? Setting up a Content Security Policy (CSP) is not a trivial thing to do and I would suggest you check out https://report-uri.com/products/content_security_policy (by Scott Helme) for more information on setting up a good Content Security Policy (CSP).

Were we both hacked or was I serving malicious content?

Hacked Koala meme

I have written a few blog posts on creating Content Security Policies, and maybe they did copy my starter Content Security Policy and added it to their site?

I do have a lot of blog readers from their country.

Analytics map of Asia

I went to https://www.securityheaders.com and scanned their site and, yes, they had whitelisted my website and CDN. This was being sent in a header from their server to any connecting client.

I quickly double-checked the bank’s Content Security Policy (CSP) with https://cspvalidator.org/ and it too confirmed the bank was telling its customers that my website was OK to load files from.

I would not be worried if a florist’s website had whitelisted my website, but this was a bank with 250 physical branches and 2,500 employees in a country of 29 million people.

Below is the bank’s Content Security Policy.

https://cspvalidator.org/ screenshot of the bank's CSP

I thought I had been hacked, so I downloaded my Nginx log files (with MobaXTerm) and scanned them for hits to my site from their website.

Screenshot of a years nginx logs.

After I scanned the logs I could see that I had zero traffic from their website.

I sent a direct message to Scott Helme on Twitter (CSP Guru) and he replied with advice on the CSP.

Blocking Traffic

As a precaution, I edited my /etc/nginx/sites-available/default file and added this to block all traffic from their site.

if ($http_referer ~* "##########\.com") {
        return 404;
}

I tested and reloaded my Nginx config and restarted my web server

nginx -t
nginx -s reload
/etc/init.d/nginx restart

I also emailed my website CDN’s admin at https://ewww.io/ and asked them to block traffic from the bank as a precaution. They responded quickly, said this was done, and enabled extra logging in case more information was needed.

If you need a good and fast WordPress Content Delivery Network (CDN) check out https://ewww.io/. They are awesome. Read my old review of ewww.io here.

I contacted the Bank

I searched the bank’s website for a way to contact them. Their website was slow, their contact page was limited, and they have a chat feature but it required logging in with Facebook (I don’t use Facebook).

I viewed their contact-us web page and they had zero dedicated security contacts listed. The CIO was contactable via phone only.

They did not have a security.txt file on their website.

http://www.bankdomain.com/.well-known/security.txt file not found

TIP: If you run a website, please consider creating a security.txt file, information here.
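A minimal security.txt (served from /.well-known/) only needs a couple of fields; the contact address and expiry date below are placeholders:

# https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2025-12-31T23:59:59.000Z
Preferred-Languages: en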

I then viewed their contact us page and emailed everyone I could.

I asked if they could:

  • Check their logs for malicious files loaded from my site
  • Remove the references to my website and CDN from their CSP
  • Review their CI/CD logs to see why this happened

My Server Hardening (to date)

My website was already hardened but was my site compromised?

Hardening actions to date..

  • Using a VPS firewall and a Linux firewall (2x software firewalls)
  • I have used the free Lynis Scan
  • Whitelisting access to port 22 to a limited set of IPs
  • Using hardware 2FA keys on SSH and WordPress Logins
  • Using the WordFence Security Plugin
  • Locked down unwanted ports.
  • I had a strong HTTPS certificate and website configuration (test here)
  • I have set up appropriate security headers (test here). I did need to re-setup a Content Security Policy (keep reading)
  • Performed many actions (some blogged a while ago) here: https://fearby.com/article/securing-ubuntu-cloud/
  • etc

I had used the free version of Lynis before, but now it is time to use Lynis Enterprise.

A free version of Lynis can be installed from Github here: https://github.com/CISOfy/lynis/

What is Lynis Enterprise?

Lynis Enterprise is a commercial auditing, system hardening and compliance testing tool for AIX, FreeBSD, HP-UX, Linux, macOS, NetBSD, OpenBSD, Solaris etc. The Enterprise version is a paid product (with a web portal) and has more features than the free version.

Snip from here: “Lynis is a battle-tested security tool for systems running Linux, macOS, or Unix-based operating system. It performs an extensive health scan of your systems to support system hardening and compliance testing. The project is open-source software with the GPL license and available since 2007.”

Visit the Lynis Enterprise site here: https://cisofy.com/solutions/#lynis-enterprise.

I created a Lynis Enterprise Trial

I have used the free version of Lynis in the past (read here), but the Enterprise version offers a lot of extra features (read here).

Screenshot of https://cisofy.com/lynis-enterprise/why-upgrade/

View the main Lynis Enterprise site here and the pricing page here

View a tour of features here: https://cisofy.com/lynis-enterprise/

Create a Cisofy Trial Account

You can request a trial of Lynis Enterprise here: https://cisofy.com/demo/

Request a Lynis Enterprise trial screenshot

After the trial account was set up I logged in here. Upon login, I was prompted to add a system to my account (my licence key was also visible).

Lynis portal  main screen

Install Lynis (Clone GIT Repo/latest features)

I am given 3 options to install Lynis from the add system page here.

  1. Add the software repository and install the client (The suggested and easiest way to install Lynis and keep it up-to-date).
  2. Clone the repository from Github (The latest development version, containing the most recent changes)
  3. Manually install or activate an already installed Lynis.

I will clone a fresh install from GitHub as I prefer to see the latest issues and changes via GitHub notifications, especially notifications about security.

I logged into my server via SSH and ran the following command(s).

sudo apt-get install git
mkdir /thefolder
cd /thefolder
git clone https://github.com/CISOfy/lynis

Cloning into 'lynis'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 10054 (delta 0), reused 1 (delta 0), pack-reused 10047
Receiving objects: 100% (10054/10054), 4.91 MiB | 26.60 MiB/s, done.
Resolving deltas: 100% (7387/7387), done.

I logged into https://portal.cisofy.com/ and clicked ‘Add system’ to find my API key.

I noted my licence key.

I then changed to my Lynis folder

cd lynis

I then created a “custom.prf” file

touch custom.prf

I ran this command to activate my licence (I have replaced my licence with ########’s).

View the documentation here.

./lynis configure settings license-key=########-####-####-####-############:upload-server=portal.cisofy.com

Output:

Configuring setting 'license-key'
Setting changed
Configuring setting 'upload-server'
Setting changed

I performed my first scan and uploaded the report.

TIP: Make sure you have curl installed

./lynis audit system --upload

After the scan is complete, make sure you see the following.

Data upload status (portal.cisofy.com) [ OK ]

I logged into https://portal.cisofy.com/enterprise/systems/ and I could view my systems report.

You can read the basic Lynis documentation here: https://cisofy.com/documentation/lynis/

Manual Lynis Scans

I can run a manual scan at any time

cd /thefolder/lynis/
sudo ./lynis audit system --upload

To view results I can login to https://portal.cisofy.com/

Automated Lynis Scans

I have created a bash script that updates Lynis (basically running ‘sudo /usr/bin/git pull origin master’ in the lynis folder)

#!/bin/bash

sendemail -f [email protected] -t [email protected] -u "CRON: Updating Lynis (yourserver.com) START" -m "/folder/runlynis.sh" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp ***my*google*gsuite*email*app*password***

echo "Changing Directory to /folder/lynis"
cd /folder/lynis

echo "Updating Lynis"
sudo /usr/bin/git pull origin master

sendemail -f [email protected] -t [email protected] -u "CRON: Updated Lynis (yourserver.com) END" -m "/folder/runlynis.sh" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp ***my*google*gsuite*email*app*password***

This is my bash script that runs Lynis scans and emails the report

#!/bin/bash

sendemail -f [email protected] -t [email protected] -u "CRON: Run Lynis (yourserver.com) START" -m "/folder/runlynis.sh" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp ***my*google*gsuite*email*app*password***

echo "Running Lynis Scan"
cd /utils/lynis/
sudo /utils/lynis/lynis audit system --upload > /folder/lynis/lynis.txt

sendemail -f [email protected] -t [email protected] -u "CRON: Run Lynis (yourserver.com) END" -m "/folder/runlynis.sh" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp ***my*google*gsuite*email*app*password***  -a /folder/lynis/lynis.txt

I set up two cron jobs to update Lynis (from Git) and to scan with Lynis every day.

#Lynis Update 9:55PM
55 21 * * * /bin/bash /folder/runlynis.sh && curl -fsS --retry 3 https://hc-ping.com/########-####-####-####-############ > /dev/null

#Lynis Scan 2AM
0 2 * * * /bin/bash /folder/runlynis.sh && curl -fsS --retry 3 https://hc-ping.com/########-####-####-####-############ > /dev/null

Thanks to sendemail I get daily emails.

I have set up cron job monitoring and emails at the start and end of the bash scripts.

The attachment is not a pretty text file report, but at least I can see the output of the scan (without logging into the portal).

Maybe I will also attach the following file:

/var/log/lynis.log

Lynis Enterprise (portal.cisofy.com)

Best of all Lynis Enterprise comes with a great online dashboard available at
https://portal.cisofy.com/enterprise/dashboard/.

Lynis Enterprise Portal

Dashboard (portal.cisofy.com)

Clicking the ‘Dashboard‘ button in the toolbar at the top of the portal reveals a summary of your added systems, alerts, compliance, system integrity, Events and statistics.

Dashboard button

The dashboard has three levels

  • Business (less information)
  • Operational
  • Technical (more information)

Read about the differences here.

three dashboard breadcrumbs

Each dashboard has a limited number of elements, but the technical dashboard has all the elements.

Technical Dashboard

Lynis Enterprise Dashboard https://portal.cisofy.com/enterprise/dashboard/

From here you can click and open server scan results (see below)

Server Details

If you click on a server name you can see detailed information. I created 2 test servers (I am using the awesome UpCloud host).

A second menu appears when you click on a server

Lynis Menu

Test Server 01: Ubuntu 18.04 default Scan Results (66/100)

Ubuntu Server Score 66/100

Test Server 02: Debian 9.9 default Scan Results (65/100)

Server

It is interesting to see Debian is 1 point below Ubuntu.

The server page will give a basic summary and highlights like the current and previous hardening score, open ports, firewall status, installed packages, users.

When I click the server name to load the report, I can click to see ‘Warnings’ or ‘Suggestions’ to resolve.

Suggested System Hardening Actions

I had 47 system hardening recommendations on one system.

Lynis identified quick wins.

Some of the security hardening actions included the following:

  • Audit daemon is enabled with an empty ruleset. Disable the daemon or define rules
  • Incorrect permissions for file /root/.ssh
  • A reboot of the system is most likely needed
  • Found some information disclosure in SMTP banner (OS or software name)
  • Configure maximum password age in /etc/login.defs
  • Default umask in /etc/login.defs could be more strict like 027
  • Add a legal banner to /etc/issue.net, to warn unauthorized users
  • Check available certificates for expiration
  • To decrease the impact of a full /home file system, place /home on a separate partition
  • Install a file integrity tool to monitor changes to critical and sensitive files
  • Check iptables rules to see which rules are currently not used
  • Harden compilers like restricting access to root user only
  • Disable the ‘VRFY’ command
  • Add the IP name and FQDN to /etc/hosts for proper name resolving
  • Purge old/removed packages (59 found) with the aptitude purge or dpkg --purge command. This will clean up old configuration files, cron jobs and startup scripts.
  • Remove any unneeded kernel packages
  • Determine if automation tools are present for system management
  • etc

Hardening Suggestion (Ignore or Solve)

If you click ‘Solve‘ Cisofy will provide a link to detailed information to help you solve issues.

Suggested fix: ACCT-9630 Audit daemon is enabled with an empty ruleset. Disable the daemon or define rules

I will not list every suggested problem and fix but here are some fixes below.

ACCT-9630 Audit daemon is enabled with an empty ruleset. Disable the daemon or define rules (fixed)

TIP: If you don’t have auditd installed run this command below to install it

sudo apt-get install auditd
/etc/init.d/auditd start
/etc/init.d/auditd status

I added the following to ‘/etc/audit/rules.d/audit.rules‘ (thanks to the solution recommendations on the Cisofy portal).

# This is an example configuration suitable for most systems
# Before running with this configuration:
# - Remove or comment items which are not applicable
# - Check paths of binaries and files

###################
# Remove any existing rules
###################

-D

###################
# Buffer Size
###################
# Might need to be increased, depending on the load of your system.
-b 8192

###################
# Failure Mode
###################
# 0=Silent
# 1=printk, print failure message
# 2=panic, halt system
-f 1

###################
# Audit the audit logs.
###################
-w /var/log/audit/ -k auditlog

###################
## Auditd configuration
###################
## Modifications to audit configuration that occur while the audit (check your paths)
-w /etc/audit/ -p wa -k auditconfig
-w /etc/libaudit.conf -p wa -k auditconfig
-w /etc/audisp/ -p wa -k audispconfig

###################
# Monitor for use of audit management tools
###################
# Check your paths
-w /sbin/auditctl -p x -k audittools
-w /sbin/auditd -p x -k audittools

###################
# Special files
###################
-a exit,always -F arch=b32 -S mknod -S mknodat -k specialfiles
-a exit,always -F arch=b64 -S mknod -S mknodat -k specialfiles

###################
# Mount operations
###################
-a exit,always -F arch=b32 -S mount -S umount -S umount2 -k mount
-a exit,always -F arch=b64 -S mount -S umount2 -k mount

###################
# Changes to the time
###################
-a exit,always -F arch=b32 -S adjtimex -S settimeofday -S stime -S clock_settime -k time
-a exit,always -F arch=b64 -S adjtimex -S settimeofday -S clock_settime -k time
-w /etc/localtime -p wa -k localtime

###################
# Use of stunnel
###################
-w /usr/sbin/stunnel -p x -k stunnel

###################
# Schedule jobs
###################
-w /etc/cron.allow -p wa -k cron
-w /etc/cron.deny -p wa -k cron
-w /etc/cron.d/ -p wa -k cron
-w /etc/cron.daily/ -p wa -k cron
-w /etc/cron.hourly/ -p wa -k cron
-w /etc/cron.monthly/ -p wa -k cron
-w /etc/cron.weekly/ -p wa -k cron
-w /etc/crontab -p wa -k cron
-w /var/spool/cron/crontabs/ -k cron

## user, group, password databases
-w /etc/group -p wa -k etcgroup
-w /etc/passwd -p wa -k etcpasswd
-w /etc/gshadow -k etcgroup
-w /etc/shadow -k etcpasswd
-w /etc/security/opasswd -k opasswd

###################
# Monitor usage of passwd command
###################
-w /usr/bin/passwd -p x -k passwd_modification

###################
# Monitor user/group tools
###################
-w /usr/sbin/groupadd -p x -k group_modification
-w /usr/sbin/groupmod -p x -k group_modification
-w /usr/sbin/addgroup -p x -k group_modification
-w /usr/sbin/useradd -p x -k user_modification
-w /usr/sbin/usermod -p x -k user_modification
-w /usr/sbin/adduser -p x -k user_modification

###################
# Login configuration and stored info
###################
-w /etc/login.defs -p wa -k login
-w /etc/securetty -p wa -k login
-w /var/log/faillog -p wa -k login
-w /var/log/lastlog -p wa -k login
-w /var/log/tallylog -p wa -k login

###################
# Network configuration
###################
-w /etc/hosts -p wa -k hosts
-w /etc/network/ -p wa -k network

###################
## system startup scripts
###################
-w /etc/inittab -p wa -k init
-w /etc/init.d/ -p wa -k init
-w /etc/init/ -p wa -k init

###################
# Library search paths
###################
-w /etc/ld.so.conf -p wa -k libpath

###################
# Kernel parameters and modules
###################
-w /etc/sysctl.conf -p wa -k sysctl
-w /etc/modprobe.conf -p wa -k modprobe
###################

###################
# PAM configuration
###################
-w /etc/pam.d/ -p wa -k pam
-w /etc/security/limits.conf -p wa -k pam
-w /etc/security/pam_env.conf -p wa -k pam
-w /etc/security/namespace.conf -p wa -k pam
-w /etc/security/namespace.init -p wa -k pam

###################
# Puppet (SSL)
###################
#-w /etc/puppet/ssl -p wa -k puppet_ssl

###################
# Postfix configuration
###################
#-w /etc/aliases -p wa -k mail
#-w /etc/postfix/ -p wa -k mail
###################

###################
# SSH configuration
###################
-w /etc/ssh/sshd_config -k sshd

###################
# Hostname
###################
-a exit,always -F arch=b32 -S sethostname -k hostname
-a exit,always -F arch=b64 -S sethostname -k hostname

###################
# Changes to issue
###################
-w /etc/issue -p wa -k etcissue
-w /etc/issue.net -p wa -k etcissue

###################
# Log all commands executed by root
###################
-a exit,always -F arch=b64 -F euid=0 -S execve -k rootcmd
-a exit,always -F arch=b32 -F euid=0 -S execve -k rootcmd

###################
## Capture all failures to access on critical elements
###################
-a exit,always -F arch=b64 -S open -F dir=/etc -F success=0 -k unauthedfileacess
-a exit,always -F arch=b64 -S open -F dir=/bin -F success=0 -k unauthedfileacess
-a exit,always -F arch=b64 -S open -F dir=/home -F success=0 -k unauthedfileacess
-a exit,always -F arch=b64 -S open -F dir=/sbin -F success=0 -k unauthedfileacess
-a exit,always -F arch=b64 -S open -F dir=/srv -F success=0 -k unauthedfileacess
-a exit,always -F arch=b64 -S open -F dir=/usr/bin -F success=0 -k unauthedfileacess
-a exit,always -F arch=b64 -S open -F dir=/usr/local/bin -F success=0 -k unauthedfileacess
-a exit,always -F arch=b64 -S open -F dir=/usr/sbin -F success=0 -k unauthedfileacess
-a exit,always -F arch=b64 -S open -F dir=/var -F success=0 -k unauthedfileacess

###################
## su/sudo
###################
-w /bin/su -p x -k priv_esc
-w /usr/bin/sudo -p x -k priv_esc
-w /etc/sudoers -p rw -k priv_esc

###################
# Poweroff/reboot tools
###################
-w /sbin/halt -p x -k power
-w /sbin/poweroff -p x -k power
-w /sbin/reboot -p x -k power
-w /sbin/shutdown -p x -k power

###################
# Make the configuration immutable
###################
-e 2

# EOF

I reloaded my audit daemon config

auditctl -R /etc/audit/rules.d/audit.rules

Further configuration can be added (read this); read the auditd man page here, or to read logs you can use the ‘ausearch‘ tool (read the Ubuntu man page here).

Here is a great guide on viewing audit events.

Because we have this rule (‘-w /etc/passwd -p wa -k etcpasswd’) to monitor the password file, if I read the contents of /etc/passwd it will show up in the audit logs.

We can verify the access of this file by running this command

ausearch -f /etc/passwd

Output

ausearch -f /etc/passwd
----
time->Mon Jun 10 16:58:13 2019
type=PROCTITLE msg=audit(##########.897:3639): proctitle=##########################
type=PATH msg=audit(##########.897:3639): item=1 name="/etc/passwd" inode=1303 dev=fc:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=PATH msg=audit(##########.897:3639): item=0 name="/etc/" inode=12 dev=fc:01 mode=040755 ouid=0 ogid=0 rdev=00:00 nametype=PARENT cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=CWD msg=audit(##########.897:3639): cwd="/root"
type=SYSCALL msg=audit(##########.897:3639): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=556241ea9650 a2=441 a3=1b6 items=2 ppid=1571 pid=1572 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=446 comm="nano" exe="/bin/nano" key="etcpasswd"

I might write a list of handy ausearch commands and blog about this in the future.

SSH Permissions (fixed)

To fix the SSH permissions warning, I ran the command below to show the issue on my server.

./lynis show details FILE-7524
2019-05-25 23:00:04 Performing test ID FILE-7524 (Perform file permissions check)
2019-05-25 23:00:04 Test: Checking file permissions
2019-05-25 23:00:04 Using profile /utils/lynis/default.prf for baseline.
2019-05-25 23:00:04 Checking /etc/lilo.conf
2019-05-25 23:00:04   Expected permissions:
2019-05-25 23:00:04   Actual permissions:
2019-05-25 23:00:04   Result: FILE_NOT_FOUND
2019-05-25 23:00:04 Checking /root/.ssh
2019-05-25 23:00:04   Expected permissions: rwx------
2019-05-25 23:00:04   Actual permissions: rwxr-xr-x
2019-05-25 23:00:04   Result: BAD
2019-05-25 23:00:04 Warning: Incorrect permissions for file /root/.ssh [test:FILE-7524] [details:-] [solution:-]
2019-05-25 23:00:04 Using profile /utils/lynis/custom.prf for baseline.
2019-05-25 23:00:04 Checking permissions of /utils/lynis/include/tests_homedirs
2019-05-25 23:00:04 File permissions are OK
2019-05-25 23:00:04 ===---------------------------------------------------------------===

I tightened permissions on the /root/.ssh folder with this command

chmod 700 /root/.ssh

Configure minimum/maximum password age in /etc/login.defs (fixed)

I set a maximum and minimum password age in ‘/etc/login.defs‘

Defaults

PASS_MAX_DAYS   99999
PASS_MIN_DAYS   0
PASS_WARN_AGE   7
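I tightened these; values along these lines are typical (pick numbers that match your own policy):

PASS_MAX_DAYS   90
PASS_MIN_DAYS   1
PASS_WARN_AGE   7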

Add a legal banner to /etc/issue, to warn unauthorized users (fixed)

I edited ‘/etc/issue’ on Ubuntu and Debian.

Ubuntu 18.04 default

Ubuntu 18.04.2 LTS \n \l

Debian Default

Debian GNU/Linux 9 \n \l

Cisofy said this “Define a banner text to inform both authorized and unauthorized users about the machine and service they are about to access. The purpose is to share your policy before an access attempt is being made. Users should know that there privacy might be invaded, due to monitoring of the system and its resources, to protect the integrity of the system. Also unauthorized users should be deterred from trying to access it in the first place.“
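Something short and generic is enough; for example (the wording below is illustrative only):

Authorized access only. All activity on this system is logged and monitored.
Disconnect immediately if you are not an authorized user.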

Done

Default umask in /etc/login.defs could be more strict like 027 (fixed)

Related files..

  • /etc/profile
  • /etc/login.defs
  • /etc/passwd

I edited ‘/etc/login.defs’ and set

UMASK           027

I also added the following line to ‘/etc/profile’ so new shells pick up the stricter default (the UMASK value in ‘/etc/login.defs’ covers login sessions).

umask 027

Check iptables rules to see which rules are currently not used (fixed)

I ran the following command to review my firewall settings

iptables --list --numeric --verbose

TIP: Scan for open ports with ‘nmap’

Watch this handy video if you are not sure how to use nmap

Install nmap

sudo apt-get install nmap

I do set firewall rules in ufw (guide here) and ufw is a front end for iptables.

Scan for open ports with nmap

nmap -v -sT localhost

Starting Nmap 7.60 ( https://nmap.org ) at 2019-06-12 22:09 AEST
Initiating Connect Scan at 22:09
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 443/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 8080/tcp on 127.0.0.1
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Completed Connect Scan at 22:09, 0.02s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00012s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
25/tcp   open  smtp
80/tcp   open  http
443/tcp  open  https
8080/tcp open  http-proxy

Everything looked good.

Harden compilers like restricting access to root user only (fixed)

Cicofy said

Compilers turn source code into binary executable code. For a production system a compiler is usually not needed, unless package upgrades are performed by means of their source code (like FreeBSD ports collection). If a compiler is found, execution should be limited to authorized users only (e.g. root user).

To solve this finding, remove any unneeded compiler or change the file permissions. Usually chmod 700 or chmod 750 will be enough to prevent normal users from using a compiler. Related compilers are as, cc, ld, gcc, go etc. To determine what files are affected, check the Lynis log file, then chmod these files.

I ran

chmod 700 /usr/bin/as
chmod 700 /usr/bin/gcc

Turn off PHP information exposure (fixed)

Cisofy said

Disable the display of version information by setting the expose_php option to 'Off' in php.ini. As several instances of PHP might be installed, ensure that all related php.ini files have this setting turned off, otherwise this control will show up again.

This was already turned off, but an unused php.ini may have been detected.

I searched for all php.ini files

find / -name php.ini

Output

/etc/php/7.3/apache2/php.ini
/etc/php/7.3/fpm/php.ini
/etc/php/7.3/cli/php.ini

Yep, the CLI version of php.ini had the following

expose_php = On

I set this to Off

Purge old/removed packages (59 found) with the aptitude purge or dpkg --purge command. This will clean up old configuration files, cron jobs and startup scripts. (fixed)

Cisofy said

While not directly a security concern, unpurged packages are not installed but still have remains left on the system (e.g. configuration files). In case software is reinstalled, an old configuration might be applied. Proper cleanups are therefore advised.

To remove the unneeded packages, select the ones marked with the 'rc' status. This means the package is removed, but the configuration files are still there.

I ran the following recommended command

dpkg -l | grep "^rc" | cut -d " " -f 3 | xargs dpkg --purge

Done

Install debsums utility for the verification of packages with known good database (fixed)

Cisofy said

Install the debsums utility to do more in-depth auditing of your packages.

I ran the following suggested command

apt-get install debsums

I googled and found this handy page

I scanned packages and asked ‘debsums’ to only show errors with this command

sudo debsums -s

The only error was..

debsums: missing file /usr/bin/pip (from python-pip package)

I did not need pip so I removed it

apt-get remove --purge python-pip

Install a PAM module for password strength testing like pam_cracklib or pam_passwdqc (ignored)

I ignored this as I do not allow password logins and I am the only user (it’s not a multi-user system).

I whitelist logins to specific IPs.

I only allow SSH access with a private key and a long passphrase.

I have 2FA OTP enabled at login.

I have Cloudflare over my domain.

I set up fail2ban to auto-block failed logins using this guide.

Reboot (fixed)

I restarted the server

shutdown -r now

Done

Check available certificates for expiration (fixed)

I tested my SSL certificate with https://dev.ssllabs.com

https://dev.ssllabs.com/ scan of my site

Add legal banner to /etc/issue.net, to warn unauthorized users (fixed)

Cisofy said…

Define a banner text to inform both authorized and unauthorized users about the machine and service they are about to access. The purpose is to share your policy before an access attempt is being made. Users should know that there privacy might be invaded, due to monitoring of the system and its resources, to protect the integrity of the system. Also unauthorized users should be deterred from trying to access it in the first place.

Do not reveal sensitive information, like the specific goal of the machine, or what can be found on it. Consult with your legal department, to determine appropriate text.

I edited the file ‘/etc/issue.net’ and added a default pre login message (same as ‘/etc/issue’).

Install Apache mod_evasive to guard webserver against DoS/brute force attempts (ignored)

I ignored this message as I don’t use Apache (I use the Nginx web server). I have also blocked Apache from being installed.

I clicked Ignore in the Cisofy portal.

Ignore Button

Install Apache modsecurity to guard webserver against web application attacks (ignored)

I clicked Ignore for this one too

Ignore Button

Check your Nginx access log for proper functioning (reviewed)

Cisofy said…

Disabled logging:
Check in the Lynis log for entries which are disabled, or in the nginx configuration (access_log off).

Missing logging:
Check for missing log files. They are references in the configuration of nginx, but not on disk. The Lynis log will reveal to what specific files this applies.

I checked my Nginx config (‘/etc/nginx/nginx.conf‘) for all log references and ensured the logs were writing to disk (OK).

I checked my ‘/etc/nginx/sites-available/default‘ config and I did have 2 settings of ‘access_log off‘ (these were added during setup for two reporting subfolders for the Nixstats agent).

I restarted Nginx

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Check what deleted files are still in use and why. (fixed)

Cisofy said..

Why it matters
Deleted files may sometimes be in use by applications. Normally this should not happen, as an application should delete a file and release the file handle. This test might discover malicious software, trying to hide its presence on the system. Investigate the related files by determining which application keeps it open and the related reason.

Details
The following details have been found as part of the scan.

/lib/systemd/systemd-logind(systemd-l)
/tmp/ib1ekCtf(mysqld)
/tmp/ibhuK1At(mysqld)
/tmp/ibmTO5F5(mysqld)
/tmp/ibR0dkxD(mysqld)
/tmp/ibvf69KH(mysqld)
/tmp/.ZendSem.gq3mnz(php-fpm7.)
/usr/bin/python3.6(networkd-)
/usr/bin/python3.6(unattende)
/var/log/mysql/error.log.1(mysqld)

I ran the following command to show deleted files in use

lsof | grep deleted

I noticed on my database server a php-fpm service was using files. I don’t have a webserver enabled on this server, so I uninstalled the web-based services.

I have separate web and database servers.

sudo apt-get remove apache*
sudo apt-get remove -y --purge nginx*
sudo apt-get remove -y --purge php7*
sudo apt autoremove

Check DNS configuration for the dns domain name (fixed)

Cisofy said..

Some software can work incorrectly when the system can't resolve itself. 
Add the IP name and fully qualified domain name (FQDN) to /etc/hosts. Usually this is done with an entry of 127.0.0.1, or 127.0.1.1 (to leave the localhost entry alone). 

I edited my ‘/etc/hosts’ file

I added a domain name to the end of the localhost entry and added a new line with my server(s) IP and domain name
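As a sketch (the host name and public IP below are placeholders for my real ones):

127.0.0.1       localhost
127.0.1.1       myserver.example.com myserver
203.0.113.10    myserver.example.com myserver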

Disable the ‘VRFY’ command (fixed)

I was advised to run this command

postconf -e disable_vrfy_command=yes

(Debian) Enable sysstat to collect accounting (no results) (fixed)

Cisofy said..

The sysstat is collection of utilities to provide system information insights. While one should aim for the least amount of packages, the sysstat utilities can be a good addition to help recording system details. They can provide insights for performance monitoring, or guide in discovering unexpected events (like a spam run). If you already use extensive system monitoring, you can safely ignore this control.

I ran the suggested commands

apt-get install sysstat
sed -i 's/^ENABLED="false"/ENABLED="true"/' /etc/default/sysstat

More info on sysstat here.

Consider running ARP monitoring software (arpwatch,arpon) (fixed)

Cisofy said

Networks are very dynamic, often with devices come and go as they please. For sensitive machines and network zones, you might want to know what happens on the network itself. An utility like arpwatch can help tracking changes, like new devices showing up, or others leaving the network.

I read this page to set up and configure arpwatch

sudo apt-get install arpwatch
/etc/init.d/arpwatch start

I will add more on how to use arpwatch soon.

Disable drivers like USB storage when not used, to prevent unauthorized storage or data theft (fixed)

Cisofy said..

Disable drivers like USB storage when not used. This helps preventing unauthorized storage, data copies, or data theft.

I ran the suggested fix

echo "# Block USB storage" >> /etc/modprobe.d/disable-usb-storage.conf
echo "install usb-storage /bin/false" >> /etc/modprobe.d/disable-usb-storage.conf

Determine if automation tools are present for system management (ignored)

I ignored this one

Ignore Button

One or more sysctl values differ from the scan profile and could be tweaked

Cisofy said..

By means of sysctl values we can adjust kernel related parameters. Many of them are related to hardening of the network stack, how the kernel deals with processes or files. This control is a generic test with several sysctl variables (configured by the scan profile).

I was advised to adjust these settings

  • net.ipv4.conf.all.send_redirects=0
  • net.ipv4.conf.default.accept_source_route=0
  • kernel.sysrq=0
  • net.ipv4.conf.all.log_martians=1
  • net.ipv4.conf.default.log_martians=1
  • kernel.core_uses_pid=1
  • kernel.kptr_restrict=2
  • fs.suid_dumpable=0
  • kernel.dmesg_restrict=1

I edited ‘/etc/sysctl.conf‘ and made the advised changes (I Googled each item first). A sketch of the resulting entries is below.
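The corresponding /etc/sysctl.conf entries look like this; apply them without a reboot using sysctl -p:

net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
kernel.core_uses_pid = 1
kernel.kptr_restrict = 2
fs.suid_dumpable = 0
kernel.dmesg_restrict = 1

sudo sysctl -p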

Install a file integrity tool to monitor changes to critical and sensitive files (fixed)

Cisofy said..

To monitor for unauthorized changes, a file integrity tool can help with the detection of such event. Each time the contents or the properties of a file change, it will have a different checksum. With regular checks of the related integrity database, discovering changes becomes easy. Install a tool like AIDE, Samhain or Tripwire to monitor important system and data files. Additionally configure the tool to alert system or security personnel on events.

It also gave a solution

# Step 1: Install package with appropriate command
apt-get install aide
yum install aide

# Step 2: Initialise database
aide --init
# If this fails: try aideinit

# Step 3: Copy newly created database (/var/lib/aide)
cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

# Step 4:
aide --check

I installed ‘aide’ (read the guide here).

TIP: Long story, but the steps above were not exactly correct. Thanks to this post I was able to set up aide without seeing this error.

Couldn't open file /var/lib/aide/please-dont-call-aide-without-parameters/aide.db.new for writing

This is how I installed aide

apt-get install aide
apt-get install aide-common

I initialised aide.

aideinit

This was the important part (I was stuck for hours on this one)

aide.wrapper --check

I can run ‘aide.wrapper --check’ at any time to see what files have changed.

I could see many files had changed since the initial scan (e.g. MySQL data, log files and the nano search history).

Nice

Now lets schedule daily checks and create a cron job.

cat /folder/runaide.sh
#!/bin/bash

sendemail -f [email protected] -t [email protected] -u "CRON: AIDE Run (yourserver.com) START" -m "/folder/runaide.sh" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp ***my*google*gsuite*email*app*password***

MYDATE=`date +%Y-%m-%d`
MYFILENAME="Aide-"$MYDATE.txt
/bin/echo "Aide check !! `date`" > /tmp/$MYFILENAME
/usr/bin/aide.wrapper --check > /tmp/myAide.txt
/bin/cat /tmp/myAide.txt|/bin/grep -v failed >> /tmp/$MYFILENAME
/bin/echo "**************************************" >> /tmp/$MYFILENAME
/usr/bin/tail -100 /tmp/myAide.txt >> /tmp/$MYFILENAME
/bin/echo "****************DONE******************" >> /tmp/$MYFILENAME

#/usr/bin/mail -s"$MYFILENAME `date`" [email protected] < /tmp/$MYFILENAME

sendemail -f [email protected] -t [email protected] -u "CRON: AIDE Run (yourserver.com) END" -m "/folder/runaide.sh" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp ***my*google*gsuite*email*app*password*** -a /tmp/$MYFILENAME -a /tmp/myAide.txt

Above thanks to this post

I set up a cron job to run this daily

#Run AIDE
0 6 * * * /folder/runaide.sh && curl -fsS --retry 3 https://hc-ping.com/######-####-####-####-############> /dev/null

ACCT-9622 – Enable process accounting. (fixed)

Solution:

Install “acct” process and login accounting.

sudo apt-get install acct

Start the “acct” service

/etc/init.d/acct start
touch /var/log/pacct
chown root /var/log/pacct
chmod 0644 /var/log/pacct
accton /var/log/pacct 

Check the status

/etc/init.d/acct status
* acct.service - LSB: process and login accounting
   Loaded: loaded (/etc/init.d/acct; generated)
   Active: active (exited) since Sun 2019-05-26 19:42:15 AEST; 4min 42s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 0 (limit: 4660)
   CGroup: /system.slice/acct.service

May 26 19:42:15 servername systemd[1]: Starting LSB: process and login accounting...
May 26 19:42:15 servername acct[27419]: Turning on process accounting, file set to '/var/log/account/pacct'.
May 26 19:42:15 servername systemd[1]: Started LSB: process and login accounting.
May 26 19:42:15 servername acct[27419]:  * Done.

Run CISOfy recommended commands

touch /var/log/pacct
chown root /var/log/pacct
chmod 0644 /var/log/pacct
accton /var/log/pacct 

Manual Scan of Lynis

I re-ran an audit of the system (and uploaded the report to the portal) so I can see how I am progressing.

./lynis audit system --upload

I then checked the error status and the warnings were resolved.

Progress?

I rechecked my servers and all warnings are solved; now I just need to work on information-level issues.

Warning-level errors fixed, informational items to go

Cisofy Portal Overview

Quick breakdown of the Cisofy Portal

Overview Tab (portal.cisofy.com)

The Overview tab displays any messages, the change log, API information, a link to add a new system, settings etc.

Lynis Overview tab

Dashboard Tab (portal.cisofy.com)

The dashboard tab will display compliant systems, any outdated systems, alerts and events.

Lynis Dashboard screenshot https://portal.cisofy.com/enterprise/dashboard/

TIP: If you have a system that reports “Outdated” run the following command.

./lynis audit system --upload

Systems Tab (portal.cisofy.com)

The systems tab shows all systems, OS version, warnings, information counts, the date the system’s client last uploaded a report and the client version.

Systems tab shows all systems, OS version, warnings, information counts, date client last uploaded a report update and client version

If you are making many changes and manual Lynis scans, keep an eye on your upload credits. You can see from the images above and below that I have lowered the number of suggested actions to harden my servers (red text).

Lynis scans reached

Clicking a host name reveals a summary of the system.

Clicking a system reveals a summary of the system.

Remaining information level issues are listed.

I can click Solve and see more information about the issue to resolve.

TIP: I thought it would be a good idea to copy this list to a spreadsheet for detailed tracking.

Spreadsheet listing issues to complete and done

I had another issue appear a few days later.

Compliance Tab (portal.cisofy.com)

A lot of information is listed here.

Compliance Tab

Best practice guides are available

Best practice guides: https://portal.cisofy.com/compliance/

I could go on and on, but https://cisofy.com/ is awesome.

TIP: Manually updating Lynis

From the command line I can view the Lynis version with this command

./lynis --version
2.7.4

To update the Lynis git repository from the Lynis folder run this command

git pull
Already up to date.

Automatically updating and running Lynis scans

I added the following commands to my crontab to update then scan and report Lynis results to the portal.

TIP: Use https://crontab.guru/ to choose the right time to run commands (I chose 5 minutes past 1 AM every day to update and 5 minutes past 2 AM to run a scan).


#Lynis Update
5 1 * * * root /bin/bash -c 'cd /utils/lynis && /usr/bin/git pull origin master'

#Lynis Scan
5 2 * * * root /bin/bash -c '/utils/lynis/lynis audit system --upload'

Troubleshooting

FYI: Lynis log file location: /var/log/lynis.log

Cisofy Enterprise Conclusion

Pros:

  • I can learn so much about securing Linux just from the Cisofy Fix recommendations.
  • I have secured my server beyond what I thought possible.
  • Very active development on Github: https://github.com/CISOfy/lynis/
  • Cisofy has a very good interface and updates often.
  • New security issues are synced down and included in new scans (if you update)

Cons:

  • I am unable to pay for this for my servers here in Australia (European legal issues).
  • Needs Hardware 2FA

Tips

I added my Lynis folder to the Linux $PATH variable:

export PATH=$PATH:/folder/lynis

View the latest repository version information here.

Make sure you have curl installed to allow reports to upload. I had this error on Debian 9.4:

Fatal: can’t find curl binary. Please install the related package or put the binary in the PATH. Quitting..
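On Debian/Ubuntu the fix is simply installing curl from the package repositories (a minimal example):

sudo apt-get update
sudo apt-get install curl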

Lynis Enterprise API

View the Lynis Enterprise API documentation here

Lynis Enterprise Support

Support can be found here; email support: [email protected].

Getting started guide is found here.

Bonus: Setting Up Content Security Policy and reporting violations to https://report-uri.com/

I have a few older posts on Content Security Policies (CSP) but they are a bit dated.

  • 2016 – Beyond SSL with Content Security Policy, Public Key Pinning etc
  • 2018 – Set up Feature-Policy, Referrer-Policy and Content Security Policy headers in Nginx

Wikipedia Definition of a Content Security Policy

Content Security Policy (CSP) is a computer security standard introduced to prevent cross-site scripting (XSS), clickjacking and other code injection attacks resulting from execution of malicious content in the trusted web page context.[1] It is a Candidate Recommendation of the W3C working group on Web Application Security,[2] widely supported by modern web browsers.[3] CSP provides a standard method for website owners to declare approved origins of content that browsers should be allowed to load on that website—covered types are JavaScript, CSS, HTML frames, web workers, fonts, images, embeddable objects such as Java applets, ActiveX, audio and video files, and other HTML5 features.

If you want to learn how to set up CSPs, head over to https://report-uri.com/products/content_security_policy or https://report-uri.com/home/tools and read more.

I did have Content Security Policies (CSP) set up a few years back, but I had issues with broken resources. A lack of time on my part to investigate the issues forced me to disable the Content Security Policy (CSP). I should have changed the “Content-Security-Policy” header to “Content-Security-Policy-Report-Only.”

I will re-add the Content Security Policy (CSP) to my site but this time I will not disable it and will report to https://report-uri.com/, and if need be I will change the header from “content-security-policy” to “content-security-policy-report-only”. That way a broken policy won’t take down my site in future.

If you want to set up a Content Security Policy header with good reporting of any violations of your policy, simply head over to https://report-uri.com/ and create a new account.

Read the official Report URI help documents here: https://docs.report-uri.com/.

Create a Content Security Policy

The hardest part of creating a Content Security Policy is knowing what to add where.

You could generate your own Content Security Policy by heading here (https://report-uri.com/home/generate) but that will take a while.

Create a CSP

TIP: Don’t make your policy live straight away by using the “Content-Security-Policy” header, instead use the “Content-Security-Policy-Report-Only” header.

To create a Content Security Policy faster, I would recommend using this Firefox plugin to generate a starter Content Security Policy.

Screenshot of https://addons.mozilla.org/en-US/firefox/addon/laboratory-by-mozilla/

Install this plugin to Firefox, enable it and click the Plugins icon and ensure “Record this site…” is ticked.

Laboratory plugin in Firefox

Then simply browse to your site (browse as many pages as possible) and a Content Security Policy will be generated based on the content on the page(s) loaded.

TIP: Always review the generated CSP; it allows everything needed to display your site, so make sure you trust every origin listed.

Export the CSP from the Firefox plugin to the clipboard

This is the policy that was generated for me after 5 minutes of browsing 20 pages.

default-src 'none'; connect-src 'self' https://onesignal.com/api/v1/apps/772f27ad-0d58-494f-9f06-e89f72fd650b/icon https://onesignal.com/api/v1/notifications https://onesignal.com/api/v1/players/67a2f360-687f-4513-83e8-f477da085b26 https://onesignal.com/api/v1/players/67a2f360-687f-4513-83e8-f477da085b26/on_session https://yoast.com/feed/widget/; font-src 'self' data: https://fearby-com.exactdn.com https://fonts.gstatic.com; form-action 'self' https://fearby.com https://syndication.twitter.com https://www.paypal.com; frame-src 'self' https://en-au.wordpress.org https://fearby.com https://googleads.g.doubleclick.net https://onesignal.com https://platform.twitter.com https://syndication.twitter.com https://www.youtube.com; img-src 'self' data: https://a.impactradius-go.com https://abs.twimg.com https://fearby-com.exactdn.com https://healthchecks.io https://pagead2.googlesyndication.com https://pbs.twimg.com https://platform.twitter.com https://secure.gravatar.com https://syndication.twitter.com https://ton.twimg.com https://www.paypalobjects.com; script-src 'self' 'unsafe-eval' 'unsafe-inline' https://adservice.google.com.au/adsid/integrator.js https://adservice.google.com/adsid/integrator.js https://cdn.onesignal.com/sdks/OneSignalPageSDKES6.js https://cdn.onesignal.com/sdks/OneSignalSDK.js https://cdn.syndication.twimg.com/tweets.json https://fearby-com.exactdn.com/wp-content/cache/fvm/1553589606/out/footer-45a3439e.min.js https://fearby-com.exactdn.com/wp-content/cache/fvm/1553589606/out/footer-e6604f67.min.js https://fearby-com.exactdn.com/wp-content/cache/fvm/1553589606/out/footer-f4213fd6.min.js https://fearby-com.exactdn.com/wp-content/cache/fvm/1553589606/out/header-1583146a.min.js https://fearby-com.exactdn.com/wp-content/cache/fvm/1553589606/out/header-823c0a0e.min.js https://fearby-com.exactdn.com/wp-content/piwik.js https://onesignal.com/api/v1/sync/772f27ad-0d58-494f-9f06-e89f72fd650b/web https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js https://pagead2.googlesyndication.com/pagead/js/r20190610/r20190131/show_ads_impl.js https://pagead2.googlesyndication.com/pub-config/r20160913/ca-pub-9241521190070921.js https://platform.twitter.com/js/moment~timeline~tweet.a20574004ea824b1c047f200045ffa1e.js https://platform.twitter.com/js/tweet.73b7ab8a56ad3263cad8d36ba66467fc.js https://platform.twitter.com/widgets.js https://s.ytimg.com/yts/jsbin/www-widgetapi-vfll-F3yY/www-widgetapi.js https://www.googletagservices.com/activeview/js/current/osd.js https://www.youtube.com/iframe_api; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com/ https://onesignal.com/sdks/ https://platform.twitter.com/css/ https://ton.twimg.com/tfw/css/; worker-src 'self' 

I can truncate the starter Content Security Policy and remove some elements. Duplicated entries that point to separate files on the same remote server can be replaced with the origin or a wildcard (if I trust that server).

I truncated the policy with the help of the Sublime Text editor and the Report URI CSP Generator.
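As an illustration (a trimmed, hypothetical before/after of one directive only), the individual ExactDN script files listed by the generator can be collapsed to the trusted origin:

Before (individual files listed by the generator):
script-src 'self' https://fearby-com.exactdn.com/wp-content/cache/fvm/1553589606/out/footer-45a3439e.min.js https://fearby-com.exactdn.com/wp-content/cache/fvm/1553589606/out/header-1583146a.min.js ...

After (collapsed to the origin):
script-src 'self' https://fearby-com.exactdn.com ...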

I added the following header to the file ‘/etc/nginx/sites-available/default’ (inside the server block).

add_header "Content-Security-Policy-Report-Only" "default-src 'self' https://fearby.com/; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://adservice.google.com.au https://adservice.google.com https://cdn.onesignal.com https://cdn.syndication.twimg.com https://fearby-com.exactdn.com https://onesignal.com https://pagead2.googlesyndication.com https://platform.twitter.com https://s.ytimg.com https://www.googletagservices.com https://www.youtube.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com https://onesignal.com https://platform.twitter.com https://ton.twimg.com; img-src 'self' data: https://a.impactradius-go.com https://abs.twimg.com https://fearby-com.exactdn.com https://healthchecks.io https://pagead2.googlesyndication.com https://pbs.twimg.com https://platform.twitter.com https://secure.gravatar.com https://syndication.twitter.com https://ton.twimg.com https://www.paypalobjects.com; font-src 'self' data: https://fearby-com.exactdn.com https://fonts.gstatic.com; connect-src 'self' https://onesignal.com https://yoast.com; object-src https://fearby.com/; frame-src 'self' https://en-au.wordpress.org https://fearby.com https://googleads.g.doubleclick.net https://onesignal.com https://platform.twitter.com https://syndication.twitter.com https://www.youtube.com; worker-src 'self'; form-action 'self' https://fearby.com https://syndication.twitter.com https://www.paypal.com; report-uri https://fearby.report-uri.com/r/d/csp/reportOnly";

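After editing the NGINX config, the header only takes effect once NGINX has been tested and reloaded (a minimal sketch, assuming a systemd-based Ubuntu server):

# Check the configuration for syntax errors
sudo nginx -t

# Reload NGINX so the new header is served
sudo systemctl reload nginx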

Any issues with the Content Security policy will be reported to my web browsers development console and to https://report-uri.com/.

My Chrome development console reports an issue with a graphic not loading from Namecheap.

Namecheap icon not loading

The event was also reported to the Report URI server.

Screenshot of reports at https://report-uri.com/account/reports/csp/

Don’t forget to check the reports often. When you have no more issues left you can make the Policy live by renaming the “Content-Security-Policy-Report-Only” header to “Content-Security-Policy”.

FYI: I had directive reports for ‘script-src-elem’; it looks like this is a new directive added in Chrome 75.

Don’t forget to visit the Report URI setup page and get a URL for where live reports get sent to.

Screenshot of https://report-uri.com/account/setup/

If you go to the Generate CSP page and import your website’s policy, you can quickly add new exclusions to your policy.

After a few months of testing and tweaking the policy, I can make it live (‘Content-Security-Policy’).

Lynis Enterprise

I have learned so much by using Lynis Enterprise from https://cisofy.com/

I am subscribed to issues notifications at https://github.com/CISOfy/lynis/issues/ and observe about 20 notifications a day in this GitHub community. Maybe one day I will contribute to this project?

Finally, Did the Bank reply?

Yes but it was not very informative.

Dear Simon,

Thank you very much  for the information and we have completely removed the reference that you have raised concern.
We are extremely sorry and apology for the inconvenience caused due to this mistake.

We are thankful for the information and support you have extended.

I tried to inquire how this happened and each time the answer was vague.

Thank you for your support. This was mistakenly used during the testing and we have warned the vendor as well.
I like to request you to close the ticket for this as we have already removed this.

We like to assure such things won’t happen in future.

It looks like the bank used my blog post to create their CSP.

Oh well, at least I have secured my servers.

Thanks for reading.

 

 

Version:

v1.1 – Changed the URL, Removed Ads and added a Lynis Enterprise Conclusion

v1.01 – Fixed the URL

v1.0 – Initial Version

Filed Under: 2nd Factor, CDN, Content Security Policy, Cron, Database, Debian, NGINX, One Signal, PHP, Security, Ubuntu, Vulnerabilities, Vulnerability, Weakness, Website Tagged With: Bank, Cisofy, Content Security Policy, Hacked, Linus

Setting up DNSSEC on a Namecheap domain hosted on UpCloud using CloudFlare

July 15, 2018 by Simon

This is how I set up DNSSEC (DNS Security Extensions) on a Namecheap domain hosted on UpCloud, using Cloudflare.

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

About DNSSEC

Wikipedia has a great write-up on DNSSEC also read the ICANN page on DNSSEC.

Snip “DNSSEC is a set of extensions to DNS which provide to DNS clients (resolvers) origin authentication of DNS data, authenticated denial of existence, and data integrity, but not availability or confidentiality.”

https://daniellbenway.net has a great video explaining DNSSEC

Handy DNSSEC Links

  • NameCheap – What is DNSSEC.
  • NameCheap – How can I check if DNSSEC is working?
  • Namecheap – Managing DNSSEC
  • dnsviz.net – View my DNSSEC Status
  • Cloudflare – How does DNSSEC work?
  • IETF RFC 3658 – Delegation Signer (DS) Resource Record (RR)
  • DNSSEC – Domain Name System – Security Extensions

Let’s view my DNSSEC status now

https://dnssec-analyzer.verisignlabs.com/ can help you check your site’s DNSSEC status.

DNSSEC Analyzer - https://dnssec-analyzer.verisignlabs.com/

Prerequisites

This guide assumes you have already purchased a domain and set it up with say UpCloud hosting (read my setup guide here).

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Read my old guide here (created while setting up Cloudflare on the Vultr host) to see how to set up Cloudflare.

Setting up DNSSEC

First I logged into My Namecheap account, selected my domain, selected Advanced DNS and enabled DNSSEC.

Screenshot of Namecheap Advanced DNS page

I can see a number of values for DNSSEC KeyType/Algorithm/Digest Type and Digest. Below are the options in the dropdowns for Algorithm and Digest Type.

DNSSEC Algorithms

DNSSEC Algorithms (RSA, MD5 etc)

DNSSEC Digest Types

DNSSEC Digest Types (SHA etc)

I contacted NameCheap support and they said I needed to contact my UpCloud hosts to get relevant DNSSEC values.

My domain was purchased at NameCheap but my domain is routed by Cloudflare DNS.

Namecheap DNS Nameservers pointing to cloudflare

By chance, I logged into my Cloudflare account and noticed they have a DNSSEC section under DNS. Nice.

Screenshot of Cloudflare menu, DNS highlighted.

I enabled DNSSEC

Enable Cloudflare DNSSEC records

Cloudflare offers all the relevant DNSSEC values.

Screenshot of Cloudflare DNSSEC generated Values

I entered these values into Namecheap under Advanced DNS on my domain.

Screenshot of adding a DNS record at Namecheap

After 5 mins I re-ran the DNSSEC Analyzer tool.

Screenshot of http://dnssec-debugger.verisignlabs.com/ Results

Hmmm, Cloudflare seems to think something is wrong 🙁

Screenshot of Cloudflare saying DNSSEC is not configured

I ran a DNS DS Lookup on my site. Everything appears ok.

Screenshot of https://mxtoolbox.com/SuperTool.aspx?action=mx
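The same check can be done from the command line with dig (a minimal sketch, using my domain as the example):

# DS record published at the parent zone (should match the values from Cloudflare)
dig DS fearby.com +short

# Query the zone with DNSSEC data included; look for RRSIG records in the answer
dig fearby.com A +dnssec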

I re-added the record in Namecheap, waited 15 mins, and this time Cloudflare was happy. Maybe I just needed to wait a little longer for DNS propagation?

Screenshot of cloudflare showing DNSSEC is all ok.

I tested my DNS servers with DNS Root Canary

DNS test with https://rootcanary.org/

I tested my site’s DNSSEC with https://zonemaster.iis.se/

Screenshot of https://zonemaster.iis.se/

Done

Skipping Cloudflare

I found that I can simply skip Cloudflare by enabling Premium DNS at Namecheap

Then enabling DNSSEC

Easy (totally independent of Cloudflare)

I hope this guide helps someone.

Please consider using my referral code and get $25 UpCloud VM credit for free.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

V1.3 Namecheap DNSSEC

V1.2 ICANN DNSSEC link

V1.1 https://daniellbenway.net explainer video.

v1.0 Initial Post

Filed Under: CDN, Cloudflare, DNS, DNSSEC, Domain Tagged With: Cloudflare, DNS, dnssec, namecheap

Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 3 of 4

June 5, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and compare Vultr, Digital Ocean and UpCloud – Part 3 of 4

Read Part 1, Part 2, Part 3 or Part 4

I used these commands to generate bonnie++ reports from the data in part 2

echo "<h1>Bonnie Results</h1>" > /www-data/bonnie.html
echo "<h2>Vultr (Sydney)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528177870,4G,,656,99,308954,68,113706,33,1200,92,188671,30,10237,251,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26067us,119ms,179ms,29139us,26069us,16118us,1463us,703us,880us,263us,119us,593us" | bon_csv2html >> /www-data/bonnie.html
echo "<h2>Digital Ocean (London)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528186398,4G,,699,99,778636,74,610414,60,1556,99,1405337,59,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,17678us,10099us,17014us,7027us,3067us,2366us,1243us,376us,611us,108us,59us,181us" | bon_csv2html >> /www-data/bonnie.html
echo "<h2>UpCloud (Singapore)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528226703,4G,,1014,99,407179,24,366622,32,2137,99,451886,17,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,11297us,54232us,16443us,4949us,44883us,1595us,264us,340us,561us,138us,66us,327us" | bon_csv2html >> /www-data/bonnie.html

Image of results here

Bonnie Results

Network Performance

IMHO network latency has the biggest impact on server performance. Read my old post on scalability on a budget here. I am in Australia, and having a server in Singapore was too far away; the latency was terrible.

Here is a non-scientific example of pinging a Vultr, Digital Ocean and UpCloud server in three different locations (and Google).

Ping Test

Test Ping Results

  • Vultr 132ms Ping Average (Sydney)
  • Digital Ocean 322ms Ping Average (London)
  • UpCloud 180ms Ping Average (Singapore)
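These averages came from simple ping runs along these lines (a minimal example; substitute your own server’s address):

# Send 10 ICMP echo requests; the summary line shows min/avg/max round-trip times
ping -c 10 fearby.com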

Latency matters; run a https://www.webpagetest.org/ scan over your site to see why.

Adding HTTPS added almost 0.7 seconds to communications in the past on Digital Ocean (a few thousand kilometres away). The longer the latency, the longer HTTPS handshakes take.

SSL

Deploying a server to Singapore (in my experience) is bad if your visitors are in Australia, but deploying to other regions may be lower in cost. It’s a trade-off.

Server Location

Deploying servers as close as you can to your customers is the best tip for performance.

Deploy servers close to your customers

Also, consider setting up Image Optimization and Image CDN plugins (guide here) in WordPress and using Cloudflare (guide here)

Benchmarking with SysBench

Install CPU Benchmark

sudo apt-get install sysbench

CPU Benchmark (Vultr/Sydney)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          39.1700s
    total number of events:              10000
    total time taken by event execution: 39.1586
    per-request statistics:
         min:                                  2.90ms
         avg:                                  3.92ms
         max:                                 20.44ms
         approx.  95 percentile:               7.43ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   39.1586/0.00

39.15 seconds

CPU Benchmark (Digital Ocean/London)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          33.4382s
    total number of events:              10000
    total time taken by event execution: 33.4352
    per-request statistics:
         min:                                  3.24ms
         avg:                                  3.34ms
         max:                                  6.45ms
         approx.  95 percentile:               3.45ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   33.4352/0.00

33.43 sec

CPU Benchmark (UpCloud/Singapore)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1



Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          23.7809s
    total number of events:              10000
    total time taken by event execution: 23.7780
    per-request statistics:
         min:                                  2.35ms
         avg:                                  2.38ms
         max:                                  6.92ms
         approx.  95 percentile:               2.46ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   23.7780/0.00

23.77 sec

Surprisingly, 1st place in prime generation goes to UpCloud, then Digital Ocean then Vultr.  UpCloud has some good processors.

Processors:

  • UpCloud (Singapore): Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz
  • Digital Ocean (London): Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
  • Vultr (Sydney): Virtual CPU a7769a6388d5 (Masked/Hidden CPU @ 2.40GHz)

(Lower is better)

prime benchmark results

(oops, typo in the chart should say Vultr)

Benchmark the file IO

Confirm free space

df -h /

Install Sysbench

sudo apt-get install sysbench

I had 10GB free on all servers (Vultr, Digital Ocean and UpCloud) so I created a 10GB test file.

sysbench --test=fileio --file-total-size=10G prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

128 files, 81920Kb each, 10240Mb total
Creating files for the test...

Now I can run the benchmark using the pre-created test files.

sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run

SysBench description from the Ubuntu manpage.

“SysBench is a modular, cross-platform and multi-threaded benchmark tool for evaluating OS parameters that are important for a system running a database under intensive load. The idea of this benchmark suite is to quickly get an impression about system performance without setting up complex database benchmarks or even without installing a database at all.”

SysBench Results (Vultr/Sydney)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  385920 Read, 257280 Write, 823266 Other = 1466466 Total
Read 5.8887Gb  Written 3.9258Gb  Total transferred 9.8145Gb  (33.5Mb/sec)
 2143.98 Requests/sec executed

Test execution summary:
    total time:                          300.0026s
    total number of events:              643200
    total time taken by event execution: 182.4249
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.28ms
         max:                                 18.12ms
         approx.  95 percentile:               0.55ms

Threads fairness:
    events (avg/stddev):           643200.0000/0.00
    execution time (avg/stddev):   182.4249/0.00

SysBench Results (Digital Ocean/London)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  944280 Read, 629520 Write, 2014432 Other = 3588232 Total
Read 14.409Gb  Written 9.6057Gb  Total transferred 24.014Gb  (81.968Mb/sec)
 5245.96 Requests/sec executed

Test execution summary:
    total time:                          300.0024s
    total number of events:              1573800
    total time taken by event execution: 160.5558
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.10ms
         max:                                 18.62ms
         approx.  95 percentile:               0.34ms

Threads fairness:
    events (avg/stddev):           1573800.0000/0.00
    execution time (avg/stddev):   160.5558/0.00

SysBench Results (UpCloud/Singapore)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  994320 Read, 662880 Write, 2121090 Other = 3778290 Total
Read 15.172Gb  Written 10.115Gb  Total transferred 25.287Gb  (86.312Mb/sec)
 5523.97 Requests/sec executed

Test execution summary:
    total time:                          300.0016s
    total number of events:              1657200
    total time taken by event execution: 107.4434
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.06ms
         max:                                 15.43ms
         approx.  95 percentile:               0.13ms

Threads fairness:
    events (avg/stddev):           1657200.0000/0.00
    execution time (avg/stddev):   107.4434/0.00

Comparison

Sysbench Results table

sysbench fileio results (text)

Read

  • Vultr (Sydney): 385,920
  • Digital Ocean (London): 944,280
  • UpCloud (Singapore): 994,320

Write

  • Vultr (Sydney): 257,280
  • Digital Ocean (London): 629,520
  • UpCloud (Singapore): 662,880

Other

  • Vultr (Sydney): 823,266
  • Digital Ocean (London): 2,014,432
  • UpCloud (Singapore): 2,121,090

Total Read Gb

  • Vultr (Sydney): 5.8887 Gb
  • Digital Ocean (London): 14.409 Gb
  • UpCloud (Singapore): 15.172 Gb

Total Written Gb

  • Vultr (Sydney): 3.9258 Gb
  • Digital Ocean (London): 9.6057 Gb
  • UpCloud (Singapore): 10.115 Gb

Total Transferred Gb

  • Vultr (Sydney): 9.8145 Gb
  • Digital Ocean (London): 24.014 Gb
  • UpCloud (Singapore): 25.287 Gb

Now I can remove the fileio benchmark test files.

sysbench --test=fileio --file-total-size=10G cleanup
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Removing test files...

Confirm the test file has been deleted

df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G   16G   23G  41% /

Bonus: Benchmark MySQL (on my main server (Vultr), not on Digital Ocean and UpCloud)

I tried to run a command

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=#################################### prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1049: Unknown database 'test'
FATAL: failed to connect to database server!

To fix the error I created a ‘test’ database with Adminer (guide here).

Create Test Table
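The same fix can be done from the MySQL command line instead of Adminer (a minimal sketch; it assumes the same root credentials used above):

# Create the empty 'test' database that sysbench expects
mysql -u root -p -e "CREATE DATABASE test;"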

< Previous – Next >

Read Part 1, Part 2, Part 3 or Part 4

Filed Under: CDN, Cloud, Cloudflare, Digital Ocean, disk, ExactDN, Hosting, Performance, PHP, php72, Scalability, Scalable, Server, Speed, Storage, Ubuntu, UI, UpCloud, VM, Vultr Tagged With: and, can, comparing, Concurrent, cpu, digital ocean, Disk, etc, How, Latency, measure, on, Performance, ubuntu, UpCloud - Part 3 of 4, Users, vm, vultr, you

Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 2 of 4

June 5, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and compare Vultr, Digital Ocean and UpCloud – Part 2 of 4

Read Part 1, Part 2, Part 3 or Part 4

Measure Disk Performance with Bonnie++

Installing Bonnie++ on Ubuntu

apt-get install bonnie++

Read this post on using Bonnie++.

Benchmark disk IO with DD and Bonnie++

Starting Bonnie++

bonnie++ -d /tmp -r 2048 -u username

Bonnie++ Readme.

Disk io with bonnie++ on Vultr/Sydney

Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servername 4G 656 99 308954 68 113706 33 1200 92 188671 30 10237 251
Latency 26067us 119ms 179ms 29139us 26069us 16118us
Version 1.97 ------Sequential Create------ --------Random Create--------
servername -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 1463us 703us 880us 263us 119us 593us
1.97,1.97,servername,1,1528177870,4G,,656,99,308954,68,113706,33,1200,92,188671,30,10237,251,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26067us,119ms,179ms,29139us,26069us,16118us,1463us,703us,880us,263us,119us,593us

Disk io with bonnie++ on Digital Ocean/London

Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servername 4G 699 99 778636 74 610414 60 1556 99 1405337 59 +++++ +++
Latency 17678us 10099us 17014us 7027us 3067us 2366us
Version 1.97 ------Sequential Create------ --------Random Create--------
servername -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 1243us 376us 611us 108us 59us 181us
1.97,1.97,servername,1,1528186398,4G,,699,99,778636,74,610414,60,1556,99,1405337,59,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,17678us,10099us,17014us,7027us,3067us,2366us,1243us,376us,611us,108us,59us,181us

Disk io with bonnie++ on UpCloud/Singapore

Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servername 4G 1014 99 407179 24 366622 32 2137 99 451886 17 +++++ +++
Latency 11297us 54232us 16443us 4949us 44883us 1595us
Version 1.97 ------Sequential Create------ --------Random Create--------
servername -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 264us 340us 561us 138us 66us 327us
1.97,1.97,servername,1,1528226703,4G,,1014,99,407179,24,366622,32,2137,99,451886,17,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,11297us,54232us,16443us,4949us,44883us,1595us,264us,340us,561us,138us,66us,327us

Now read this site on how to make sense of this data

< Previous – Next >

Read Part 1, Part 2, Part 3 or Part 4

Filed Under: CDN, Cloud, Cloudflare, Digital Ocean, disk, Domain, ExactDN, HTTPS, Performance, PHP, php72, Scalability, Scalable, SEO, Ubuntu, UI, UpCloud, VM, Vultr, Wordpress Tagged With: and, can, comparing, Concurrent Users etc, cpu, Digital Ocean and UpCloud - Part 2 of 4, Disk, How, Latency, measure, on, Performance, ubuntu, vm, vultr, you

Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 1 of 4

June 2, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and compare Vultr, Digital Ocean and UpCloud – Part 1 of 4. Update: I moved my domain to UpCloud.

Update (June 2018): I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

Upcloud Site Speed in GTMetrix

Comparing Digital Ocean/Vultr and UpCloud Disk IO

I have a number of guides on moving away from CPanel and setting up VMs on AWS, Vultr or Digital Ocean (all in search of extra performance), but how do you know whether a server’s performance is OK, apart from running GTmetrix and other external site benchmarking tools?

This post is split up as it was too long.

Read Part 1, Part 2, Part 3 or Part 4

Spoiler: It all depends on where your server is located and what you do with it (Tweaks will improve the performance).

P.S This is NOT a paid endorsement or conclusive test (just a quick benchmark/review).

What does your server do?

You need to know what your server does 24/7 and what resources the services need.

I use htop to view real-time and historical usage data for each process.

htop
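If htop is not already present, it can be installed from the Ubuntu repositories (a minimal example):

sudo apt-get install htop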

Tweaking Advice

A friend gave me good advice regarding tweaking a cheap host to get good performance:

yeah but you are trying to get speed out of budget hosting. Good, fast, cheap, pick 2.

— Kerry Hoath (@khoath) June 2, 2018

I am not a fan of just throwing more money at a host and expecting better performance. Hosts have unique features and cons; there is no shortage of hosts or host cons.

How can you run synthetic benchmarks to determine comparable performance metrics?

WARNING: Comparing synthetic benchmarks can be far removed from real-world speeds. Benchmark results below were from 3 different servers I have on 3 different hosts in three different locations (the only thing the same was the use of Ubuntu 16.04 $5/m servers). These results are not scientific and should not be used to compare host providers. Benchmark runs were one-off (not averages over multiple timezones/days).

Disk Performance

Speaking of disk performance I noticed this the other day on the RunCloud blog. Faster than SSD (UpCloud)?

UpCloud Faster-than-SSD Cloud Hosting Server (Promo Code Inside)

RunCloud is a server management console that can interface with your domains (read my old review here). I don’t use RunCloud but it is great for those who need a GUI to help manage VMs via a dashboard. However, I prefer to know what is going on under the hood. I have investigated Webmin in the past though.

Let’s do a quick IO benchmark test between UpCloud, Digital Ocean and Vultr on similarly low-end $5/m servers.

Good advice on command line benchmarking tools from a friend.

depends on what sort of load you want to simulate. iozone is old but reliable. bonny might give you more figures you want.

— Kerry Hoath (@khoath) June 2, 2018


Installing iozone to test disk performance

I searched for a post on using iozone (Thanks thegeekstuff).  I will be reviewing the “Writer report” and “Reader report”. Read more about iozone here.

View the iozone page for how to break down results.

iozone results breakdown

(image snip from http://www.iozone.org/)

Install iozone on Ubuntu

sudo apt-get install iozone3

Run an iozone disk test and output the results to a spreadsheet.

iozone -a -b iozone.xls

Now let’s run a Read/Write test on Vultr, Digital Ocean and UpCloud. Multiple runs were not performed; this is not a scientific test, just a simple benchmark (as is, ignoring server load and local infrastructure/timezone load).

iozone Benchmark results for Vultr “Read” (Sydney)

 “4”  “8”  “16”  “32”  “64”  “128”  “256”  “512”  “1024”  “2048”  “4096”  “8192”  “16384”
64 2133730 3363612 4274062 4564786 6421025
128 2248149 3536566 4135958 7082197 4135958 11720614
256 1884399 2699161 3879045 3667079 5971678 5687020 5687020
512 3140488 3736016 3684733 4262523 4610256 2638816 5067142 5684095
1024 1617808 1939207 3411938 3999762 4048778 4614246 3083680 5885083 6609617
2048 1926510 2569678 4423683 4997618 3937075 459605 2896324 3542524 4971585 4707314
4096 1701683 2151300 4209920 5001700 4751325 4869845 5389246 3647681 4928521 6207035 4347346
8192 2063424 2329346 3203763 2937280 3221485 3232699 3626431 3650706 3789200 4110603 3715045 4350542
16384 1738553 2778362 3397613 3679205 3693442 3171501 3524291 3393586 3004024 3552531 3456574 2693845 2488861
32768 0 0 0 0 2952894 3537153 3574875 3768155 4719613 3890280 3394995 2735222 2542914
65536 0 0 0 0 4057489 3610789 3619967 3800078 3275327 3591212 3607188 1770426 2826659
131072 0 0 0 0 3552270 1890742 5275167 3727339 3527607 1753893 3234736 2341111 1378601
262144 0 0 0 0 3798586 1302021 1491429 3712825 3228816 3757963 3715510 2592485 2481061
524288 0 0 0 0 2758756 2487923 3705741 1807328 2118309 3675988 3196367 3394330 2396842

iozone Benchmark results for Digital Ocean “Read” (London)

 “4”  “8”  “16”  “32”  “64”  “128”  “256”  “512”  “1024”  “2048”  “4096”  “8192”  “16384”
64 4564786 7100397 9006179 10402178 12902017
128 4717434 7082197 8548124 9795896 10567140 10779307
256 4840911 7073132 8271916 9868433 10148242 10651598 1E+07
512 4742616 6909408 8140399 9304292 9638369 10044089 1E+07 10044089
1024 4249053 5917516 6208343 7537599 9300377 10454984 7E+06 7113161 9946527
2048 3885431 6967792 6603549 6845629 10401883 9808036 9E+06 7903836 9308497 7817519
4096 2506983 5953231 6263611 6953144 7774379 6225028 6E+06 8081580 7683972 8081580 8240513
8192 3665114 4850463 5479317 6141364 6277120 6108608 6E+06 6569983 5732541 7166033 6633402 5479317
16384 3673501 4828584 5416182 6187150 6614761 6298872 6E+06 6430310 5984033 6402750 6046159 4791883 3405527
32768 0 0 0 0 4692542 6140929 6E+06 6295642 5231224 6545707 5781108 4513475 3702577
65536 0 0 0 0 6315430 5830131 6E+06 6444695 6219125 6473838 5338595 4248118 3679324
131072 0 0 0 0 6130002 6461496 6E+06 5958068 5983423 6387547 6138078 3994888 3602079
262144 0 0 0 0 6456746 6323727 6E+06 6504146 6390176 6486151 6433963 3955165 3654188
524288 0 0 0 0 1667337 6381456 6E+06 6445708 6448714 6421071 5981200 4155185 3770740

iozone Benchmark results for UpCloud “Read” (Singapore)

 “4”  “8”  “16”  “32”  “64”  “128”  “256”  “512”  “1024”  “2048”  “4096”  “8192”  “16384”
64 6421025 6421025 10821524 12902017 15972885
128 4889281 6406138 9129573 10779307 14200794 14200794
256 5320671 3879045 10758322 8815202 10245071 12812277 12228612
512 4305250 5115422 8844453 8234036 7091952 8394979 7540170 10235583
1024 4339202 4762630 5821271 6163794 6819511 4674510 6479979 8183918 10230845
2048 4204968 5319484 5800851 5816563 6243566 6378005 5953632 6851089 7940367 8229438
4096 4526013 5556581 4817948 5404504 7301864 5759634 5810280 6007355 6919538 8620945 6281934
8192 4298295 5019093 5927357 6036702 6781341 6082655 5855636 6527546 6553692 6792065 6466126 4437634
16384 4282172 5849558 6313919 6635840 6741958 6657054 6423097 5536622 6558575 6442970 4527032 3784777 3901898
32768 0 0 0 0 5825460 5423408 6504198 6665385 6365329 6426343 5263076 3718605 3705971
65536 0 0 0 0 6908075 6623116 6493259 6609738 6311805 6483610 5489674 4035982 3561526
131072 0 0 0 0 5650180 5718949 2465429 5391253 3495911 5784844 5367408 3733490 3582175
262144 0 0 0 0 6814627 6691250 6189661 5906786 6081645 5799913 5247919 4121250 3637601
524288 0 0 0 0 6404764 6309263 5673979 5751609 6288245 6305103 5978680 3911984 3767116

iozone Benchmark results for Vultr “Write” (Sydney)

 “4”  “8”  “16”  “32”  “64”  “128”  “256”  “512”  “1024”  “2048”  “4096”  “8192”  “16384”
64 289322 532815 507625 429630 566551
128 398921 465304 434078 417212 669577 821147
256 530031 613985 820398 474937 891956 815414 370025
512 387576 754083 709019 819085 702295 609421 924123 496091
1024 297233 448522 716089 923488 854073 817340 1203137 1072453 601636
2048 408697 634655 695383 1358134 549657 1295458 821154 797520 964207 258493
4096 236150 433804 1215774 1245025 820832 809958 1371339 914269 921083 1004682 1481431
8192 611113 666677 806286 715219 779825 824294 875947 870091 1046378 791192 1023592 453248
16384 435454 706149 718313 845499 893495 888068 812778 842885 820591 941120 839610 862672 406590
32768 0 0 0 0 465196 786067 938881 627294 890917 968147 872369 871329 842843
65536 0 0 0 0 515057 790172 937568 915601 897235 867197 907562 852002 743856
131072 0 0 0 0 501091 480492 813147 870886 880239 805333 684630 1117578 633185
262144 0 0 0 0 387126 323185 323656 473258 405744 369599 422554 468992 453563
524288 0 0 0 0 325588 380450 392965 451608 303255 355148 386250 432054 416512

iozone Benchmark results for Digital Ocean “Write” (London)

 “4”  “8”  “16”  “32”  “64”  “128”  “256”  “512”  “1024”  “2048”  “4096”  “8192”  “16384”
64 831569 566551 1279447 1363961 1392258
128 652488 1319723 1421023 990891 1663139 1561553
256 1185399 1152323 1534342 1598292 1826695 1707589 1514860
512 1166599 1296159 1399189 1620980 1620980 1361920 1589779 1672748
1024 1079190 1321200 1584972 1917562 1592612 1701108 1718120 1462960 1643814
2048 1210394 1470172 1621719 1550584 1796378 1643753 1713598 1759581 1649117 1488257
4096 916513 1287575 1574718 1406594 1742237 1734148 1652418 1583280 1599346 1661045 1533532
8192 1109745 1318748 1178567 1544201 1502340 1371492 1466747 1499521 1479759 1564878 1291292 1347609
16384 1106205 1282084 1374037 1503649 1429398 1461407 1496119 1578132 1547289 1333431 1203371 1198815 1501316
32768 0 0 0 0 1270914 1406589 1513114 1468226 1558303 1552038 1516336 1443280 1440360
65536 0 0 0 0 1319322 1327984 1311504 1411955 1266988 1359645 1386446 1347092 1368295
131072 0 0 0 0 1100658 1229326 1227197 1318631 1265552 1233306 1227747 1237896 1233502
262144 0 0 0 0 1167160 1064078 1155828 1185185 1086152 1193673 1080872 1062611 1141960
524288 0 0 0 0 977835 1124816 1052757 1219183 1128972 1140177 1091954 1141635 1132063

iozone Benchmark results for UpCloud “Write” (Singapore)

 “4”  “8”  “16”  “32”  “64”  “128”  “256”  “512”  “1024”  “2048”  “4096”  “8192”  “16384”
64 1143223 1255511 1562436 1452528 1279447
128 1451764 1406136 1543594 1504659 1852520 1749872
256 1642294 1829808 1970871 1855098 1802167 1952947 2000242
512 1537424 1854787 1801873 2294796 1983258 2124526 1895721 1417662
1024 1434138 1553442 1609925 1931359 2098375 2044438 1872419 1768345 1892218
2048 1562145 1901771 1817281 1848169 1967097 1296240 2267786 2081497 1915768 2007554
4096 1625372 1966378 1924741 1342092 1950306 2078175 1914873 1459656 1995152 2102849 1326855
8192 1444062 1808330 1956503 1924397 2127300 2042328 2135630 1986478 2062557 2061319 1337016 1812049
16384 1667066 1820248 1898495 2051339 2012530 2111080 2119806 1491217 2060875 1974254 1934789 1815823 1921911
32768 0 0 0 0 2057506 1454537 2075621 2070899 1869795 2052896 1892347 1855382 1873440
65536 0 0 0 0 2067127 2077673 2088994 2179809 2087471 2099108 1904723 1642505 1832204
131072 0 0 0 0 1234663 1824959 1304340 1775514 1287481 1560379 1631992 1085609 1675467
262144 0 0 0 0 685774 808487 823824 662524 681762 548308 814946 645663 732176
524288 0 0 0 0 547296 517384 503422 521173 538714 518429 528950 529593 512944

Here is my quick, unscientific take on the one-pass benchmark results above.

Charts: Vultr (Read), Vultr (Write), Digital Ocean (Write), UpCloud (Read), UpCloud (Write)

These results need some decoding.

Next >>

Read Part 1, Part 2, Part 3 or Part 4

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]


Revision History

v1.2 added the fact that I Moved to UpCloud.

v1.1 Re ran iozone -a -b iozone.xls on all servers.

v1.0 Initial post

Filed Under: CDN, Cloud, Cloudflare, Digital Ocean, disk, ExactDN, HTTPS, NGINX, Performance, PHP, php72, Scalability, Scalable, Storage, Ubuntu, UpCloud, VM, Vultr, Wordpress Tagged With: and, comparing, Concurrent, cpu, Digital, Disk Latency, etc, Measuring, Ocean, on, Performance, ubuntu, UpCloud, Users, vm, vultr

Setting up a website to use Cloudflare on a VM hosted on Vultr and Namecheap

March 13, 2018 by Simon

This guide will show how you can set up a website to use Cloudflare on a VM hosted on Vultr and Namecheap

I have a number of guides on moving hosting away from CPanel and setting up VMs on AWS, Vultr or Digital Ocean, along with installing and managing WordPress from the command line. This post will show how to let Cloudflare handle the DNS for the domain.

Update 2018: For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

Snip from here “Cloudflare’s enterprise-class web application firewall (WAF) protects your Internet property from common vulnerabilities like SQL injection attacks, cross-site scripting, and cross-site forgery requests with no changes to your existing infrastructure.”

Buy a Domain 

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Cloudflare Benefits (Free Plan)

  • DDoS Attack Protection (Huge network to absorb attacks; DDoS attacks over 600Gbps are no problem for our 15 Tbps networks)
  • Global CDN
  • Shared SSL certificate (I disabled this and opted to use my own)
  • Access to audit logs
  • 3 page rules (maximum)

View paid plan options here.

Cloudflare CDN map

Cloudflare says its CDN can load assets up to 2x faster and use 60% less bandwidth from your servers by delivering assets from 127 data centres.

Cloudflare Global Network

Setup

You will need to sign up at cloudflare.com

Cloudflare

After you create an account you will be prompted to add a site.

Add Site

Cloudflare will pull your public DNS records to import.

Query DNS

You will be prompted to select a plan (I selected free)

Plan Select

Verify DNS settings to import.

DNS Import

You will now be asked to change your DNS nameservers with your domain reseller

DNS Nameservers

TIP: If you have an SSL cert (e.g. Let’s Encrypt) already set up, head to the Crypto section and select “Full (Strict)” to prevent ERR_TOO_MANY_REDIRECTS errors.

Strict SSL

Cloudflare UI

I asked Twitter if they could kindly load my site so I could see if Cloudflare dashboard/stats were loading.

Could I kindly ask if you are reading this that you visit https://t.co/9x5TFARLCt, I am writing a @Cloudflare blog post and need to screenshot stats. Thanks in advance

— Simon Fearby (Developer) (@FearbySoftware) March 13, 2018

The Cloudflare CTO responded.  🙂

Sure thing 🙂

— John Graham-Cumming (@jgrahamc) March 13, 2018

Confirm the Cloudflare link to a domain from the OSX command line

host -t NS fearby.com
fearby.com name server dane.ns.cloudflare.com.
fearby.com name server nora.ns.cloudflare.com.

Caching Rule

I set up the following page rules to cache everything for 8 hours, except WordPress admin pages.

Page Rules

“fearby.com/wp-*” Cache level: Bypass

“fearby.com/wp-admin/post.php*” Cache level: Bypass

“fearby.com/*” Cache Everything, Edge Cache TTL: 8 Hours

Cache Results

Cache appears to be sitting at 50% after 12 hours. Having cached dynamic pages out there is OK unless I need to fix a typo; then I need to log in to Cloudflare and clear the cache manually (or wait 8 hours).

Performance after a few hours

DNS times in GTmetrix have now fallen to sub 200ms (YSlow is now a respectable A; it was a C before). I just need to wait for caching and minification to kick in.

DNS Improved

webpagetest.org results are awesome

See here: https://www.webpagetest.org/result/180314_PB_7660dfbe65d56b94a60d7a604ca250b3/

  • Load Time: 1.80s
  • First Byte 0.176s
  • Start Render 1.200s

webpagetest

Google Page Speed Insights Report

Mobile: 78/100

Desktop: 87/100

Check with https://developers.google.com/speed/pagespeed/insights/

Update 24th March 2018 Attacked?

I noticed a spike in traffic (incoming requests and threats) on the 24th of March 2018.

I logged into Cloudflare on my mobile device and turned on Under Attack Mode.

Under Attack Flow

Cloudflare was now adding a delay screen in the middle of my initial page load. Read more here. A few hours after the attack started it was over.

After the Attack

I looked at the bandwidth and found no increase in traffic hitting my origin VM. Nice.

cloudflare-attack-001

Thanks, Cloudflare.

Cloudflare Pros

  • Enabling Attack mode was simple.
  • Soaked up an attack.
  • Free Tier
  • Many Reports
  • Option to force HTTPS over HTTP
  • Option to ban/challenge suspicious IPs and set challenge timeframes.
  • Ability to set up IP firewall rules and Application Firewalls.
  • User-agent blocking
  • Lock down URLs to IPs (pro feature)
  • Option to minify JavaScript, CSS and HTML
  • Option to accelerate mobile links
  • Brotli compression on assets served.
  • Option to enable the BETA Rocket Loader for JavaScript performance tweaks.
  • Run JavaScript service workers from the 120+ CDN locations
  • Page/URL rules to perform custom actions (redirects, skip cache, encryption etc)
  • HTTP/2 on, IPv6 on
  • Option to set up load balancing/failover
  • CTO of Cloudflare responded on Twitter 🙂
  • Option to enable rate limiting (charged at $0.05 per 10,000 hits)
  • Option to block countries (pro feature)
  • Option to install apps in Cloudflare (like Google Analytics, …)

Cloudflare Cons

  • No more logging into NameCheap to perform DNS management (I now go to Cloudflare; Namecheap are awesome).
  • Cloudflare Support was slow/confusing (I ended up figuring out the redirect problem myself).
  • Some sort of “verify Cloudflare setup/DNS/CDN access” check would be nice. After I set this up my GTmetrix load times were the same and I was not sure if DNS needed to replicate. Changing minify settings in Cloudflare did not seem to take effect.
  • WordPress draft posts are being cached even though page rules block wp-admin page caching.
  • Would be nice to have an automatic Under Attack mode
  • Not all sub-domains were transferred in the setup (I did not know for weeks)

Cloudflare status

Check out https://www.cloudflarestatus.com/ for status updates.

Don’t forget to install the CloudFlare Plugin for WordPress if you use WordPress.

More Reading

Check out my OWASP Zap and Kali Linux self-application Penetration testing posts.

I hope this guide helps someone.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.8 host Command from the OSX CLI

v1.7 Subdomain error

v1.6 Cloudflare Attack

v1.5 WordPress Plugin

v1.4 More Reading

v1.3 added WAF snip

v1.2 Added Google Page Speed Insights and webpage rest results

v1.1 Added Y-Slow

v1.0 Initial post

Filed Under: Analytics, App, Cache, CDN, Cloud, Cloudflare, DNS, Domain, Hosting, LetsEncrypt, Marketing, Secure, Security, SEO, Server, VM, Vultr, Website, Wordpress Tagged With: a, and, Cloudflare, hosted, namecheap, on, Setting, to, up, use, vm, vultr, website

Speeding up WordPress with the ewww.io ExactDN CDN and Image Compression Plugin

December 2, 2017 by Simon

Below is my quick blog post on using the EWWW IO ExactDN CDN plugin in WordPress to set up ExactDN (a Global Distribution Network (CDN)) to distribute images to my site’s visitors and shrink (optimize) images in posts.

I have blogged before on speeding up WordPress that has involved moving servers away from CPanel domains to self-managed servers (e.g on Digital Ocean or Vultr), using Lazy Load image plugins like BJ Lazy Load, Optimize images automatically in WordPress with EWWW.io and scaling and moving servers closer to your customers.

For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

Today I am going to set up the https://www.ewww.io/resize/ ExactDN (CDN) delivery network. I am paying for this myself (and this is an unbiased review).

FYI: I use Ubuntu servers and not Windows.

Know your starting point. 

“If you fail to plan, you are planning to fail!” – Benjamin Franklin

In my original post about Speeding up WordPress, I used the webpagetest.org site to test my site’s response times; I was getting an embarrassing 21 second load time and 6 second first-byte time. I have worked to speed up WordPress by moving WordPress to a self-managed server (away from CPanel) and using the BJ Lazy Load plugin and the awesome ewww.io image optimize plugin; now I get about 4-5 seconds.

I am hoping adding a CDN will make things faster. Over two-thirds of what my blog delivers is images, which is perfect for a CDN; this is why I am trying it out.

Image type cdn

TIP: Check where your customers/readers are located, and how many are new versus returning visitors. Do you need a CDN to deliver content (images) closer to your customers/readers, or do you need to move your web server somewhere else? The more you know, the more you can help them. Worst case you will be supporting a positive experience (and potentially turning a one-time visitor into a returning visitor).

I looked at my Google Analytics data to see where my visitors are. Whether good or bad, they are all over the world (hello!).

World

Other data is available in Google Analytics. I can see traffic over the last few years is growing and I am getting more returning visitors; now is the time to ensure my site is ready for more traffic and returning visitors.

Data

Note: The fall in traffic in the Audience overview (right-hand side of the left image) is the unfinished month (not a reader fall off).

Personally, I set a goal to take my high page bounce rate of 90% way lower (at present I am at about 80% and falling (good)) and my page read time has gone from 40 seconds to 1 minute 40 seconds. Every bit you can do will help create a positive experience and help your visitors. I can see from the data above that the content is being read, I am building returning visitors and they are geographically spread out. A CDN will be great. After you know where your visitors are, it is good to know the times of day that your visitors are hitting your site. Lucky for me it is spread evenly over a 24 hour period.

Data Quantity

FYI: My servers outgoing data (last 30 days), not huge but BJ LazyLoad and image optimization may be helping.

Outbound

Traffic Forecast

Do you know the forecasted growth of your website?

Site Growth

Measure Before Optimizing

Before I started to optimize my WordPress site (hosted on a shared CPanel server) I had the following Web Page Test score. I tested from Singapore as that was where my server was originally (and the closest location to me).

TIP: Read more about Performance, First Byte, Start Render and Complete scores at my blog post here.

Web Page Performance Test Results (Before Optimizations):

  • Load Time: 23.672s
  • First Byte: 6.743s
  • Start Render: 11.8300s
  • Speed Index: 15024
  • Requests: 132/164 (Document complete v Fully Loaded)
  • Bytes: 3,346KB/3,454KB  (Document complete v Fully Loaded)

Quick Web Page Performance Score Card (Before Optimizations):

  • First Byte: F (I should have captured the subscores)
  • Keep Alive Enabled: F (I should have captured the subscores)
  • Compress Transfer: F (I should have captured the subscores)
  • Compress Images: A (I should have captured the subscores)
  • Cache Static Content: X (I should have captured the subscores)
  • Effective use of CDN: X (I should have captured the subscores)

My initial scores were bad across the range of tests (before optimizations). On the upside, I was already manually compressing images with a desktop tool before uploading them, which earned an “A”, but overall this scorecard was really bad.

Here are the results after my quick optimizations (EWWW.io image compression, moving servers, reorganizing the site, minifying and lazy loading images).

Web Page Performance Test Results (After Simple Optimizations):

  • Load Time: 8.823s (down 14.849s)
  • First Byte: 3.5533s (down 3.1897s)
  • Start Render: 5.480s (down 6.35s)
  • Speed Index: 5594 (down 9430)
  • Requests: 73/76 (Document complete v Fully Loaded) (down 59/88)
  • Bytes: 848KB/855KB  (Document complete v Fully Loaded) (down 2,498KB/2,599KB)

Quick Web Page Performance Score Card (After Simple Optimizations):

  • First Byte: F (I should have captured the subscores)
  • Keep Alive Enabled: A (I should have captured the subscores)
  • Compress Transfer: A (I should have captured the subscores)
  • Compress Images: A (I should have captured the subscores)
  • Cache Static Content: B (I should have captured the subscores)
  • Effective Use of CDN: X  (I should have captured the subscores)

My page speed test results

Even at around 4 seconds, a web page “First Byte” is considered not good. My brain says I want sub-1-second, but I doubt that is achievable with WordPress served from thousands of miles away over SSL (read here about scalability).

I know HTTPS and a non-geographically-favourable server location can add around half a second to responses. SSL adds processing overhead and latency. If you only want speed, don't set up SSL; but if you want SEO and security, then set up SSL.
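
If you want a rough first-byte measurement from your own machine between full Web Page Test runs, curl's timing variables can help. This is a minimal sketch (assuming curl is installed; substitute your own URL, and note the numbers depend heavily on your location and network):

# Rough timing breakdown for a single HTTPS request (DNS, TLS, first byte, total).
# Results vary by network and vantage point; compare runs from the same location.
curl -so /dev/null -w 'DNS: %{time_namelookup}s  TLS: %{time_appconnect}s  TTFB: %{time_starttransfer}s  Total: %{time_total}s\n' https://fearby.com/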

Location, location, location

WebPageTest.org test reveals there is no effective use of Content Delivery Networks (CDN) on fearby.com (that’s why I am about to install EWWW.io ExactDN).

ExactDN (Content Delivery Network)

I did try to set up a number of caching and CDN plugins in the past (e.g. MaxCDN, W3 Total Cache, WP Fastest Cache, Cache Enabler, WP Rocket, WP Super Cache, etc.) but they either made results worse or were impossible to set up.

Now that EWWW.io has a CDN let’s give that a go.

What is EWWW.io’s ExactDN?

You can read more about EWWW.io's two-pronged approach of a single plugin that A) compresses images and B) adds a Content Delivery Network (CDN) here: https://ewww.io/resize/

What is ExactDN

Ensure you read up on EWWW.io's ExactDN here: https://ewww.io/resize/. If you have not used EWWW.io, check out my review of the EWWW.io image compression plugin here first.

Pre Signup

Purchasing and Installing ExactDN

Log in to your EWWW.io account (or sign up and then log in).

Signin

FYI: If you have used EWWW.io's image optimization plugin in WordPress before, then you will see your past purchases here.

Signed in

Now you can go back to the EWWW.io ExactDN product page (https://ewww.io/resize/) and purchase a subscription (Make sure you are logged in with an EWWW.io account before you purchase ExactDN).

Pre purchase

I purchased an ExactDN subscription for $9.00 per month (with a $1 signup fee for the cloud compression service).

Order

Purchase confirmation screen.

Order

Post-purchase, I was advised to find my “Site Address (URL)” in WordPress and add it to the EWWW.io Manage Sites screen.

Add site to the plugin

I now noticed I had a CDN for my domain ( fearby-com.exactdn.com ). Nice.

CDN Site Created

EWWW.io said to tick the CDN option in the Image Resize area of the EWWW.io plugin. Before I do that, though, I will update WordPress core and plugins, as they are out of date.

Update

TIP: I update WordPress Core and WordPress plugins via command line (my guide here).
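
For reference, the WP-CLI commands I run look roughly like this (a sketch assuming WP-CLI is installed and you run it from the WordPress install directory; always back up first):

# Update WordPress core, the database schema, plugins and themes via WP-CLI.
cd /www
wp core update
wp core update-db
wp plugin update --all
wp theme update --all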

Backup WordPress (just in case)

It is always a good idea to back up your website first. I logged into my Ubuntu server and checked the size of my site before backing it up.

du -hs ./www
513M    ./www

I copied the website folder (from “/www/” to “/www-backup”)

cp -rTv /www/ /www-backup

I confirmed the copied folder size (same, good)

du -hs ./www-backup/
513M    ./www-backup/

I dumped all databases with the MySQL dump command

mysqldump --all-databases > /my-sql-backup.sql -u root -p
> Enter password: ##############################

ls my-sql*.sql -al
-rw-r--r-- 1 removed removed 75605938 Dec  1 21:11 my-sql-backup.sql

While I was there I compressed the SQL backup file (it is just a text file).

tar -czvf my-sql-backup.sql.tar.gz my-sql-backup.sql
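
Optionally, sanity-check the backup before relying on it (this only lists the archive contents and records a checksum; it is not a substitute for a proper test restore):

# Confirm the SQL dump made it into the archive intact.
tar -tzvf my-sql-backup.sql.tar.gz
# Record a checksum so you can verify the file after copying it off the server.
sha256sum my-sql-backup.sql.tar.gz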

Checking the CDN Status

I checked the DNS replication for fearby-com.exactdn.com with this DNS checker. It's set up and ready to go across the globe 🙂

A quick CNAME check reveals its upstream provider is fearbycom-8ba8.kxcdn.com. KeyCDN has been around since 2012. It is great that EWWW.io has partnered with KeyCDN and combined it with their image compression magic in a single WordPress plugin.
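
If you prefer the terminal to a web-based DNS checker, dig can show the same information (a quick sketch assuming dig is installed; your ExactDN hostname will differ):

# Show the CNAME the ExactDN hostname points at (the KeyCDN upstream).
dig +short CNAME fearby-com.exactdn.com
# Query a specific public resolver to spot DNS replication lag.
dig @8.8.8.8 +short fearby-com.exactdn.com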

Configuring the EWWW.io plugin in WordPress.

Before you enable the CDN below, check https://www.whatsmydns.net/ to see if your CDN hostname has replicated around the world (Australia can be a bit slow at DNS replication from time to time). Wait at least 10 minutes before proceeding.

Click the settings in your EWWW Image Optimizer Plugin.

Configure

Now click the Resize Settings tab and click Enable CDN in the EWWW Image Optimizer plugin screen.

Enable Exact DN

Don’t forget to click Save Changes

Save

Testing the CDN

https://www.webpagetest.org NOW reports that my site is using a CDN 🙂

FYI: I have always run Web Page Test from Singapore and I have done the same here to compare apples to apples. Singapore is not an ideal test location for a server hosted in Australia; it is about 200ms away, and the added layer of SSL adds latency to the connection (read here). It is a real-world (worst case) test though.

CDN's Used:
fearby.com : 
fearby-com.exactdn.com : KeyCDN
fonts.googleapis.com : Google
fonts.gstatic.com : Google
static.addtoany.com : Cloudflare
www.googletagmanager.com : Google
pagead2.googlesyndication.com : Google
www.youtube.com : Google
adservice.google.com.sg : Google
adservice.google.com : Google
googleads.g.doubleclick.net : Google
www.google-analytics.com : Google
s.ytimg.com : Google
stats.g.doubleclick.net : Google
fearby-com.disqus.com : Fastly
ssl.google-analytics.com : Google

FYI: https://www.webpagetest.org does want all other static content to be on a CDN before it will give a score higher than 42 out of 100; sorry, I did not capture a subscore before enabling the CDN.

I was expecting a green A here for CDN, but Web Page Test has given me more items to investigate to make sure WordPress plugin assets are also fast (minify them or move them to the CDN). It appears WordPress includes are my next target for optimization.

Web Page Test indicates the following WordPress assets should also be on the CDN:

FAILED - https://fearby.com/wp-content/plugins/youtube-embed-plus/styles/ytprefs.min.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/simple-social-icons/svgxuse.js?ver=1.1.21
FAILED - https://fearby.com/wp-content/plugins/youtube-embed-plus/scripts/ytprefs.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/add-to-any/addtoany.min.js?ver=1.0
FAILED - https://fearby.com/wp-content/plugins/crayon-syntax-highlighter/js/min/crayon.min.js?ver=_2.7.2_beta
FAILED - https://fearby.com/wp-includes/js/jquery/jquery-migrate.min.js?ver=1.4.1
FAILED - https://fearby.com/wp-includes/js/jquery/jquery.js?ver=1.12.4
FAILED - https://fearby.com/wp-content/plugins/contact-form-7/includes/js/scripts.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/themes/genesis/lib/js/menu/superfish.args.js?ver=2.5.3
FAILED - https://fearby.com/wp-content/plugins/crayon-syntax-highlighter/css/min/crayon.min.css?ver=_2.7.2_beta
FAILED - https://fearby.com/wp-content/themes/genesis/lib/js/skip-links.js?ver=2.5.3
FAILED - https://fearby.com/wp-includes/js/comment-reply.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-includes/js/hoverIntent.min.js?ver=1.8.1
FAILED - https://fearby.com/wp-content/themes/genesis/lib/js/menu/superfish.js?ver=1.7.5
FAILED - https://fearby.com/wp-includes/js/jquery/ui/tabs.min.js?ver=1.11.4
FAILED - https://fearby.com/wp-content/themes/news-pro/js/global.js?ver=3.2.2
FAILED - https://fearby.com/wp-content/themes/news-pro/js/responsive-menus.min.js?ver=3.2.2
FAILED - https://fearby.com/wp-includes/js/jquery/ui/core.min.js?ver=1.11.4
FAILED - https://fearby.com/wp-content/themes/news-pro/style.css?ver=3.2.2
FAILED - https://fearby.com/wp-includes/js/jquery/ui/widget.min.js?ver=1.11.4
FAILED - https://fearby.com/wp-content/themes/news-pro/js/jquery.matchHeight.min.js?ver=3.2.2
FAILED - https://fearby.com/wp-content/plugins/contact-form-7/includes/css/styles.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/youtube-embed-plus/scripts/fitvids.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-includes/js/wp-embed.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/bj-lazy-load/js/bj-lazy-load.min.js?ver=2
FAILED - https://fearby.com/wp-content/plugins/disqus-comment-system/media/js/count.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-includes/js/wp-emoji-release.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/wp-seo-html-sitemap/style.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/simple-social-icons/symbol-defs.svg
FAILED - https://fearby.com/wp-includes/css/dashicons.min.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/themes/news-pro/images/favicon.ico
FAILED - https://fearby.com/wp-content/plugins/genesis-tabs/style.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/simple-social-icons/css/style.css?ver=2.0.1
FAILED - https://fearby.com/wp-content/plugins/add-to-any/addtoany.min.css?ver=1.14

It would be a huge effort to manually track and keep static plugin and theme files on a CDN. I will ask the EWWW.io developer whether this is possible in a future version (that would be nice). The developer did promptly point me here to opt in to using the CDN to deliver CSS, JS, etc.

Is that it? It can’t be that simple!

I can load my site at https://fearby.com and my images load from https://fearby-com.exactdn.com 🙂 I am impressed: blog posts are now loading images from the CDN network and I did not have to edit a single post. I did have to subscribe to ExactDN and tick a checkbox in WordPress, though.
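
A quick way to confirm the rewrite without opening browser developer tools is to grep the rendered front page for the ExactDN hostname (a rough check; substitute your own domains):

# List a few rewritten asset URLs and count how many point at the CDN.
curl -s https://fearby.com/ | grep -o 'https://fearby-com\.exactdn\.com[^"]*' | head -n 5
curl -s https://fearby.com/ | grep -c 'fearby-com\.exactdn\.com'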

Here is a sample image (my largest) from this blog post.

https://fearby-com.exactdn.com/wp-content/uploads/2017/10/Infographic-So-you-have-an-idea-for-an-app-v1-3.jpg?strip=all&quality=60&ssl=1
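
Fetching that URL's response headers and size shows what the CDN actually delivers for those quality settings (another rough check; the exact headers you see depend on the CDN edge that answers):

# Response headers for the CDN-hosted image (content type, length, caching).
curl -sI 'https://fearby-com.exactdn.com/wp-content/uploads/2017/10/Infographic-So-you-have-an-idea-for-an-app-v1-3.jpg?strip=all&quality=60&ssl=1'
# Downloaded size in bytes, handy for comparing different quality settings.
curl -so /dev/null -w '%{size_download} bytes\n' 'https://fearby-com.exactdn.com/wp-content/uploads/2017/10/Infographic-So-you-have-an-idea-for-an-app-v1-3.jpg?strip=all&quality=60&ssl=1'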

fearby.com Web Page Test Results – Singapore

Web Page Performance Test Results – Singapore (After Setting up the EWWW.io ExactDN):

  • Load Time: 4.177s (down 4.646s from previous optimizations; my original load time was 23.672s)
  • First Byte: 2.240s (down 1.3133s from previous optimizations; my original First Byte was 6.743s)
  • Start Render: 2.800s (down 2.68s from previous optimizations; my original Start Render was 11.8300s)
  • Speed Index: 3009 (down 2585 from previous optimizations; my original Speed Index was 15024)
  • Requests: 67/68 (Document complete v Fully Loaded) 
  • Bytes: 535KB/538KB  (Document complete v Fully Loaded)

Quick Web Page Performance Score Card – Singapore (After Setting up the EWWW.io ExactDN):

  • First Byte: F (I should have captured the subscores). 
  • Keep Alive Enabled: A (I should have captured the subscores).
  • Compress Transfer: A (I should have captured the subscores).
  • Compress Images: A (I should have captured the subscores).
  • Cache Static Content: B (I should have captured the subscores).
  • Effective Use of CDN: X (I should have captured the subscores). Plugin assets now need to be on the CDN too.

The Web Page Test site does give detailed scores and recommendations if you scroll down or click the score card grades (A-F). Do read the recommendations and see what you may need to do next. I am happy that I now have a CDN via EWWW.io. Clicking the First Byte and CDN buttons at Web Page Test reveals sub-scores that let you see whether you have made improvements; regrettably, I did not know about (and capture) the sub-scores until after installing the CDN (I suggest you do).

Full Dump.

Details
First Byte Time (back-end processing): 0/100
2240 ms First Byte Time
ms Target First Byte Time

Use persistent connections (keep alive): 100/100

Use gzip compression for transferring compressable responses: 100/100
404.6 KB total in compressible text, target size = 404.6 KB - potential savings = 0.0 KB

Compress Images: 100/100
98.7 KB total in images, target size = 98.7 KB - potential savings = 0.0 KB

Use Progressive JPEGs: 100/100
97.4 KB of a possible 97.4 KB (100%) were from progressive JPEG images

Leverage browser caching of static assets: 88/100
FAILED - (No max-age or expires) - https://fearby-com.disqus.com/count.js
FAILED - (8.2 minutes) - https://www.google-analytics.com/analytics.js
FAILED - (15.0 minutes) - https://www.googletagmanager.com/gtag/js?id=UA-93963-1
FAILED - (60.0 minutes) - https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js
WARNING - (1.1 hours) - https://www.google-analytics.com/ga.js
WARNING - (12.0 hours) - https://pagead2.googlesyndication.com/pub-config/r20160913/ca-pub-9241521190070921.js
WARNING - (24.0 hours) - https://fonts.googleapis.com/css?family=Raleway%3A400%2C700&ver=3.2.2
WARNING - (2.0 days) - https://static.addtoany.com/menu/page.js
WARNING - (4.5 days) - https://s.ytimg.com/yts/jsbin/www-widgetapi-vflUJbESo/www-widgetapi.js

Use a CDN for all static assets: 46/100
FAILED - https://fearby.com/wp-content/plugins/simple-social-icons/css/style.css?ver=2.0.1
FAILED - https://fearby.com/wp-content/plugins/add-to-any/addtoany.min.css?ver=1.14
FAILED - https://fearby.com/wp-content/plugins/crayon-syntax-highlighter/css/min/crayon.min.css?ver=_2.7.2_beta
FAILED - https://fearby.com/wp-content/themes/news-pro/style.css?ver=3.2.2
FAILED - https://fearby.com/wp-content/plugins/contact-form-7/includes/css/styles.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/wp-seo-html-sitemap/style.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-includes/css/dashicons.min.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/contact-form-7/includes/js/scripts.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-includes/js/comment-reply.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/crayon-syntax-highlighter/js/min/crayon.min.js?ver=_2.7.2_beta
FAILED - https://fearby.com/wp-includes/js/hoverIntent.min.js?ver=1.8.1
FAILED - https://fearby.com/wp-content/themes/genesis/lib/js/menu/superfish.js?ver=1.7.5
FAILED - https://fearby.com/wp-content/themes/genesis/lib/js/menu/superfish.args.js?ver=2.5.3
FAILED - https://fearby.com/wp-content/themes/genesis/lib/js/skip-links.js?ver=2.5.3
FAILED - https://fearby.com/wp-content/themes/news-pro/js/jquery.matchHeight.min.js?ver=3.2.2
FAILED - https://fearby.com/wp-content/themes/news-pro/js/responsive-menus.min.js?ver=3.2.2
FAILED - https://fearby.com/wp-includes/js/jquery/ui/tabs.min.js?ver=1.11.4
FAILED - https://fearby.com/wp-content/plugins/add-to-any/addtoany.min.js?ver=1.0
FAILED - https://fearby.com/wp-content/themes/news-pro/js/global.js?ver=3.2.2
FAILED - https://fearby.com/wp-includes/js/jquery/ui/core.min.js?ver=1.11.4
FAILED - https://fearby.com/wp-includes/js/jquery/ui/widget.min.js?ver=1.11.4
FAILED - https://fearby.com/wp-content/plugins/bj-lazy-load/js/bj-lazy-load.min.js?ver=2
FAILED - https://fearby.com/wp-content/plugins/youtube-embed-plus/scripts/fitvids.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-includes/js/wp-embed.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/disqus-comment-system/media/js/count.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/youtube-embed-plus/styles/ytprefs.min.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-includes/js/wp-emoji-release.min.js?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-content/plugins/simple-social-icons/symbol-defs.svg
FAILED - https://fearby.com/wp-content/plugins/genesis-tabs/style.css?ver=230d722825ddde2688088d563a906075
FAILED - https://fearby.com/wp-includes/js/jquery/jquery.js?ver=1.12.4
FAILED - https://fearby.com/wp-content/themes/news-pro/images/favicon.ico
FAILED - https://fearby.com/wp-includes/js/jquery/jquery-migrate.min.js?ver=1.4.1
FAILED - https://fearby.com/wp-content/plugins/simple-social-icons/svgxuse.js?ver=1.1.21
FAILED - https://fearby.com/wp-content/plugins/youtube-embed-plus/scripts/ytprefs.min.js?ver=230d722825ddde2688088d563a906075

CDN's Used:
fearby.com : 
fonts.googleapis.com : Google
fonts.gstatic.com : Google
www.googletagmanager.com : Google
static.addtoany.com : Cloudflare
fearby-com.exactdn.com : KeyCDN
pagead2.googlesyndication.com : Google
www.youtube.com : Google
www.google-analytics.com : Google
adservice.google.com : Google
adservice.google.com.sg : Google
googleads.g.doubleclick.net : Google
s.ytimg.com : Google
fearby-com.disqus.com : Fastly
stats.g.doubleclick.net : Google

GT Metrix Page Speed Score

https://gtmetrix.com is giving a good score across the board (86%). It hints that I should optimize JavaScript files and “Remove query strings in static files”, as some proxy servers do not cache URLs with “?” in them. Fortunately that is not a problem with ExactDN, as its servers are configured to handle query strings properly.

GTMetrics

Gtmetrix.com does give some optimization tips too (however, it reports a low CDN score if even a single file is not delivered over a CDN).

I do like the Gtmetrix.com email reports; you can see if your site's performance is degrading.

GTMetrix Summary

Page Speed Insights

I am now looking at the Google PageSpeed Insights test for things to fix next. I think I can tweak my NGINX a little (adding caching). Read more about Google Page Speed Insights here.

FYI: My Google PageSpeed Insights score (desktop), alongside another big local corporate site I tested for comparison.

Page Insight

I am happy with 82 🙂

It appears that if you want to get 100% you need to:

  • Initial response: under 100ms
  • Animation: produce each frame in under 10ms
  • Idle: maximize idle time
  • Load: deliver all content in under 1000ms

Getting More from ExactDN by caching CSS and JS files.

I added the following to my /www/wp-config.php file, as mentioned here: http://docs.ewww.io/article/47-getting-more-from-exactdn. This will serve more resources from the CDN.

define( 'EXACTDN_ALL_THE_THINGS', true );
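
If you prefer managing wp-config.php from the command line, WP-CLI can add the same constant (a sketch assuming WP-CLI is installed; --raw stores the value as a boolean rather than a quoted string):

# Add the ExactDN constant without editing wp-config.php by hand.
wp config set EXACTDN_ALL_THE_THINGS true --raw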

I refreshed my site in a browser and now CSS, JS and fonts are loaded from the CDN too.

All

To reduce the number of files served from my website, I re-enabled the Fast Velocity Minify plugin in WordPress and pointed it at my CDN (https://fearby-com.exactdn.com/).

cdn-minify

Now I am getting a GTmetrix Page Speed Score of 93% (A) and a YSlow Score of 74% (C), with a 2.6-second load and a much lower 35 requests (after minification and pointing the minified files at the CDN).

WOW.

Minify

And WebPageTest.org is reporting Effective use of CDN 🙂 Awesome.

Effective use of CDN

I think I'm done (the ExactDN CDN, the image optimizer and the other tweaks have worked their magic).

How do I compare to other sites?

Given that Apple's, Microsoft's and NBC's speed scores are worse than mine, I'm happy for now. 🙂

GTmetrix Graphs

The GTmetrix graphs show the improvement. The YSlow report does indicate I could reduce the number of DOM elements and tweak plugins to speed things up further, but I'd rather keep my design for SEO and not play with plugins in case they break.

I could tweak the server side (NGINX/MySQL/cache or DNS) but I don't need to.

Conclusion

I still can't believe that setting up a CDN is essentially a one-click solution plus a single wp-config.php line (after you subscribe). Best of all, the BJ Lazy Load plugin still works. I am very happy with EWWW.io ExactDN. I now have a CDN and it has lowered my First Byte time, Start Render time and Speed Index (on more than just the front page), and all I did was subscribe and tick a checkbox.

It is nice that EWWW.io has not charged for data delivered from the CDN network on top of the monthly subscription at this stage, though they have left that option open in the terms and conditions. It is important that a service can sustain itself, so this is a good sign. FYI: my favourite Agile toolkit (Atlaz.io), which I reviewed here, closed shop today, so it is wise for as-a-service shops to plan ahead. I know how hard it can be to spin up servers and allocate time to keep them running. Good luck, EWWW.io!

Also, I now have a clear set of steps (below) to resolve other non-optimized assets that are outside of the CDN.

Full Report

Thanks to EWWW.io and Web Page Test for helping make my site faster.

I doubt WordPress and my “down under” server location can get me to under 1000ms, but I will try.

Read on here to see how Cloudflare can increase your site’s performance.

Setting up a website to use Cloudflare on a VM hosted on Vultr and Namecheap

DNS Improved

Don't forget your site's performance, SEO and security.

Website

Update November 2018

I have a much faster loading website after moving it to a new host, read my guide here.

GTMetrix After YSlow


Revision History

v2.0 November 2018 Update

v1.9 added Cloudflare, SEO, Google Page Speed Insights test and future optimization, page insight speed image, added GTMetrics comments, growth image, gtmetrix.com reports (typos are free), updated change is domain to .com, added EXACTDN_ALL_THE_THINGS, 93 page speed, minify, compare, graph, fixed typos, tidied up the conclusion.
