



I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance.

December 22, 2020 by Simon

I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance. Here is what I did to set up a complete Ubuntu 18.04 system (NGINX, PHP, MySQL, WordPress etc). This is not a paid review (just me documenting my steps over 2 days).

Background (CPanel hosts)

In 1999 I hosted my first domain (www.fearby.com) on a host in Seattle (for $10 USD a month), the host used CPanel and all was good.  After a decade I was using the domain more for online development and the website was now too slow (I think I was on dial-up or ADSL 1 at the time). I moved my domain to an Australian host (for $25 a month).

After 8 years the domain host was sold and performance remained mediocre. After another year the new host was sold again and performance was terrible.

I started receiving Resource Limit Is Reached warnings (basically this was a plot by the new CPanel host to say “Pay us more and this message will go away”).

Page load times were near 30 seconds.

cPanel usage exceeded warning

The straw that broke the camel’s back was their demand of $150/year for a dodgy SSL certificate.

I needed to move to a self-managed server where I was in control.

Buying a Domain Name

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Self Managed Server

I found a good web IDE ( http://www.c9.io/ ) that allowed me to connect to a cloud VM.  C9 allowed me to open many files and terminal windows and reconnect to them later. Don’t get excited, though, as AWS has purchased C9 and it’s not the same.

C9 IDE

I spun up a Digital Ocean server in the closest data centre (Singapore). Here is my setup guide on creating a Digital Ocean VM, connecting to it with C9 and configuring it. I moved my email to G Suite and moved my WordPress to Digital Ocean (other guides here and here).

I was happy since I could now send emails via CLI/code, set up free SSL certs, add a second domain's email to G Suite and secure G Suite. No more usage limit errors either.

Self-managing a server requires more work but it is more rewarding (flexible, faster and cheaper).  Page load times were now near 20 seconds (a 10-second improvement).

Latency Issue

Over 6 months, performance on Digital Ocean (in Singapore) from Australia started to drop (mentioned here).  I tried upgrading the memory but that did not help (latency was king).

Moved the website to Australia

I moved my domain to Vultr in Australia (guide here and here). All was good for a year until traffic started to grow.

Blog Growth

I tried upgrading the memory on Vultr, set up PHP child workers and set up Cloudflare.

GT Metrix scores were about a “B” and Google Page Speed Scores were in the lower 40’s. Page loads were about 14 seconds (5-second improvement).

Tweaking WordPress

I set up an image compression plugin in WordPress then set up a cloud image compression and CDN Plugin from the same vendor.  Page Speed info here.

GT Metrix scores were now occasionally an “A” and Page Speed scores were in the lower 20’s. Page loads were about 3-5 seconds (10-second improvement).

A mixed bag from Vultr (more optimisation and performance improvements were needed).

This screenshot shows poor www.gtmetrix.com scores, poor Google Page Speed Insights scores and the upgrade from 1GB to 2GB of memory on my server.

Google Chrome Developer Console Audit Results on Vultr hosted website were not very good (I stopped checking as nothing helped).

This is a screenshot showing poor site performance (taken in the Google Dev Tools audit feature)

The problem was that the Vultr server (400km away in Sydney) kept going offline (my issue), and everything above (adding more memory, adding 2x CDNs (EWWW and Cloudflare), adding PHP child workers etc) did not seem to help.

Enter UpCloud…

Recently, a friend sent a link to a blog article about a host called “UpCloud” that promised “faster than SSD” performance. That can’t be right: faster than SSD? I was intrigued; I thought nothing was faster than SSD (well, maybe RAM).

I signed up for a trial and ran a disk IO test (read the review here) and I was shocked. It’s fast. Very fast.

Summary: UpCloud was twice as fast (Disk IO and CPU) as Vultr (+ an optional $4/m firewall and $3/m for 1x backup).

This is a screenshot showing Vultr.com servers getting half the read and write disk io performance compared to upcloud.com.

FYI: Labels above are in KB per second. iozone loops through file sizes from 4 KB to 16,384 KB and measures the reads per second. To be honest, the meaning of the numbers doesn’t interest me; I just want to compare apples with apples.

This is an image showing the iozone results breakdown chart (KB per second on the vertical axis, file size on the horizontal axis and transfer size on the third axis)

(image snip from http://www.iozone.org/ which explains the numbers)
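If you want to run a similar comparison yourself, this is roughly the kind of invocation I mean; a minimal sketch assuming the iozone3 package (installed later in this guide) and that you run it from the disk you want to test:

iozone -a | tee iozone-results.txt    # -a runs the full automatic matrix of file/record sizes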

I might have to copy my website to UpCloud and see how fast it is.

Where to Deploy and Pricing

UpCloud Pricing: https://www.upcloud.com/pricing/

UpCloud Pricing

UpCloud does not have a data centre in Australia yet so why choose UpCloud?

Most of my site’s visitors are based in the US, and UpCloud has disk IO twice as fast as Vultr (win-win?), so I could deploy to Chicago.

This image shows most of my visitors are in the US

My site’s traffic is growing and I need to ensure the site is fast enough in the future.

This image shows that most of my site’s visitors are hitting my site on weekdays.

Creating an UpCloud VM

I used a friend’s referral code and signed up to create my first VM.

FYI: use my referral code and get $25 free credit. Signing up only takes 2 minutes.

https://www.upcloud.com/register/?promo=D84793

When you click the link above you will receive $25 to try out servers for 3 days. You can exit this trial by depositing $10 into UpCloud.

Trial Limitations

The trial mode restrictions are as follows:

* Cloud servers can only be accessed using SSH, RDP, HTTP or HTTPS protocols
* Cloud servers are not allowed to send outgoing e-mails or to create outbound SSH/RDP connections
* The internet connection is restricted to 100 Mbps (compared to 500 Mbps for non-trial accounts)
* After your 72-hour free trial, your services will be deleted unless you make a one-time deposit of $10

UpCloud Links

The UpCloud support page is located here: https://www.upcloud.com/support/

  • Quick start: Introduction to UpCloud
  • How to deploy a Cloud Server
  • Deploy a cloud server with UpCloud’s API

More UpCloud links to read:

  • Two-Factor Authentication on UpCloud
  • Floating IPs on UpCloud
  • How to manage your firewall
  • Finalizing deployment

Signing up to UpCloud

Navigate to https://upcloud.com/signup and add your username, password and email address and click signup.

New UpCloud Signup Page

Add your address and payment details and click proceed. You don’t need to pay anything ($1 may be charged and instantly refunded to verify the card).

Add address and payment details

That’s it, check your email.

Signup Done

Look for the UpCloud email and click https://my.upcloud.com/

Check Email

Now log in.

Login to UpCloud

Now I can see a dashboard 🙂

UpCloud Dashboard

I was happy to see 24/7 support is available.

This image shows the www.upcloud.com live chat

I opted in for the new dashboard

UpCloud’s new dashboard

Deploy My First UpCloud Server

This is how I deployed a server.

Note: If you are going to deploy a server consider using my referral code and get $25 credit for free.

Under the “deploy a server” widget I named the server and chose a location (I think I was supposed to use an FQDN, e.g., “fearby.com”). The deployment worked though. I clicked continue, then more options were made available:

  1. Enter a short server description.
  2. Choose a location (Frankfurt, Helsinki, Amsterdam, Singapore, London and Chicago)
  3. Choose the number of CPUs and amount of memory
  4. Specify disk number/names and type (MaxIOPS or HDD).
  5. Choose an Operating System
  6. Select a Timezone
  7. Define SSH Keys for access
  8. Allowed login methods
  9. Choose hardware adapter types
  10. Where to send the login password

Deploy Server

FYI: How to generate a new SSH Key (on OSX or Ubuntu)

ssh-keygen -t rsa

Output

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /temp/example_rsa
Enter passphrase (empty for no passphrase): *********************************
Enter same passphrase again:*********************************
Your identification has been saved in /temp/example_rsa.
Your public key has been saved in /temp/example_rsa.pub.
The key fingerprint is:
SHA256:########################### [email protected]
Outputted public and private key

Were the key files created? (yes)

> /temp# ls /temp/ -al
> drwxr-xr-x 2 root root 4096 Jun 9 15:33 .
> drwxr-xr-x 27 root root 4096 Jun 8 14:25 ..
> -rw------- 1 user user 1766 Jun 9 15:33 example_rsa
> -rw-r--r-- 1 user user 396 Jun 9 15:33 example_rsa.pub

“example_rsa” is the private key and “example_rsa.pub” is the public key.

  • The public key needs to be added to the server to allow access (a manual sketch is shown below).
  • The private key needs to be added to any local SSH program used for remote access.
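A minimal sketch of adding the public key to a server by hand (the UpCloud deploy form can also take the key for you, as in the deploy list above; YOUR-SERVER-IP is a placeholder):

# Option 1: let ssh-copy-id append the key to the server's authorized_keys
ssh-copy-id -i /temp/example_rsa.pub root@YOUR-SERVER-IP

# Option 2: append it manually over an existing SSH session
cat /temp/example_rsa.pub | ssh root@YOUR-SERVER-IP "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"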

Initialisation script (after deployment)

I was pleased to see an initialization script section that calls actions after the server is deployed. I configured the initialisation script to pull down a few GB of backups from my Vultr website in Sydney (files now removed).

This was my Initialisation script:

#!/bin/bash
echo "Downloading the Vultr websites backups"
mkdir /backup
cd /backup
# -O (capital) saves to the named file; a lowercase -o would only redirect wget's log
wget -O www-mysql-backup.sql https://fearby.com/.../www-mysql-backup.sql
wget -O www-blog-backup.zip https://fearby.com/.../www-blog-backup.zip

Confirm and Deploy

I clicked “Confirm and deploy”, but an alert said trial mode can only deploy servers with up to 1024MB of memory.

This image shows I can’t deploy servers with 2GB of memory in trial mode

Exiting UpCloud Trial Mode

I opened the dashboard and clicked My Account then Billing; I could see the $25 referral credit, but I guess it can’t be used in trial mode.

I exited trial mode by depositing $10 (USD).

View Billing Details

Make a manual 1-time deposit of $10 to exit trial mode.

Deposit $10 to exit the trial

FYI: Server prices are listed below (or view prices here).

UpCloud Pricing

Now I can go back and deploy the server with the same settings as above (1x CPU, 2GB Memory, Ubuntu 18.04, MaxIOPS Storage etc).

Deployment takes a few minutes and, depending on the login method you specified, a password may be emailed to you.

UpCloud Server Deployed

The server is now deployed; now I can connect to it with my SSH program (vSSH).  Simply add the server’s IP, username, password and the SSH private key (generated above) to your ssh program of choice.

fyi: The public key contents start with “ssh-rsa”.
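From a plain terminal, the equivalent is a one-liner (a sketch; the IP is a placeholder and -i points at the private key generated earlier):

ssh -i /temp/example_rsa root@YOUR-SERVER-IP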

This image shows me connecting to my sever via ssh

I noticed that the initialisation script downloaded my 2+GB of files already. Nice.

UpCloud Billing Breakdown

I can now see on the UpCloud billing page in my dashboard that credit is deducted daily (68c); at this rate, I have about 49 days of credit left.

Billing Breakdown

I can manually deposit funds or set up automatic payments at any time 🙂

UpCloud Backup Options

You do not need to set up backups, but they are a good idea in case you need to roll back (if things stuff up). Backups are an additional charge.

I have set up automatic daily backups with auto-deletion after 2 days.

To view backup schedules, click on your deployed server then click Backup.

List of UpCloud Backups

Note: Backups are charged at $0.056 for every GB stored – so $5.60 for every 100GB per month (half that for 50GB etc)

You can take manual backups at any time (and only be charged for the hour)

UpCloud Firewall Options

I set up a firewall at UpCloud to only allow the minimum number of ports (UpCloud DNS, HTTP, HTTPS and my IP to port 22).  The firewall feature is charged at $0.0056 an hour ($4.03 a month).

I love the ability to set firewall rules on incoming, destination and outgoing ports.

To view your firewall, click on your deployed server then click Firewall.

UpCloud firewall

Update: I modified my firewall to allow inbound ICMP (IPv4/IPv6) and UDP (IPv4/IPv6) packets.

(Note: Old firewall screenshot)

Firewall Rules Allow port 80, 443 and DNS

Because my internet provider has a dynamic IP, I set up a VPN with a static IP and whitelisted it for backdoor access.

Local Ubuntu ufw Firewall

I duplicated the rules in my local ufw (2nd-level) firewall (and blocked mail).

sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 80                         ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] 25                         DENY OUT    Anywhere                   (out)
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 22                         ALLOW IN    REMOVED (MY WHITELISTED IP)
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 25 (v6)                    DENY OUT    Anywhere (v6)              (out)
[10] 53                         ALLOW IN    2a04:3540:53::1
[11] 53                         ALLOW IN    2a04:3544:53::1

UpCloud Download Speeds

I pulled down a 1.8GB Ubuntu 18.04 Desktop ISO 3 times from gigenet.com; each download took about 32 seconds (~57MB/sec). Nice.

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:04-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso'

ubuntu-18.04-desktop-amd64.iso 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:02:37 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:46-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.1'

ubuntu-18.04-desktop-amd64.iso.1 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:19 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.1' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:03:23-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.2'

ubuntu-18.04-desktop-amd64.iso.2 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:56 (56.8 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.2' saved [1921843200/1921843200]

Install Common Ubuntu Packages

I installed common Ubuntu packages.

apt-get install zip htop ifstat iftop bmon tcptrack ethstatus speedometer iozone3 bonnie++ sysbench siege tree unzip jq ncdu pydf ntp rcconf ufw iperf nmap

Timezone

I checked the server’s time (I thought this was auto-set when I deployed).

$hwclock --show
2018-06-06 23:52:53.639378+0000

I reset the time to Australia/Sydney.

dpkg-reconfigure tzdata
Current default time zone: 'Australia/Sydney'
Local time is now: Thu Jun 7 06:53:20 AEST 2018.
Universal Time is now: Wed Jun 6 20:53:20 UTC 2018.

Now the timezone is set 🙂

Shell History

I increased the shell history.

HISTSIZE=10000
HISTCONTROL=ignoredups
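Those two lines belong in ~/.bashrc so they survive new shells; a quick sketch (assuming the default bash shell):

echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
source ~/.bashrc    # reload the settings into the current shell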

SSH Login

I created a ~/.ssh/authorized_keys file and added my SSH public key to allow password-less logins.

mkdir ~/.ssh
sudo nano ~/.ssh/authorized_keys

I added my public SSH key, then exited the SSH session and logged back in. I can now log in without a password.
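One gotcha: sshd ignores authorized_keys if the permissions are too open, so it is worth locking the folder and file down:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys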

Install NGINX

apt-get install nginx

nginx/1.14.0 is now installed.

A quick GT Metrix test.

This image shows awesome static NGINX performance ratings of 99%

Install MySQL

Run these commands to install and secure MySQL.

apt install mysql-server
mysql_secure_installation

Securing the MySQL server deployment.
> Would you like to setup VALIDATE PASSWORD plugin?: n
> New password: **********************************************
> Re-enter new password: **********************************************
> Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
> Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
> Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
> Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
> Success.

I disabled the validate password plugin because I hate it.

MySQL Ver 14.14 Distrib 5.7.22 is now installed.

Set MySQL root login password type

Set MySQL root user to authenticate via “mysql_native_password”. Run the “mysql” command.

mysql
SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             |                                           | auth_socket           | localhost |
| mysql.session    | hidden                                    | mysql_native_password | localhost |
| mysql.sys        | hidden                                    | mysql_native_password | localhost |
| debian-sys-maint | hidden                                    | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now let’s set the root password authentication method to “mysql_native_password”

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '*****************************************';
Query OK, 0 rows affected (0.00 sec)

Check authentication method.

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             | ######################################### | mysql_native_password | localhost |
| mysql.session    | hidden                                    | mysql_native_password | localhost |
| mysql.sys        | hidden                                    | mysql_native_password | localhost |
| debian-sys-maint | hidden                                    | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now we need to flush privileges.

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Done.

Install PHP

Install PHP 7.2

apt-get install software-properties-common
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install -y php7.2
php -v

PHP 7.2.5 (Zend Engine v3.2.0 with Zend OPcache v7.2.5-1) is now installed. Do update PHP frequently.

I made the following changes in /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Install PHP Modules

sudo apt-get install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml

Install PHP FPM

apt-get install php7.2-fpm

Configure PHP FPM config.

Edit /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Reload PHP FPM and check its status.

sudo service php7.2-fpm restart
sudo service php7.2-fpm status

Configuring NGINX

If you are not comfortable editing NGINX config files read here, here and here.

I made a new “www root” folder, set permissions and created a default html file.

mkdir /www-root
chown -R www-data:www-data /www-root
echo "Hello World" >> /www-root/index.html

I edited the “root” key in the “/etc/nginx/sites-enabled/default” file and set the root to the new location (e.g., “/www-root”); see the sketch below.
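For reference, the relevant part of /etc/nginx/sites-enabled/default ended up looking something like this sketch:

server {
        ...
        root /www-root;
        index index.html index.php index.htm;
        ...
}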

I added these performance tweaks to /etc/nginx/nginx.conf

> worker_cpu_affinity auto;
> worker_rlimit_nofile 100000;

I added the following lines to the “http {” section in /etc/nginx/nginx.conf

client_max_body_size 10M;

gzip on;
gzip_disable "msie6";
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_http_version 1.1;
#text/html is always compressed by gzip module
gzip_types
text/plain
text/css
application/json
application/javascript
text/xml
application/xml
application/xml+rss
text/javascript
application/atom+xml
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
font/opentype
image/bmp
image/x-icon
text/cache-manifest
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;

Check NGINX Status

service nginx status
* nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 21:16:28 AEST; 30min ago
Docs: man:nginx(8)
Main PID: # (nginx)
Tasks: 2 (limit: 2322)
CGroup: /system.slice/nginx.service
|- # nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
`- # nginx: worker process

Install OpenSSL that supports TLS 1.3

This is a work in progress. The steps worked just fine for me on Ubuntu 16.04 but not on Ubuntu 18.04.

Installing Adminer MySQL GUI

I will use the PHP-based Adminer MySQL GUI to export and import my blog from one server to another. All I needed to do was install it on both servers (a simple one-file download).

cd /utils
wget -O adminer.php https://github.com/vrana/adminer/releases/download/v4.6.2/adminer-4.6.2-mysql-en.php

Use Adminer to Export My Blog (on Vultr)

On the original server, open Adminer (over HTTP) and:

  1. Login with the MySQL root account
  2. Open your database
  3. Choose “Save” as the output
  4. Click on Export

This image shows the export of the WordPress database in Adminer

Save the “.sql” file.

Use Adminer to Import My Blog (on UpCloud)

FYI: Depending on the size of your database backup you may need to temporarily increase your upload and post sizes limits in PHP and NGINX before you can import your database.

Edit /etc/php/7.2/fpm/php.ini
> upload_max_filesize = 100M
> post_max_size = 100M

And edit /etc/nginx/nginx.conf
> client_max_body_size 100M;

Don’t forget to reload NGINX config and restart NGINX and PHP. Take note of the maximum allowed file size in the screenshot below. I temporarily increased my upload limits to 100MB in order to restore my 87MB blog.

Now I could open Adminer on my UpCloud server.

  1. Create a new database
  2. Click on the database and click Import
  3. Choose the SQL file
  4. Click Execute to import it

Import MySQL backup with Adminer

Don’t forget to create a user and assign permissions as required (check your wp-config.php file); a sketch of the SQL follows below.
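This is a minimal sketch of creating that user via the mysql CLI; the database name, user name and password below are placeholders that must match DB_NAME, DB_USER and DB_PASSWORD in wp-config.php:

mysql -u root -p <<'SQL'
-- placeholders: wordpress_db / wpuser / a-strong-password
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON wordpress_db.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
SQL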

Import MySQL Database

Tip: Don’t forget to lower the maximum upload file size and max post size after you import your database.

Cloudflare DNS

I use Cloudflare to manage DNS, so I need to tell it about my new server.

You can get your server’s IP details from the UpCloud dashboard.

Find IP

At Cloudflare, update your DNS records to point to the server’s new IPv4 (“A” record) and IPv6 (“AAAA” record) addresses.

Cloudflare DNS
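A quick way to confirm the new records from the command line is dig (from the dnsutils package). Note that with Cloudflare proxying enabled these return Cloudflare edge IPs, not the UpCloud origin IP:

dig +short A fearby.com
dig +short AAAA fearby.com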

Domain Error

I waited an hour and my website was suddenly unavailable.  At first, I thought this was Cloudflare forcing the redirection of my domain to HTTPS (which was not yet set up on the new server).

DNS Not Replicated Yet

I chatted with UpCloud support on their webpage and they kindly helped me diagnose the common issues (DNS values, DNS replication, Cloudflare settings), and the error was pinpointed to my NGINX installation. All the NGINX config settings looked OK from what we could see. I uninstalled NGINX and reinstalled it (and that fixed it). Thanks UpCloud Support 🙂

Reinstalled NGINX

sudo apt-get purge nginx nginx-common

I reinstalled NGINX and reconfigured /etc/nginx/nginx.conf (I downloaded my SSL cert from my old server just in case).

Here is my /etc/nginx/nginx.conf file.

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log /var/log/nginx/www-nginxcriterror.log crit;

events {
        worker_connections 768;
        multi_accept on;
}

http {

        client_max_body_size 10M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        server_tokens off;

        server_names_hash_bucket_size 64;
        server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/www-access.log;
        error_log /var/log/nginx/www-error.log;

        gzip on;

        gzip_vary on;
        gzip_disable "msie6";
        gzip_min_length 256;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

Here is my /etc/nginx/sites-available/default file (fyi, I have not fully re-setup TLS 1.3 yet so I commented out the settings)

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
server {
        root /www-root;

        # Listen Ports
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name www.fearby.com fearby.com localhost;

        # HTTPS Cert
        ssl_certificate /etc/nginx/ssl-cert-path/fearby.crt;
        ssl_certificate_key /etc/nginx/ssl-cert-path/fearby.key;
        ssl_dhparam /etc/nginx/ssl-cert-path/dhparams4096.pem;

        # HTTPS Ciphers
        
        # TLS 1.2
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

        # TLS 1.3			#todo
        # ssl_ciphers 
        # ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DES-CBC3-SHA;
        # ssl_ecdh_curve secp384r1;

        # Force HTTPS
        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        # HTTPS Settings
        server_tokens off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 30m;
        ssl_session_tickets off;
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
	#ssl_stapling on; 						# Requires nginx >= 1.3.7

        # Cloudflare DNS
        resolver 1.1.1.1 1.0.0.1 valid=60s;
        resolver_timeout 1m;

        # PHP Memory 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

	# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            try_files $uri =404;
            # include snippets/fastcgi-php.conf;

            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php7.2-fpm.sock;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # fastcgi_pass 127.0.0.1:9000;
	    }

        location / {
            # try_files $uri $uri/ =404;
            try_files $uri $uri/ /index.php?q=$uri&$args;
            index index.php index.html index.htm;
            proxy_set_header Proxy "";
        }

        # Deny Rules
        location ~ /\.ht {
                deny all;
        }
        location ~ ^/\.user\.ini {
            deny all;
        }
        location ~ \.ini$ {
            return 403;
        }

        # Headers
        location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }

}

SSL Labs SSL Certificate Check

All good thanks to the config above.

SSL Labs

Install WP-CLI

I don’t like setting up FTP to auto-update WordPress plugins. I use the WP-CLI tool to manage WordPress installations by the command line. Read my blog here on using WP-CLI.

Download WP-CLI

mkdir /utils
cd /utils
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Move WP-CLI to the bin folder as “wp”

chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

Test wp

wp --info
OS: Linux 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:15:17 UTC 2018 x86_64
Shell: /bin/bash
PHP binary: /usr/bin/php7.2
PHP version: 7.2.5-1+ubuntu18.04.1+deb.sury.org+1
php.ini used: /etc/php/7.2/cli/php.ini
WP-CLI root dir: phar://wp-cli.phar
WP-CLI vendor dir: phar://wp-cli.phar/vendor
WP_CLI phar path: /www-root
WP-CLI packages dir:
WP-CLI global config:
WP-CLI project config:
WP-CLI version: 1.5.1

Update WordPress Plugins

Now I can run “wp plugin update” to update all WordPress plugins.

wp plugin update
Enabling Maintenance mode...
Downloading update from https://downloads.wordpress.org/plugin/wordfence.7.1.7.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wp-meta-seo.3.7.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wordpress-seo.7.6.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Disabling Maintenance mode...
Success: Updated 3 of 3 plugins.
+---------------+-------------+-------------+---------+
| name          | old_version | new_version | status  |
+---------------+-------------+-------------+---------+
| wordfence     | 7.1.6       | 7.1.7       | Updated |
| wp-meta-seo   | 3.7.0       | 3.7.1       | Updated |
| wordpress-seo | 7.5.3       | 7.6.1       | Updated |
+---------------+-------------+-------------+---------+

Update WordPress Core

WordPress core files can be updated with “wp core update”.

wp core update
Success: WordPress is up to date.

Troubleshooting: Use the flag “--allow-root” if wp needs higher access (an unsafe action though).

Install PHP Child Workers

I edited the following file to set up PHP child workers: /etc/php/7.2/fpm/pool.d/www.conf

Changes

> pm = dynamic
> pm.max_children = 40
> pm.start_servers = 15
> pm.min_spare_servers = 5
> pm.max_spare_servers = 15
> pm.process_idle_timeout = 30s
> pm.max_requests = 500
> php_admin_value[error_log] = /var/log/www-fpm-php.www.log
> php_admin_value[memory_limit] = 512M

Restart PHP

sudo service php7.2-fpm restart

Test NGINX config, reload NGINX config and restart NGINX

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Output (the child workers are ready)

Check PHP Child Worker Status

sudo service php7.2-fpm status
* php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 19:32:47 AEST; 20s ago
Docs: man:php-fpm7.2(8)
Main PID: # (php-fpm7.2)
Status: "Processes active: 0, idle: 15, Requests: 2, slow: 0, Traffic: 0.1req/sec"
Tasks: 16 (limit: 2322)
CGroup: /system.slice/php7.2-fpm.service
|- # php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
`- # php-fpm: pool www

Memory Tweak (set at your own risk)

sudo nano /etc/sysctl.conf

vm.swappiness = 1

Setting swappiness to 1 all but disables swapping and tells the operating system to use RAM aggressively; a value of 10 is safer. Only set this if you have enough memory available (and free).
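To apply the change without a reboot (and confirm it took), sysctl can reload the file:

sudo sysctl -p
cat /proc/sys/vm/swappiness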

Possible swappiness settings:

> vm.swappiness = 0 Swap is disabled. In earlier kernel versions this meant the kernel would swap only to avoid an out-of-memory condition when free memory fell below the vm.min_free_kbytes limit; in later versions this is achieved by setting the value to 1.[2]
> vm.swappiness = 1 Kernel version 3.5 and over, as well as Red Hat kernel version 2.6.32-303 and over: minimum amount of swapping without disabling it entirely.
> vm.swappiness = 10 This value is sometimes recommended to improve performance when sufficient memory exists in a system.[3]
> vm.swappiness = 60 The default value.
> vm.swappiness = 100 The kernel will swap aggressively.

The “htop” tool is a handy memory-monitoring alternative to “top”.

Also, you can use the good old “watch” command to show near-live memory usage (it auto-refreshes every 2 seconds).

watch -n 2 free -m

Script to auto-clear the memory/cache

As a habit, I set up a cronjob that checks whether free memory has fallen below 100MB and, if so, automatically clears the cache (freeing memory).

Script Contents: clearcache.sh

#!/bin/bash

# Script help inspired by https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
ram_use=$(free -m)

# Grab the "Mem:" line, squeeze the whitespace and split it into fields
IFS=$'\n' read -rd '' -a ram_use_arr <<< "$ram_use"
ram_use="${ram_use_arr[1]}"
ram_use=$(echo "$ram_use" | tr -s " ")
IFS=' ' read -ra ram_use_arr <<< "$ram_use"
ram_total="${ram_use_arr[1]}"
ram_used="${ram_use_arr[2]}"
ram_free="${ram_use_arr[3]}"
d=`date '+%Y-%m-%d %H:%M:%S'`

if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then
    echo "Sorry ram_free is not an integer"
else
    if [ "$ram_free" -lt "100" ]; then
        echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..."
        sync; echo 1 > /proc/sys/vm/drop_caches
        sync; echo 2 > /proc/sys/vm/drop_caches
        #sync; echo 3 > /proc/sys/vm/drop_caches #Not advised in production
        # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
        exit 1
    elif [ "$ram_free" -lt "256" ]; then
        echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
        exit 1
    elif [ "$ram_free" -lt "512" ]; then
        echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
        exit 1
    else
        # Plenty of free RAM
        echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
        exit 1
    fi
fi
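A one-off setup sketch so the cron entry below can find and run the script (the /scripts path matches that entry):

sudo mkdir -p /scripts
sudo nano /scripts/clearcache.sh     # paste the script above
sudo chmod +x /scripts/clearcache.sh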

I set the cronjob to run every 15 minutes by adding this entry (the six-field format with a user column suits /etc/crontab).

SHELL=/bin/bash
*/15  *  *  *  *  root /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log

Sample log output

2018-06-10 01:13:22 RAM OK (Total: 1993 MB, Used: 981 MB, Free: 387 MB)
2018-06-10 01:15:01 RAM OK (Total: 1993 MB, Used: 974 MB, Free: 394 MB)
2018-06-10 01:20:01 RAM OK (Total: 1993 MB, Used: 955 MB, Free: 412 MB)
2018-06-10 01:25:01 RAM OK (Total: 1993 MB, Used: 1002 MB, Free: 363 MB)
2018-06-10 01:30:01 RAM OK (Total: 1993 MB, Used: 970 MB, Free: 394 MB)
2018-06-10 01:35:01 RAM OK (Total: 1993 MB, Used: 963 MB, Free: 400 MB)
2018-06-10 01:40:01 RAM OK (Total: 1993 MB, Used: 976 MB, Free: 387 MB)
2018-06-10 01:45:01 RAM OK (Total: 1993 MB, Used: 985 MB, Free: 377 MB)
2018-06-10 01:50:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 379 MB)
2018-06-10 01:55:01 RAM OK (Total: 1993 MB, Used: 979 MB, Free: 382 MB)
2018-06-10 02:00:01 RAM OK (Total: 1993 MB, Used: 980 MB, Free: 380 MB)
2018-06-10 02:05:01 RAM OK (Total: 1993 MB, Used: 971 MB, Free: 389 MB)
2018-06-10 02:10:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 376 MB)
2018-06-10 02:15:01 RAM OK (Total: 1993 MB, Used: 967 MB, Free: 392 MB)

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring.

Swap File

FYI: There is a handy guide on viewing swap file usage here. I’m not using swap files, so it is only an aside.

After the system rebooted I checked if the swappiness setting was active.

sudo cat /proc/sys/vm/swappiness
1

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system

sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

Take note of your disk name (e.g. vda1).

I used tune2fs to enable writing data to the disk before it is written to the journal. tune2fs is a great tool for setting file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may lose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.

tune2fs -o journal_data_writeback /dev/vda1
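To confirm the option was written to the superblock, tune2fs -l lists it under “Default mount options”:

sudo tune2fs -l /dev/vda1 | grep "Default mount options"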

I edited my fstab to append the “writeback,noatime,nodiratime” flags for my volume after a reboot.

Edit FS Tab:

sudo nano /etc/fstab

I added “writeback,noatime,nodiratime” flags to my disk options.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options> <dump>  <pass>
# / was on /dev/vda1 during installation
#                <device>                 <dir>           <fs>    <options>                                             <dump>  <fsck>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /               ext4    errors=remount-ro,data=writeback,noatime,nodiratime   0       1

Updating Ubuntu Packages

Show updatable packages.

apt-get -s dist-upgrade | grep "^Inst"

Update Packages.

sudo apt-get update && sudo apt-get upgrade

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades

sudo apt-get install unattended-upgrades

Enable Unattended Upgrades.

sudo dpkg-reconfigure --priority=low unattended-upgrades

Now I configure what packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated, you may want to manually update these (and monitor updates).

I prefer not to auto-update critical system apps (I will do this myself).

Unattended-Upgrade::Package-Blacklist {
"nginx";
"nginx-common";
"nginx-core";
"php7.2";
"php7.2-fpm";
"mysql-server";
"mysql-server-5.7";
"mysql-server-core-5.7";
"libssl1.0.0";
"libssl1.1";
};

FYI: You can find installed packages by running this command:

apt list --installed

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end of each line (the number is how many days to wait before updating).

> APT::Periodic::Update-Package-Lists "1";
> APT::Periodic::Download-Upgradeable-Packages "1";
> APT::Periodic::AutocleanInterval "7";
> APT::Periodic::Unattended-Upgrade "1";

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades
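A quick way to see what it actually did:

tail -n 50 /var/log/unattended-upgrades/unattended-upgrades.log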

Update packages now.

unattended-upgrade -d

Almost done.

I rebooted.

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and getting sub-2-second scores consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; this was always bad before.

Update: I found out the longer you set cache delays in Cloudflare the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google is pushing for faster sites.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected to get such a high score. I know Google likes (and is pushing) sub-1-second load times.

My site is loading so well that it is time I restored some old features that were too slow on other servers:

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat).

Benchmarks are still good

My WordPress site is not really that small either.

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows creating democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the [display-posts] shortcode.
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use the EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use the Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress emojis, removes the use of Google Fonts, and allows you to define preconnect domains, async JavaScript files etc.
  • I use BJ Lazy Load to prevent all images in a post from loading up front (images load only as the user scrolls down the page).
  • The GTmetrix for WordPress and Cloudflare plugins are for information only.
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago

Looks good to me. Read IO is a little lower than UpCloud’s Singapore data centre, but it’s still faster than Vultr.  I can’t wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Answers to Questions to support

Q1) What’s the difference between backups and snapshots (a Twitter user said Snapshots were a thing)

A1) Backups and snapshots are the same things with our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06 / GB of the disk being captured. But the whole disk is captured, not just what was used. So for a 50GB drive, we charge $0.06 * 50 = $3/month, even if only 1GB was used.

  • Support confirmed that each backup is charged (so 5 manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backup charges.
  • I guess a 25GB server will be $1.50 a month

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.

Q4) What happens if my balance hits $0?

A4) You will get notification of low account balance 2 weeks in advance based on your current daily spend. When your balance reaches zero, your servers will be shut down. But they will still be charged for. You can automatically top-up if you want to assign a payment type from your Control Panel. You deposit into your balance when you want. We use a prepaid model of payment, so you need to top up before using, not billing you after usage. We give you lots of chances to top-up.

Support Tips

  • One thing to note: when deleting server (CPU, RAM) instances, you get the option to delete the storages separately via a pop-up window. Choose to delete permanently to delete the disk and save credit. Any disk storage lying around, even unattached to servers, will be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

Check out my new guide on Nixstats for awesome monitoring.

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user-defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

Disable dashicons.min.css (for unauthenticated WordPress users).

Find functions.php in the www root

sudo find . -print | grep functions.php

Edit functions.php

sudo nano ./wp-includes/functions.php

Add the following

// Remove dashicons in frontend for unauthenticated users
add_action( 'wp_enqueue_scripts', 'bs_dequeue_dashicons' );
function bs_dequeue_dashicons() {
    if ( ! is_user_logged_in() ) {
        wp_deregister_style( 'dashicons' );
    }
}

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my listen directives:

server {
        root /www;

        ...
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;
        ...

I tested an HTTP/2 push page by defining this in /etc/nginx/sites-available/default:

location = /http2/push_demo.html {
        http2_push /http2/pushed.css;
        http2_push /http2/pushedimage1.jpg;
        http2_push /http2/pushedimage2.jpg;
        http2_push /http2/pushedimage3.jpg;
}
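To verify pushes from the command line, the nghttp client (from the nghttp2-client package on Ubuntu) is handy; a sketch assuming it is installed (pushed resources are flagged in the statistics output):

sudo apt-get install nghttp2-client
nghttp -ans https://fearby.com/http2/push_demo.html    # -a fetch assets, -n discard, -s show stats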

Once I tested that push was working (demo here), I then defined two files from my server to push:

location / {
        ...
        http2_push /wp-includes/js/jquery/jquery.js;
        http2_push /wp-content/themes/news-pro/images/favicon.ico;
        ...
}

I used the WordPress plugin Autoptimize to remove Google Fonts usage (this removed a number of files being loaded with my page).

I used the WP-Optimize plugin to remove comments and disable pingbacks and trackbacks.

WordPress wp-config.php tweaks

# Memory
define('WP_MEMORY_LIMIT','1024M');
define('WP_MAX_MEMORY_LIMIT','1024M');
set_time_limit (60);

# Security
define( 'FORCE_SSL_ADMIN', true);

# Disable Updates
define( 'WP_AUTO_UPDATE_CORE', false );
define( 'AUTOMATIC_UPDATER_DISABLED', true );

# ewww.io
define( 'WP_AUTO_UPDATE_CORE', false );

Add 2FA Authentication to server logins.

I recently checked out Yubico YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress too.

Tweaks Todo

  • Compress placeholder BJ Lazy Load Image (plugin is broken)
  • Solve 2x Google Analytics tracker redirects (done, switched to Matomo)

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that handles image optimisation and pre-Cloudflare/first-hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

Results

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.


Revision History

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed a few typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

n' read -rd '' -a ram_use_arr <<< "$ram_use" ram_use="${ram_use_arr[1]}" ram_use=$(echo "$ram_use" | tr -s " ") IFS=' ' read -ra ram_use_arr <<< "$ram_use" ram_total="${ram_use_arr[1]}" ram_used="${ram_use_arr[2]}" ram_free="${ram_use_arr[3]}" d=`date '+%Y-%m-%d %H:%M:%S'` if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then echo "Sorry ram_free is not an integer" else if [ "$ram_free" -lt "100" ]; then echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..." sync; echo 1 > /proc/sys/vm/drop_caches sync; echo 2 > /proc/sys/vm/drop_caches #sync; echo 3 > /proc/sys/vm/drop_caches #Not advised in production # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/ exit 1 else if [ "$ram_free" -lt "256" ]; then echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 else if [ "$ram_free" -lt "512" ]; then echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 else echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 fi fi fi fi

I set the cronjob to run every 15 minutes; I added this to my crontab.
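
Something like this (assuming the script above is saved as /scripts/clearcache.sh; the script name is my guess, the log path matches the one checked below):

*/15 * * * * /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log 2>&1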

 

Sample log output

 

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: here is a handy guide on viewing swap file usage. I’m not using swap files, so this is only an aside.

After the system rebooted I checked if the swappiness setting was active.
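
You can check the current value like this:

cat /proc/sys/vm/swappiness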

 

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system
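
Something like lsblk will show both:

lsblk -f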

 

Take note of your disk name (e.g. vda1).

I used tune2fs to enable writing data to the disk before it is written to the journal. tune2fs is a great tool for setting ext file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may lose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.
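
Presumably along these lines (using the vda1 disk noted above; run at your own risk):

sudo tune2fs -o journal_data_writeback /dev/vda1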

 

I edited my fstab to append the “writeback,noatime,nodiratime” flags for my volume so they persist after a reboot.

Edit FS Tab:

 

I added “writeback,noatime,nodiratime” flags to my disk options.
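
e.g. a line along these lines (a sketch, assuming /dev/vda1 is the root volume; note the journaling flag is set via the data=writeback mount option):

/dev/vda1 / ext4 errors=remount-ro,noatime,nodiratime,data=writeback 0 1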

 

Updating Ubuntu Packages

Show updatable packages.
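
e.g.

sudo apt list --upgradable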

 

Update Packages.
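
e.g.

sudo apt-get update
sudo apt-get upgrade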

 

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades
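
e.g.

sudo apt-get install unattended-upgrades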

 

Enable Unattended Upgrades.
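
e.g.

sudo dpkg-reconfigure --priority=low unattended-upgrades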

 

Now I configure which packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated; you may want to update these manually (and monitor the updates).

I prefer not to auto-update critical system apps (I will do this myself).
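
e.g. (the package names below are just examples of critical apps you might prefer to update manually):

Unattended-Upgrade::Package-Blacklist {
    "mysql-server";
    "nginx";
};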

 

FYI: You can find installed packages by running this command:
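
e.g.

apt list --installed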

 

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end of each line (the number is how many days to wait between runs).

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.
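
e.g. (the -d flag shows debug output of what is being upgraded):

sudo unattended-upgrades -d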

 

Almost done.

I Rebooted

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GTmetrix and getting a sub-2-second score consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; this was always bad before.

Update: I found out that the longer you set cache expiry times in Cloudflare, the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google favours fast sites.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected such a high score. I know Google likes (and is pushing for) sub-1-second load times.

My site is loading so well that it is time I restored some old features that were too slow on other servers:

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat).

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows you to create democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the [display-posts] shortcode.
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use the EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use the Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress emojis, removes the use of Google Fonts, allows you to define preconfigured domains, asyncs JavaScript files, etc.
  • I use BJ Lazy Load to prevent all images in a post from loading up front (images load only as the user scrolls down the page).
  • The GTmetrix for WordPress and Cloudflare plugins are for information/management only.
  • I use WP-Optimize to ensure my database is healthy and to disable comments, trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago

Looks good to me. Read IO is a little lower than UpCloud’s Singapore data centre, but it is still faster than Vultr. I can’t wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Questions to Support (and the Answers)

Q1) What’s the difference between backups and snapshots? (A Twitter user said snapshots were a thing.)

A1) Backups and snapshots are the same thing within our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06/GB for the disk being captured, but we capture the whole disk, not just what was used. So for a 50GB drive we charge $0.06 * 50 = $3/month, even if only 1GB was used.

  • Support confirmed that each backup is charged (so 5 manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backups.
  • I guess backups for a 25GB server will be $1.50 a month.

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.

Q4) What happens if my balance hits $0?

A4) You will get a notification of a low account balance 2 weeks in advance, based on your current daily spend. When your balance reaches zero, your servers will be shut down, but they will still be charged for. You can automatically top up if you assign a payment type in your Control Panel, and you can deposit into your balance whenever you want. We use a prepay model, so you need to top up before usage rather than being billed afterwards. We give you lots of chances to top up.

Support Tips

  • One thing to note: when deleting server (CPU, RAM) instances, you get the option to delete the storage separately via a pop-up window. Choose to delete it permanently to remove the disk and save credit; any disk storage lying around, even unattached to a server, will be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

Check out my new guide on Nixstats for awesome monitoring

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my listening servers and tested an HTTP/2 push page by defining this in /etc/nginx/sites-available/default.
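
A minimal sketch of the relevant config (the pushed file names are placeholders, not my actual assets; http2_push needs nginx 1.13.9+):

server {
    listen 443 ssl http2;   # add http2 to the listen directive
    # ... existing SSL cert and site config ...

    location = /http2push/ {
        # Push these assets before the browser asks for them
        http2_push /css/style.css;
        http2_push /js/script.js;
    }
}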

Once I tested that push was working (demo here), I defined two files to push from my server.

2FA Authentication at login

I recently checked out YubiCo YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress as well.

Performance

I used the WordPress plugin Autoptimize to remove Google font usage (this removed a number of files being loaded when my page loads).

I used the WordPress plugin WP-Optimize to remove comments and disable pingbacks and trackbacks.

Results

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that handles image optimisation and pre-Cloudflare/first-hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

I hope this guide helps someone.

Free Credit

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update. I have stopped using PuTTY and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than PuTTY. Read my review post of MobaXterm here.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v2.2 Converting to Blocks

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed a few typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

Filed Under: CDN, Cloud, Cloudflare, Cost, CPanel, Digital Ocean, DNS, Domain, ExactDN, Firewall, Hosting, HTTPS, MySQL, MySQLGUI, NGINX, Performance, PHP, php72, Scalability, TLS, Ubuntu, UpCloud, Vultr, Wordpress Tagged With: draft, GTetrix, host, IOPS, Load Time, maxIOPS, MySQL, nginx, Page Speed Insights, Performance, php, SSD, ubuntu, UpCloud, vm

Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 3 of 4

June 5, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc.) on Ubuntu, comparing Vultr, Digital Ocean and UpCloud? Part 3 of 4.

Read Part 1, Part 2, Part 3 or Part 4

I used these commands to generate bonnie++ reports from the data in Part 2:

echo "<h1>Bonnie Results</h1>" > /www-data/bonnie.html
echo "<h2>Vultr (Sydney)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528177870,4G,,656,99,308954,68,113706,33,1200,92,188671,30,10237,251,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26067us,119ms,179ms,29139us,26069us,16118us,1463us,703us,880us,263us,119us,593us" | bon_csv2html >> /www-data/bonnie.html
echo "<h2>Digital Ocean (London)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528186398,4G,,699,99,778636,74,610414,60,1556,99,1405337,59,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,17678us,10099us,17014us,7027us,3067us,2366us,1243us,376us,611us,108us,59us,181us" | bon_csv2html >> /www-data/bonnie.html
echo "<h2>UpCloud (Singapore)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528226703,4G,,1014,99,407179,24,366622,32,2137,99,451886,17,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,11297us,54232us,16443us,4949us,44883us,1595us,264us,340us,561us,138us,66us,327us" | bon_csv2html >> /www-data/bonnie.html

Image of results here

Bonnie Results

Network Performance

IMHO network latency has the biggest impact on server performance. Read my old post on scalability on a budget here. I am in Australia, and having a server in Singapore was too far away; the latency was terrible.

Here is a non-scientific example of pinging a Vultr, Digital Ocean and UpCloud server in three different locations (and Google).

Ping Test

Test Ping Results

  • Vultr 132ms Ping Average (Sydney)
  • Digital Ocean 322ms Ping Average (London)
  • UpCloud 180ms Ping Average (Singapore)

Latency matters, run a https://www.webpagetest.org/ scan over your site to see why.

Adding HTTPS added almost 0.7 seconds to communications in the past on Digital Ocean (a few thousand kilometres away). The longer the latency, the longer HTTPS handshakes take.

SSL

Deploying a server to Singapore (in my experience) is bad if your visitors are in Australia, but deploying to other regions may be cheaper. It’s a trade-off.

Server Location

Deploying servers as close as you can to your customers is the best tip for performance.

Deploy servers close to your customers

Also, consider setting up Image Optimization and Image CDN plugins (guide here) in WordPress and using Cloudflare (guide here)

Benchmarking with SysBench

Install CPU Benchmark

sudo apt-get install sysbench

CPU Benchmark (Vultr/Sydney)
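
The command was the same one used on the other hosts below:

sysbench --test=cpu --cpu-max-prime=20000 run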

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          39.1700s
    total number of events:              10000
    total time taken by event execution: 39.1586
    per-request statistics:
         min:                                  2.90ms
         avg:                                  3.92ms
         max:                                 20.44ms
         approx.  95 percentile:               7.43ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   39.1586/0.00

39.15 seconds

CPU Benchmark (Digital Ocean/London)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          33.4382s
    total number of events:              10000
    total time taken by event execution: 33.4352
    per-request statistics:
         min:                                  3.24ms
         avg:                                  3.34ms
         max:                                  6.45ms
         approx.  95 percentile:               3.45ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   33.4352/0.00

33.43 sec

CPU Benchmark (UpCloud/Singapore)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1



Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          23.7809s
    total number of events:              10000
    total time taken by event execution: 23.7780
    per-request statistics:
         min:                                  2.35ms
         avg:                                  2.38ms
         max:                                  6.92ms
         approx.  95 percentile:               2.46ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   23.7780/0.00

23.77 sec

Surprisingly, 1st place in prime generation goes to UpCloud, then Digital Ocean then Vultr.  UpCloud has some good processors.

Processors:

  • UpCloud (Singapore): Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz
  • Digital Ocean (London): Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
  • Vultr (Sydney): Virtual CPU a7769a6388d5 (Masked/Hidden CPU @ 2.40GHz)

(Lower is better)

prime benchmark results

(oops, typo in the chart should say Vultr)

Benchmark the file IO

Confirm free space

df -h /

Install Sysbench

sudo apt-get install sysbench

I had 10GB free on all servers (Vultr, Digital Ocean and UpCloud) so I created a 10GB test file.

sysbench --test=fileio --file-total-size=10G prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

128 files, 81920Kb each, 10240Mb total
Creating files for the test...

Now I can run the benchmark using the pre-created test files.

sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run

SysBench description from the Ubuntu manpage.

“SysBench is a modular, cross-platform and multi-threaded benchmark tool for evaluating OS parameters that are important for a system running a database under intensive load. The idea of this benchmark suite is to quickly get an impression about system performance without setting up complex database benchmarks or even without installing a database at all.”

SysBench Results (Vultr/Sydney)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  385920 Read, 257280 Write, 823266 Other = 1466466 Total
Read 5.8887Gb  Written 3.9258Gb  Total transferred 9.8145Gb  (33.5Mb/sec)
 2143.98 Requests/sec executed

Test execution summary:
    total time:                          300.0026s
    total number of events:              643200
    total time taken by event execution: 182.4249
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.28ms
         max:                                 18.12ms
         approx.  95 percentile:               0.55ms

Threads fairness:
    events (avg/stddev):           643200.0000/0.00
    execution time (avg/stddev):   182.4249/0.00

SysBench Results (Digital Ocean/London)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  944280 Read, 629520 Write, 2014432 Other = 3588232 Total
Read 14.409Gb  Written 9.6057Gb  Total transferred 24.014Gb  (81.968Mb/sec)
 5245.96 Requests/sec executed

Test execution summary:
    total time:                          300.0024s
    total number of events:              1573800
    total time taken by event execution: 160.5558
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.10ms
         max:                                 18.62ms
         approx.  95 percentile:               0.34ms

Threads fairness:
    events (avg/stddev):           1573800.0000/0.00
    execution time (avg/stddev):   160.5558/0.00

SysBench Results (UpCloud/Singapore)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  994320 Read, 662880 Write, 2121090 Other = 3778290 Total
Read 15.172Gb  Written 10.115Gb  Total transferred 25.287Gb  (86.312Mb/sec)
 5523.97 Requests/sec executed

Test execution summary:
    total time:                          300.0016s
    total number of events:              1657200
    total time taken by event execution: 107.4434
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.06ms
         max:                                 15.43ms
         approx.  95 percentile:               0.13ms

Threads fairness:
    events (avg/stddev):           1657200.0000/0.00
    execution time (avg/stddev):   107.4434/0.00

Comparison

Sysbench Results table

sysbench fileio results (text)

Read

  • Vultr (Sydney): 385,920
  • Digital Ocean (London): 944,280
  • UpCloud (Singapore): 994,320

Write

  • Vultr (Sydney): 257,280
  • Digital Ocean (London): 629,520
  • UpCloud (Singapore): 662,880

Other

  • Vultr (Sydney): 823,266
  • Digital Ocean (London): 2,014,432
  • UpCloud (Singapore): 2,121,090

Total Read Gb

  • Vultr (Sydney): 5.8887 Gb
  • Digital Ocean (London): 14.409 Gb
  • UpCloud (Singapore): 15.172 Gb

Total Written Gb

  • Vultr (Sydney): 3.9258 Gb
  • Digital Ocean (London): 9.6057 Gb
  • UpCloud (Singapore): 10.115 Gb

Total Transferred Gb

  • Vultr (Sydney): 9.8145 Gb
  • Digital Ocean (London): 24.014 Gb
  • UpCloud (Singapore): 25.287 Gb

Now I can remove the fileio benchmark test files.

sysbench --test=fileio --file-total-size=10G cleanup
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Removing test files...

Confirm the test file has been deleted

df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G   16G   23G  41% /

Bonus: Benchmark MySQL (on my main server (Vultr), not on Digital Ocean and UpCloud)

I tried to run a command

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=#################################### prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1049: Unknown database 'test'
FATAL: failed to connect to database server!

To fix the error I created a test database with Adminer (guide here).

Create Test Database
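
If you prefer the command line, the equivalent is (using the same root credentials as the sysbench command):

mysql -u root -p -e "CREATE DATABASE test;"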

< Previous – Next >

Read Part 1, Part 2, Part 3 or Part 4

Filed Under: CDN, Cloud, Cloudflare, Digital Ocean, disk, ExactDN, Hosting, Performance, PHP, php72, Scalability, Scalable, Server, Speed, Storage, Ubuntu, UI, UpCloud, VM, Vultr Tagged With: and, can, comparing, Concurrent, cpu, digital ocean, Disk, etc, How, Latency, measure, on, Performance, ubuntu, UpCloud - Part 3 of 4, Users, vm, vultr, you

Setting up a website to use Cloudflare on a VM hosted on Vultr and Namecheap

March 13, 2018 by Simon

This guide will show how you can set up a website to use Cloudflare on a VM hosted on Vultr and Namecheap

I have a number of guides on moving hosting away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, along with installing and managing WordPress from the command line. This post will show how to let Cloudflare handle the DNS for your domain.

Update 2018: For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

Snip from here “Cloudflare’s enterprise-class web application firewall (WAF) protects your Internet property from common vulnerabilities like SQL injection attacks, cross-site scripting, and cross-site forgery requests with no changes to your existing infrastructure.”

Buy a Domain 

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Cloudflare Benefits (Free Plan)

  • DDoS Attack Protection (a huge network to absorb attacks: “DDoS attacks over 600Gbps are no problem for our 15 Tbps network”)
  • Global CDN
  • Shared SSL certificate (I disabled this and opted to use my own)
  • Access to audit logs
  • 3 page rules (maximum)

View paid plan options here.

Cloudflare CDN map

Cloudflare says its CDN can load assets up to 2x faster, with 60% less bandwidth from your servers, by delivering assets from 127 data centres.

Cloudflare Global Network

Setup

You will need to sign up at cloudflare.com

Cloudflare

After you create an account you will be prompted to add a site.

Add Site

Cloudflare will pull your public DNS records to import.

Query DNS

You will be prompted to select a plan (I selected free)

Plan Select

Verify DNS settings to import.

DNS Import

You will now be asked to change your DNS nameservers with your domain reseller

DNS Nameservers

TIP: If you already have an SSL cert set up (e.g. Let’s Encrypt), head to the Crypto section and select “Full (Strict)” to prevent ERR_TOO_MANY_REDIRECTS errors.

Strict SSL

Cloudflare UI

I asked Twitter if they could kindly load my site so I could see if Cloudflare dashboard/stats were loading.

Could I kindly ask if you are reading this that you visit https://t.co/9x5TFARLCt, I am writing a @Cloudflare blog post and need to screenshot stats. Thanks in advance

— Simon Fearby (Developer) (@FearbySoftware) March 13, 2018

The Cloudflare CTO responded.  🙂

Sure thing 🙂

— John Graham-Cumming (@jgrahamc) March 13, 2018

Confirm the Cloudflare link to a domain from the OSX command line

host -t NS fearby.com
fearby.com name server dane.ns.cloudflare.com.
fearby.com name server nora.ns.cloudflare.com.

Caching Rule

I set up the following caching rules to cache everything for 8 hours, except WordPress admin pages:

Page Rules

“fearby.com/wp-*” Cache level: Bypass

“fearby.com/wp-admin/post.php*” Cache level: Bypass

“fearby.com/*” Cache Everything, Edge Cache TTL: 8 Hours

Cache Results

The cache hit rate appears to be sitting at 50% after 12 hours. Having cached dynamic pages out there is OK unless I need to fix a typo; then I need to log in to Cloudflare and clear the cache manually (or wait 8 hours).

Performance after a few hours

DNS times in GTmetrix have now fallen to sub-200ms (YSlow is now a respectable A; it was a C before). I just need to wait for caching and minification to kick in.

DNS Improved

webpagetest.org results are awesome

See here: https://www.webpagetest.org/result/180314_PB_7660dfbe65d56b94a60d7a604ca250b3/

  • Load Time: 1.80s
  • First Byte 0.176s
  • Start Render 1.200s

webpagetest

Google Page Speed Insights Report

Mobile: 78/100

Desktop: 87/100

Check with https://developers.google.com/speed/pagespeed/insights/

Update 24th March 2018 Attacked?

I noticed a spike in traffic (incoming requests and threats) on the 24th of March 2018.

I logged into Cloudflare on my mobile device and turned on Under Attack Mode.

Under Attack Flow

Cloudflare was now adding a delay screen in the middle of my initial page load. Read more here. A few hours after the attack started, it was over.

After the Attack

I looked at the bandwidth and found no increase in traffic from my initial host VM. Nice.

cloudflare-attack-001

Thanks, Cloudflare.

Cloudflare Pros

  • Enabling Attack mode was simple.
  • Soaked up an attack.
  • Free tier.
  • Many reports.
  • Option to force HTTPS over HTTP.
  • Option to ban/challenge suspicious IPs and set challenge timeframes.
  • Ability to set up IP firewall rules and application firewalls.
  • User-agent blocking.
  • Lock down URLs to IPs (pro feature).
  • Option to minify JavaScript, CSS and HTML.
  • Option to accelerate mobile links.
  • Brotli compression on assets served.
  • Option to enable the beta Rocket Loader for JavaScript performance tweaks.
  • Run JavaScript service workers from the 120+ CDN data centres.
  • Page/URL rules to perform custom actions (redirects, skip cache, encryption etc.).
  • HTTP/2 on, IPV6 on.
  • Option to set up load balancing/failover.
  • The CTO of Cloudflare responded on Twitter 🙂
  • Option to enable rate limiting (charged at $0.05 per 10,000 hits).
  • Option to block countries (pro feature).
  • Option to install apps in Cloudflare (like Google Analytics etc.).

Cloudflare Cons

  • No more logging into NameCheap to perform DNS management (I now go to Cloudflare; NameCheap are awesome).
  • Cloudflare support was slow/confusing (I ended up figuring out the redirect problem myself).
  • Some sort of “verify Cloudflare setup/DNS/CDN access” check would be nice. After I set this up, my GTmetrix load times were the same and I was not sure if the DNS needed to replicate; changing minify settings in Cloudflare did not seem to take effect.
  • WordPress draft posts are being cached even though page rules block wp-admin page caching.
  • Would be nice to have an automatic Under Attack mode.
  • Not all sub-domains were transferred in the setup (I did not know for weeks).

Cloudflare status

Check out https://www.cloudflarestatus.com/ for status updates.

Don’t forget to install the CloudFlare Plugin for WordPress if you use WordPress.

More Reading

Check out my OWASP Zap and Kali Linux self-application Penetration testing posts.

I hope this guide helps someone.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.8 host Command from the OSX CLI

v1.7 Subdomain error

v1.6 Cloudflare Attack

v1.5 WordPress Plugin

v1.4 More Reading

v1.3 added WAF snip

v1.2 Added Google Page Speed Insights and webpage rest results

v1.1 Added Y-Slow

v1.0 Initial post

Filed Under: Analytics, App, Cache, CDN, Cloud, Cloudflare, DNS, Domain, Hosting, LetsEncrypt, Marketing, Secure, Security, SEO, Server, VM, Vultr, Website, Wordpress Tagged With: a, and, Cloudflare, hosted, namecheap, on, Setting, to, up, use, vm, vultr, website

Securing Ubuntu in the cloud

August 9, 2017 by Simon

It is easy to deploy servers to the cloud; within a few minutes you can have a cloud-based server that you (or others) can use. Ubuntu has a great guide on setting up basic security, but what do you need to do?

If you do not secure your server, expect it to be hacked into. Below are tips on securing your cloud server.

First, read more on scanning your server with Lynis security scan.

Always use up to date software

Always use up-to-date software; malicious users can detect what software you use with sites like shodan.io (or port scan tools) and then look for weaknesses in well-published lists (e.g. WordPress, Windows, MySQL, Node, Liferay, Oracle etc.). People can even use Google to search for login pages or sites with passwords in HTML (yes, it is that simple). Once a system is identified by a malicious user, they can send automated bots to break into your site (trying millions of passwords a day) or use tools to bypass existing defences (security researcher Troy Hunt found out it’s child’s play).

Portscan sites like https://mxtoolbox.com/SuperTool.aspx?action=scan are good for knowing what you have exposed.

You can also use local programs like nmap to view open ports

Install nmap

sudo apt-get install nmap

Find open ports

nmap -v -sT localhost

Starting Nmap 7.01 ( https://nmap.org ) at 2017-08-08 23:57 AEST
Initiating Connect Scan at 23:57
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 9101/tcp on 127.0.0.1
Discovered open port 9102/tcp on 127.0.0.1
Discovered open port 9103/tcp on 127.0.0.1
Completed Connect Scan at 23:57, 0.05s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00020s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
3306/tcp open  mysql
9101/tcp open  jetdirect
9102/tcp open  jetdirect
9103/tcp open  jetdirect

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

Limit ssh connections

Read more here.

Use ufw to set limits on login attempts

sudo ufw limit ssh comment 'Rate limit hit for openssh server'

Only allow known IPs access to your valuable ports

sudo ufw allow from 123.123.123.123/32 to any port 22

Delete unwanted firewall rules

sudo ufw status numbered
sudo ufw delete 8

Only allow known IPs to certain ports

sudo ufw allow from 123.123.123.123 to any port 80/tcp

Also, set outgoing traffic to known active servers and ports

sudo ufw allow out from 123.123.123.123 to any port 22

Don’t use a weak/common Diffie-Hellman key for SSL certificates; more information here.

openssl req -new -newkey rsa:4096 -nodes -keyout server.key -out server.csr
 
Generating a 4096 bit RSA private key
...

More info on generating SSL certs here and setting up Public Key Pinning here.

Intrusion Prevention Software

Do run fail2ban: Guide here https://www.linode.com/docs/security/using-fail2ban-for-security

I use iThemes Security to secure my WordPress and block repeat failed logins from certain IP addresses.

iThemes Security can even lock down your WordPress.

You can set iThemes to auto lock out users on x failed logins

Remember to use whitelists of allowed IPs though (it is so easy to lock yourself out of servers).

Passwords

Do have strong passwords and change the root password provided by the host. https://howsecureismypassword.net/ is a good site to see how strong your password is against brute-force attempts. https://www.grc.com/passwords.htm is a good site to obtain a strong password. Do follow Troy Hunt’s blog and Twitter account to keep up to date with security issues.

Configure a Firewall Basics

You should install a firewall on your Ubuntu server and configure it, and also configure a firewall with your host (e.g. AWS, Vultr, Digital Ocean).

Configure a Firewall on AWS

My AWS server setup guide here. AWS allow you to configure the firewall here in the Amazon Console.

Type Protocol Port Range Source Comment
HTTP TCP 80 0.0.0.0/0 Opens a web server port for later
All ICMP ALL N/A 0.0.0.0/0 Allows you to ping
All traffic ALL All 0.0.0.0/0 Not advisable long term but OK for testing today.
SSH TCP 22 0.0.0.0/0 Not advisable, try and limit this to known IP’s only.
HTTPS TCP 443 0.0.0.0/0 Opens a secure web server port for later

Configure a Firewall on Digital Ocean

Configuring a firewall on Digital Ocean (create a $5/m server here).  You can configure your Digital Ocean droplet firewall by clicking Droplet, Networking then Manage Firewall after logging into Digital Ocean.

Configure a Firewall on Vultr

Configuring a firewall on Vultr (create a $2.5/m server here).

Don’t forget to set IP rules for IPV4 and IPV6. Only open the ports you need to allow, and ensure applications have strong passwords.

Ubuntu has a firewall built in (documentation).

sudo ufw status

Enable the firewall

sudo ufw enable

Adding common ports

sudo ufw allow ssh/tcp
sudo ufw logging on
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 53
sudo ufw allow 443
sudo ufw allow 873
sudo ufw enable
sudo ufw status
sudo ufw allow http
sudo ufw allow https

Add a whitelist for your IP (use http://icanhazip.com/ to get your IP) to ensure you won’t get kicked out of your server.

sudo ufw allow from 123.123.123.123/24 to any port 22

More help here.  Here is a  good guide on ufw commands. Info on port numbers here.

https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers

If you don’t have a Digital Ocean server ($5 a month), click here; for a $2.50-a-month Vultr server, click here.

Backups

rsync is a good way to copy files to another server, or use Bacula.

sudo apt install bacula
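
e.g. a minimal rsync sketch (the paths and destination host are placeholders):

rsync -avz /var/www/ backupuser@backup.example.com:/backups/www/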

Basics

Initial server setup guide (Digital Ocean).

Sudo (admin user)

Read this guide on the Linux sudo command (the equivalent if run as administrator on Windows).

Users

List users on an Ubuntu OS (or compgen -u)

cut -d: -f1 /etc/passwd

Common output

cut -d: -f1 /etc/passwd
root
daemon
bin
sys
sync
games
man
lp
mail
news
uucp
proxy
www-data
backup
list
irc
gnats
nobody
systemd-timesync
systemd-network
systemd-resolve
systemd-bus-proxy
syslog
_apt
lxd
messagebus
uuidd
dnsmasq
sshd
pollinate
ntp
mysql
clamav

Add User

sudo adduser new_username

e.g

sudo adduser bob
Adding user `bob' ...
Adding new group `bob' (1000) ...
Adding new user `bob' (1000) with group `bob' ...
Creating home directory `/home/bob' ...
etc..

Add user to a group

sudo usermod -a -G MyGroup bob

Show users in a group

getent group MyGroup | awk -F: '{print $4}'

This will show users in a group

Remove a user

sudo userdel username
sudo rm -r /home/username

Rename user

usermod -l new_username old_username

Change user password

sudo passwd username

Groups

Show all groups

compgen -g

Common output

compgen -g
root
daemon
bin
sys
adm
tty
disk
lp
mail
proxy
sudo
www-data
backup
irc
etc

You can create your own groups but first, you must be aware of group ids

cat /etc/group

Then you can see your systems groups and ids.

Create a group

groupadd -g 999 MyGroup

Permissions

Read this https://help.ubuntu.com/community/FilePermissions

How to list users on Ubuntu.

Read more on setting permissions here.

Chmod help can be found here.

Install Fail2Ban

I used this guide on installing Fail2Ban.

apt-get install fail2ban

Check Fail2Ban often and add blocks to the firewall of known bad IPs

fail2ban-client status

Best practices

Ubuntu has a guide on basic security setup here.

Startup Processes

It is a good idea to review startup processes from time to time.

sudo apt-get install rcconf
sudo rcconf

Accounts

  • Read up on the concept of least privilege access for apps and services here.
  • Read up on chmod permissions.

Updates

Do update your operating system often.

sudo apt-get update
sudo apt-get upgrade

Minimal software

Only install what software you need

Exploits and Keeping up to date

Do keep up to date with exploits and vulnerabilities

  • Follow 0xDUDE on twitter.
  • Read the GDI.Foundation page.
  • Visit the Exploit Database
  • Vulnerability & Exploit Database
  • Subscribe to the Security Now podcast.

Secure your applications

  • NodeJS: Enable logging in applications you install or develop.

Ban repeat login attempts with Fail2Ban

Fail2Ban config

sudo nano /etc/fail2ban/jail.conf
[sshd]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3

Hosts File Hardening

sudo nano /etc/host.conf

Add

order bind,hosts
nospoof on

Add a whitelist with your ip on /etc/fail2ban/jail.conf (see this)

[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not                          
# ban a host which matches an address in this list. Several addresses can be                             
# defined using space separator.
                                                                         
ignoreip = 127.0.0.1 192.168.1.0/24 8.8.8.8

Restart the service

sudo service fail2ban restart
sudo service fail2ban status

Intrusion detection (logging) systems

Tripwire will not block or prevent intrusions, but it will log and give you a heads-up on risks and things of concern.

Install Tiger (a security auditing tool) and Tripwire.

sudo apt-get install tiger tripwire

Running Tiger

sudo tiger

This will scan your system for issues of note

sudo tiger
Tiger UN*X security checking system
   Developed by Texas A&M University, 1994
   Updated by the Advanced Research Corporation, 1999-2002
   Further updated by Javier Fernandez-Sanguino, 2001-2015
   Contributions by Francisco Manuel Garcia Claramonte, 2009-2010
   Covered by the GNU General Public License (GPL)

Configuring...

Will try to check using config for 'x86_64' running Linux 4.4.0-89-generic...
--CONFIG-- [con005c] Using configuration files for Linux 4.4.0-89-generic. Using
           configuration files for generic Linux 4.
Tiger security scripts *** 3.2.3, 2008.09.10.09.30 ***
20:42> Beginning security report for simon.
20:42> Starting file systems scans in background...
20:42> Checking password files...
20:42> Checking group files...
20:42> Checking user accounts...
20:42> Checking .rhosts files...
20:42> Checking .netrc files...
20:42> Checking ttytab, securetty, and login configuration files...
20:42> Checking PATH settings...
20:42> Checking anonymous ftp setup...
20:42> Checking mail aliases...
20:42> Checking cron entries...
20:42> Checking 'services' configuration...
20:42> Checking NFS export entries...
20:42> Checking permissions and ownership of system files...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Checking for indications of break-in...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Performing rootkit checks...
20:42> Performing system specific checks...
20:46> Performing root directory checks...
20:46> Checking for secure backup devices...
20:46> Checking for the presence of log files...
20:46> Checking for the setting of user's umask...
20:46> Checking for listening processes...
20:46> Checking SSHD's configuration...
20:46> Checking the printers control file...
20:46> Checking ftpusers configuration...
20:46> Checking NTP configuration...
20:46> Waiting for filesystems scans to complete...
20:46> Filesystems scans completed...
20:46> Performing check of embedded pathnames...
20:47> Security report completed for simon.
Security report is in `/var/log/tiger/security.report.simon.170809-20:42'.

My Output.

sudo nano /var/log/tiger/security.report.username.170809-18:42

Security scripts *** 3.2.3, 2008.09.10.09.30 ***
Wed Aug  9 18:42:24 AEST 2017
20:42> Beginning security report for username (x86_64 Linux 4.4.0-89-generic).

# Performing check of passwd files...
# Checking entries from /etc/passwd.
--WARN-- [pass014w] Login (bob) is disabled, but has a valid shell.
--WARN-- [pass014w] Login (root) is disabled, but has a valid shell.
--WARN-- [pass015w] Login ID sync does not have a valid shell (/bin/sync).
--WARN-- [pass012w] Home directory /nonexistent exists multiple times (3) in
         /etc/passwd.
--WARN-- [pass012w] Home directory /run/systemd exists multiple times (2) in
         /etc/passwd.
--WARN-- [pass006w] Integrity of password files questionable (/usr/sbin/pwck
         -r).

# Performing check of group files...

# Performing check of user accounts...
# Checking accounts from /etc/passwd.
--WARN-- [acc021w] Login ID dnsmasq appears to be a dormant account.
--WARN-- [acc022w] Login ID nobody home directory (/nonexistent) is not
         accessible.

# Performing check of /etc/hosts.equiv and .rhosts files...

# Checking accounts from /etc/passwd...

# Performing check of .netrc files...

# Checking accounts from /etc/passwd...

# Performing common access checks for root (in /etc/default/login, /securetty, and /etc/ttytab...
--WARN-- [root001w] Remote root login allowed in /etc/ssh/sshd_config

# Performing check of PATH components...
--WARN-- [path009w] /etc/profile does not export an initial setting for PATH.
# Only checking user 'root'

# Performing check of anonymous FTP...

# Performing checks of mail aliases...
# Checking aliases from /etc/aliases.

# Performing check of `cron' entries...
--WARN-- [cron005w] Use of cron is not restricted

# Performing check of 'services' ...
# Checking services from /etc/services.
--WARN-- [inet003w] The port for service ssmtp is also assigned to service
         urd.
--WARN-- [inet003w] The port for service pipe-server is also assigned to
         service search.

# Performing NFS exports check...

# Performing check of system file permissions...
--ALERT-- [perm023a] /bin/su is setuid to `root'.
--ALERT-- [perm023a] /usr/bin/at is setuid to `daemon'.
--ALERT-- [perm024a] /usr/bin/at is setgid to `daemon'.
--WARN-- [perm001w] The owner of /usr/bin/at should be root (owned by daemon).
--WARN-- [perm002w] The group owner of /usr/bin/at should be root.
--ALERT-- [perm023a] /usr/bin/passwd is setuid to `root'.
--ALERT-- [perm024a] /usr/bin/wall is setgid to `tty'.

# Checking for known intrusion signs...
# Testing for promiscuous interfaces with /bin/ip
# Testing for backdoors in inetd.conf

# Performing check of files in system mail spool...

# Performing check for rookits...
# Running chkrootkit (/usr/sbin/chkrootkit) to perform further checks...
--WARN-- [rootkit004w] Chkrootkit has detected a possible rootkit installation
Possible Linux/Ebury - Operation Windigo installetd

# Performing system specific checks...
# Performing checks for Linux/4...

# Checking boot loader file permissions...
--WARN-- [boot02] The configuration file /boot/grub/menu.lst has group
         permissions. Should be 0600
--FAIL-- [boot02] The configuration file /boot/grub/menu.lst has world
         permissions. Should be 0600
--WARN-- [boot06] The Grub bootloader does not have a password configured.

# Checking for vulnerabilities in inittab configuration...

# Checking for correct umask settings for init scripts...
--WARN-- [misc021w] There are no umask entries in /etc/init.d/rcS

# Checking Logins not used on the system ...

# Checking network configuration
--FAIL-- [lin013f] The system is not protected against Syn flooding attacks
--WARN-- [lin017w] The system is not configured to log suspicious (martian)
         packets

# Verifying system specific password checks...

# Checking OS release...
--WARN-- [osv004w] Unreleased Debian GNU/Linux version `stretch/sid'

# Checking installed packages vs Debian Security Advisories...

# Checking md5sums of installed files

# Checking installed files against packages...
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/udev/hwdb.bin' does not belong to any package.

# Performing check of root directory...

# Checking device permissions...
--WARN-- [dev003w] The directory /dev/block resides in a device directory.
--WARN-- [dev003w] The directory /dev/char resides in a device directory.
--WARN-- [dev003w] The directory /dev/cpu resides in a device directory.
--FAIL-- [dev002f] /dev/fuse has world permissions
--WARN-- [dev003w] The directory /dev/hugepages resides in a device directory.
--FAIL-- [dev002f] /dev/kmsg has world permissions
--WARN-- [dev003w] The directory /dev/lightnvm resides in a device directory.
--WARN-- [dev003w] The directory /dev/mqueue resides in a device directory.
--FAIL-- [dev002f] /dev/rfkill has world permissions
--WARN-- [dev003w] The directory /dev/vfio resides in a device directory.

# Checking for existence of log files...
--FAIL-- [logf005f] Log file /var/log/btmp permission should be 660
--FAIL-- [logf007f] Log file /var/log/messages does not exist

# Checking for correct umask settings for user login shells...
--WARN-- [misc021w] There is no umask definition for the dash shell
--WARN-- [misc021w] There is no umask definition for the bash shell

# Checking symbolic links...

# Performing check of embedded pathnames...
20:47> Security report completed for username.

More on Tripwire here.

Hardening PHP

Hardening the PHP config (and backing it up): first create an info.php file in your website root folder with this content

<?php
phpinfo();
?>

Now look for which PHP config file is loading.

PHP Config

Back up your PHP config file.
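
e.g. (assuming the php.ini path reported by phpinfo() is /etc/php/7.0/fpm/php.ini, matching the php7.0-fpm service used elsewhere in this post):

sudo cp /etc/php/7.0/fpm/php.ini ~/php.ini.backup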

TIP: Delete the file with phpinfo() in it as it is a security risk to leave it there.

TIP: Read the OWASP cheat sheet on using PHP securely here and securing php.ini here.

Some common security changes

file_uploads = On
expose_php = Off
error_reporting = E_ALL
display_errors          = Off
display_startup_errors  = Off
log_errors              = On
error_log = /php_errors.log
ignore_repeated_errors  = Off

Don’t forget to review logs; more config changes here.

Antivirus

Yes, it is a good idea to run antivirus in Ubuntu, here is a good list of antivirus software

I am installing ClamAV as it can be installed on the command line and is open source.

sudo apt-get install clamav

ClamAV help here.

Scan a folder

sudo clamscan --max-filesize=3999M --max-scansize=3999M --exclude-dir=/www/* -i -r /

Setup auto-update antivirus definitions

sudo dpkg-reconfigure clamav-freshclam

I set auto updates 24 times a day (every hour) via daemon updates.

Tip: Download antivirus definition updates manually. If you only have a 512MB server, your update may fail, and you may want to stop freshclam/php/nginx and mysql before you update to ensure the antivirus definitions update. You can move this to a cron job and set it to update at set times (instead of the daemon) to ensure updates happen.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v
Current working dir is /var/lib/clamav
Max retries == 5
ClamAV update process started at Tue Aug  8 22:22:02 2017
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1152
Software version from DNS: 0.99.2
Retrieving http://db.au.clamav.net/main.cvd
Trying to download http://db.au.clamav.net/main.cvd (IP: 193.1.193.64)
Downloading main.cvd [100%]
Loading signatures from main.cvd
Properly loaded 4566249 signatures from new main.cvd
main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Querying main.58.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/daily.cvd
Trying to download http://db.au.clamav.net/daily.cvd (IP: 193.1.193.64)
Downloading daily.cvd [100%]
Loading signatures from daily.cvd
Properly loaded 1742284 signatures from new daily.cvd
daily.cvd updated (version: 23644, sigs: 1742284, f-level: 63, builder: neo)
Querying daily.23644.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/bytecode.cvd
Trying to download http://db.au.clamav.net/bytecode.cvd (IP: 193.1.193.64)
Downloading bytecode.cvd [100%]
Loading signatures from bytecode.cvd
Properly loaded 66 signatures from new bytecode.cvd
bytecode.cvd updated (version: 308, sigs: 66, f-level: 63, builder: anvilleg)
Querying bytecode.308.82.1.0.C101C140.ping.clamav.net
Database updated (6308599 signatures) from db.au.clamav.net (IP: 193.1.193.64)

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart 

sudo /etc/init.d/clamav-freshclam start

Manual scan with a bash script

Create a bash script

mkdir /scripts
sudo nano /scripts/updateandscanav.sh

# Include contents below.
# Save and quit

chmod +x /scripts/updateandscanav.sh

Bash script contents to update antivirus definitions.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart

sudo /etc/init.d/clamav-freshclam start

sudo clamscan --max-filesize=3999M --max-scansize=3999M -v -r /

Edit the crontab to run the script every hour

crontab -e
1 * * * * /bin/bash /scripts/updateandscanav.sh > /dev/null 2>&1
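
If you would rather keep the scan output for review than discard it, log it to a file instead (the log path here is just an example):

1 * * * * /bin/bash /scripts/updateandscanav.sh >> /var/log/updateandscanav.log 2>&1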

Uninstalling ClamAV

You may need to uninstall ClamAV if you don’t have much memory or find the definition updates are too big.

sudo apt-get remove --auto-remove clamav
sudo apt-get purge --auto-remove clamav

Setup Unattended Ubuntu Security updates

sudo apt-get install unattended-upgrades
sudo unattended-upgrades -d
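
FYI the package drops a small config file that turns the periodic runs on; on Ubuntu it lives at /etc/apt/apt.conf.d/20auto-upgrades and should look something like this (a sketch; sudo dpkg-reconfigure unattended-upgrades can generate it for you):

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";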

At login, you should see something like:

0 updates are security updates.

Other

  • Read this awesome guide.
  • Install Fail2Ban
  • Do check your log files if you suspect suspicious activity.

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server


v1.92 added hardening a linux server link


Using phpservermonitor.org to check whether your websites and servers are up and running

July 30, 2017 by Simon

https://www.phpservermonitor.org/ – PHP Server Monitor is a script that checks whether your websites and servers are up and running. It comes with a web based user interface where you can manage your services and websites, and you can manage users for each server with a mobile number and email address.

Features

  • Monitor services and websites (see below).
  • Email, SMS and Pushover notifications.
  • View history graphs of uptime and latency.
  • User authentication with 2 levels (administrator and regular user).
  • Logs of connection errors, outgoing emails and text messages.
  • Easy cronjob implementation to automatically check your servers.

FYI you can setup an Ubuntu Vultr VM here (my guide here) or a Digital Ocean server here (my guide here) in minutes (and only be charged by the hour). Vultr VMs can be purchased from as low as $2.50 a month (NY location) and Digital Ocean from $5 a month.

Open Source

PHP Server Monitor is an open source project 🙂

https://github.com/phpservermon/phpservermon

Installation

FYI: Installation instructions are located here. More detailed install instructions can be found in the zip file under docs/install.rst.

Go to https://www.phpservermonitor.org/download/ and download the 2.4MB phpservermon-3.2.0.zip, then extract its contents.
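
If you prefer to fetch it on the server, something like this works (the exact download URL may differ; grab the current link from the download page):

wget https://www.phpservermonitor.org/download/phpservermon-3.2.0.zip
unzip phpservermon-3.2.0.zip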

Upload the files to your website.

Run the install script https://thesubdomain.thedomain.com/phpservermon-3.2.0/install.php then follow the prompts.

I have already set my time zone so I’ll ignore this warning.

If you want to change the time zone run this command.

sudo hwclock --show
sudo dpkg-reconfigure tzdata
sudo reboot
sudo hwclock --show

Then add the database details. I created the MySQL database and user using the Adminer utility.

I created a config.php as instructed.

<?php
define('PSM_DB_HOST', 'localhost');
define('PSM_DB_PORT', '3306');
define('PSM_DB_NAME', 'thedatabase');
define('PSM_DB_USER', 'thedatabaseuser');
define('PSM_DB_PASS', 'removed');
define('PSM_DB_PREFIX', 'psm_');
define('PSM_BASE_URL', 'https://thesubdomain.thedomain.com/phpservermon-3.2.0');
?>

Create an account.

Installation Success.

I logged into the PHP Server Monitor webpage that I just installed.

Configuration

I logged into PHP Server Monitor and configured a website to monitor (at /phpservermon-3.2.0/?&mod=server&action=edit).

I added this string to the HTML source of the webpages I want to monitor.

<!-- phpservermoncheckforthis -->

I added a few websites to monitor.

Other

Here are the other things you can monitor

Table of objects to monitor

Here is my table of objects to monitor.

Here is a table of my active servers being monitored (I am monitoring 3x web page content and IP pings).

One is failing because the page does not contain the string I defined 🙂

Integration with custom status pages

todo.

Configure SMS Alerts

The config screen has multiple SMS providers to choose from.


Configure Pushover Alerts

The config screens have links to create a pushover alerts app.


Automation

todo: Review crontab (see the sample cron entry below).
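
FYI the PHP Server Monitor docs describe a cron entry along these lines to run the checks automatically (the install path below is my assumption; status.cron.php ships in the cron folder of the zip):

*/15 * * * * /usr/bin/php /var/www/phpservermon-3.2.0/cron/status.cron.php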

Recent Features

  • SSL expiration checks

Conclusion

I am happy with the way PHP Server Monitor easily monitors my websites.


v1.3 added screenshots of SMS and pushover  (6:04pm 30th July 2017 AEST)


Setting up a Vultr VM and configuring it

July 29, 2017 by Simon

Below is my guide on setting up a Vultr VM and configuring it with a static IP, NGINX, MySQL, PHP and an SSL certificate.

I have blogged about setting up CentOS and Ubuntu servers on Digital Ocean before.  Digital Ocean does not have data centres in Australia and this kills scalability.  AWS is good but 4x the price of Vultr. I have also blogged about setting up an AWS server here. I tried to check out Alibaba Cloud but the verification process was broken, so I decided to check out Vultr.

Update (June 2018): I don’t use Vultr anymore, I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

UpCloud is way faster.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Setting up a Vultr Server

1) Go to http://www.vultr.com/?ref=7192231 and create your own server today.

2) Create an account at Vultr.

Vultr signup

3) Add a  Credit Card

Vultr add cc

4) Verify your email address, Check https://my.vultr.com/promo/ for promos.

5) Create your first instance (for me it was an Ubuntu 16.04 x64 Server, 2 CPU, 4GB RAM, 60GB SSD, 3,000GB Transfer server in Sydney for $20 a month). I enabled IPV6, Private Networking, and Sydney as the server location. Digital Ocean would only have offered 2GB RAM and 40GB SSD at this price.  AWS would have charged $80/w.

Vultr deploy vm

2 Cores and 4GB ram is what I am after (I will use it for NGINX, MySQL, PHP, MongoDB, OpCache and Redis etc).

Vultr 20 month

6) I followed this guide, generated a local SSH key and added it to Vultr.

snip

cd ~/.ssh
ls -al
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa): vultr_rsa    
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in vultr_rsa.
Your public key has been saved in vultr_rsa.pub.
cat vultr_rsa.pub 
ssh-rsa AAAAremovedoutput

Vultr add ssh key

7) I was a bit confused by the UI for adding the SSH key on the in-progress deploy server screen (the SSH key was added but was not highlighted, so I recreated the server deploy and the SSH key now appeared).

Vultr ass ssh key 2

Now time to deploy the server.

Vultr deploy now

Deploying now.

Vultr my servers

My Vultr server is now deployed.

Vultr server information

I connected to it with my SSH program on my Mac.

Vultr ssh

Now it is time to redirect my domain (purchased through Namecheap) to the new Vultr server IP.

DNS: @ A Name record at Namecheap

Vultr namecheap

Update: I forgot to add an A Name for www.

Vultr namecheap 2

DNS: Vultr (added the same @ and www A Name records (fyi “@” was replaced with “”)).

Vultr dns

I waited 60 minutes and DNS propagation happened. I used the site https://www.whatsmydns.net to see how far the DNS replication had spread while I was still receiving an error.

Setting the Server Time and Timezone (Ubuntu)

I checked the time on the server but it was wrong (20 hours behind).

sudo hwclock --show
Tue 25 Jul 2017 01:29:58 PM UTC  .420323 seconds

I manually set the timezone to Sydney Australia.

sudo dpkg-reconfigure tzdata

I installed the NTP time syncing service

sudo apt-get install ntp

I configured the NTP service to use Australian servers (read this guide).

sudo nano /etc/ntp.conf

# added
server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org

I checked the time after restarting NTP.

sudo service ntp restart
sudo hwclock --show

The time is correct 🙂

Installing the NGINX Webserver (Ubuntu)

More on the differences between Apache and NGINX web servers here.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

Installing NodeJS  (Ubuntu)

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

Installing MySQL  (Ubuntu)

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
>mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper
sudo mysql_secure_installation
>Y (Validate plugin)
>2 (Strong passwords)
>N (Don't change root password)
>Y (Remove anon accounts)
>Y (No remote root login)
>Y (Remove test DB)
>Y (Reload)
service mysql status
> mysql.service - MySQL Community Server

Install PHP 7.x and PHP7.0-FPM  (Ubuntu)

sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm

php.ini

sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
> edit: upload_max_filesize = 8M
> edit: max_input_vars = 1000
> edit: memory_limit = 128M
# medium server: memory_limit = 256M
# large server: memory_limit = 512M

Restart PHP

sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

sudo apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v
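
To confirm the helper modules actually loaded, you can list the compiled-in modules and grep for the ones you need (curl and mbstring here are just examples):

php -m | grep -i -E 'curl|mbstring|gd'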

Initial NGINX Configuration – Pre-SSL and Security (Ubuntu)

Here is a good guide on setting up NGINX for performance.

mkdir /www

Edit the NGINX configuration

sudo nano /etc/nginx/nginx.conf

File Contents: /etc/nginx/nginx.conf

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
        worker_connections 4000;
        use epoll;
        multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it's better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;


        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session (zone used by the limit_req rule above; copied from the matching config later in this post)
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

       # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # header buffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;


        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

       # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;


        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File Contents: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
 
server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6
 
        access_log /var/log/nginx/myservername.com.log;
 
        root /usr/share/nginx/www;
        index index.php index.html index.htm;
 
        server_name www.myservername.com myservername.com localhost;
 
        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
 
        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;
 
 
        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://myservername.com/$1 permanent;
 
        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 512M";
 
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

I talked to Dmitriy Kovtun (SSL CS) on the Namecheap chat to resolve a privacy error (I stuffed up and I was getting the errors “Your connection is not private” and “NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN”).

Vultr chrome privacy

SSL checker says everything is fine.

Vultr ssl checker

I checked the certificate strength with SSL Labs (OK).

Vultr ssl labs

Test and Reload NGINX (Ubuntu)

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Create a test PHP file

<?php
phpinfo();
?>

It Works.

Install Utils (Ubuntu)

Install an interactive folder size program

sudo apt-get install ncdu
sudo ncdu /

Vultr ncdu

Install a better disk check utility

sudo apt-get install pydf
pydf

Vultr pydf

Display startup processes

sudo apt-get install rcconf
sudo rcconf

Install JSON helper

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Increase the console history (e.g. add these lines to your ~/.bashrc):

HISTSIZE=10000
HISTCONTROL=ignoredups

I rebooted to see if PHP started up.

sudo reboot

OpenSSL Info (Ubuntu)

Read about updating OpenSSL here.

Update Ubuntu

sudo apt-get update
sudo apt-get dist-upgrade

Vultr Firewall

I configured the server firewall at Vultr and ensured it was setup by clicking my server, then settings then firewall.

Vultr firewall

I then checked open ports with https://mxtoolbox.com/SuperTool.aspx
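
You can also cross-check the open ports locally; netstat (from the net-tools package) or ss will list listening services:

sudo netstat -tulpn
# or
sudo ss -tulpn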

Assign a Domain (Vultr)

I assigned a  domain with my VM at https://my.vultr.com/dns/add/

Vultr add domain

Charts

I reviewed the server information at Vultr (nice).

Vultr charts

Static IP’s

You should also setup a static IP in /etc/network/interfaces as mentioned in the settings for your server https://my.vultr.com/subs/netconfig/?SUBID=XXXXXX

Hello,

Thank you for contacting us.

Please try setting your OS's network interface configuration for static IP assignments in this case. The blue "network configuration examples" link on the "Settings" tab includes the necessary file paths and configurations. This configuration change can be made via the provided web console.

Setting your instance's IP to static will prevent any issues that your chosen OS might have with DHCP lease failure. Any instance with additional IPs or private networking enabled will require static addresses on all interfaces as well. 

--
xxxxx xxxxxx
Systems Administrator
Vultr LLC

Backup your existing Ubuntu 16.04 DHCP Network Configuration

cp /etc/network/interfaces /interfaces.bak

I would recommend you log a Vultr support ticket and get the right IPV4/IPV6 details to paste into /etc/network/interfaces while you can access your IP.

It is near impossible to configure a static IP when the server is refusing a DHCP IP address (this happened to me after 2 months).

If you don’t have time to set up a static IP you can roll with auto DHCP IP assignment, and when your server fails to get an IP you can manually run the following command (changing ens3 to your network adapter) from the root web console.

dhclient -1 -v ens3 

I logged a ticket for each of my other servers to get the contents of /etc/network/interfaces.

Support Ticket Contents:

What should the contents of /etc/network/interfaces be for xx.xx.xx.xx (Ubuntu: 16.04, Static)

Q1) What do I need to add to the /etc/network/interfaces file to set a static IP for server www.myservername.com/xx.xx.xx.xx/SUBID=XXXXXX

The server's IPV4 IP is: XX.XX.XX.XX
The server's IPV6 IP is: xx:xx:xx:xx:xx:xx:xx:xx (Network: xx:xx:xx:xx::, CIDR: 64, Recursive DNS: None)
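
While you wait for support, for reference a static Ubuntu 16.04 /etc/network/interfaces usually looks something like this (a sketch only; ens3 and all addresses are placeholders, substitute the values Vultr provides):

auto ens3
iface ens3 inet static
        address xx.xx.xx.xx
        netmask 255.255.254.0
        gateway xx.xx.xx.x
        dns-nameservers 8.8.8.8 8.8.4.4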

Install an FTP Server (Ubuntu)

I decided on Pure-FTPd based on this advice. I did try vsftpd but it failed. I used this guide to set up FTP and a user. I was able to login via FTP but decided to set up C9 instead, so I stopped the FTP service.

Connected to my Vultr domain with C9.io

I logged in to C9.io, created a new remote SSH connection to my Vultr server, then copied the C9 SSH key and added it to my Vultr server's authorized keys file.

sudo nano ~/.ssh/authorized_keys

I opened the site with C9 and it setup my environment.

I do love C9.io

Vultr c9

Add an SSL certificate (Reissue existing SSL cert at NameCheap)

I had a chat with Konstantin Detinich (SSL CS) on Namecheap’s chat and he helped me through reissuing my certificate.

I have a three-year certificate so I reissued it.  I will follow the Namecheap reissue guide here.

I recreated the certificate files:

cd /etc/nginx/
mkdir ssl
cd ssl
sudo openssl req -newkey rsa:2048 -nodes -keyout mydomain_com.key -out mydomain_com.csr
cat mydomain_com.csr

I posted the CSR into the Namecheap Reissue Certificate form.

Vultr ssl cert

Tip: Make sure your certificate is using the same name as the old certificate.

I continued the Namecheap prompts and specified HTTP domain control verification.

Namecheap Output: When you submit your info for this step, the activation process will begin and your SSL certificate will be available from your Domain List. To finalize the activation, you’ll need to complete the Domain Control Validation process. Please follow the instructions below.

Now I will wait for the verification instructions.

Update: I waited a few hours and the instructions never came, so I logged in to the NameCheap portal, downloaded the HTTP domain verification file and uploaded it to my domain.

Vultr ssl cert 2

I forgot to add the text file to the NGINX allowed index files list.

I added the file to the index line in /etc/nginx/sites-available/default:

index index.php index.html index.htm 53guidremovedE5.txt;

I reloaded and restarted NGINX

sudo nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart

The file now loaded over port 80. I then checked Namecheap chat (Alexandra Belyaninova) to speed up the HTTP Domain verification and they said the text file needs to be placed in /.well-known/pki-validation/ folder (not specified in the earlier steps).

http://mydomain.com/.well-known/pki-validation/53guidremovedE5.txt and http://www.mydomain.com/.well-known/pki-validation/53guidremovedE5.txt

The certificate reissue was all approved and available for download.

Comodo

I uploaded all files related to the ssl cert to /etc/nginx/ssl/ and read my guide here to refresh myself on what is next.

I ran this command in the folder /etc/nginx/ssl/ to generate a DH prime rather than downloading a nice new one from here.

openssl dhparam -out dhparams4096.pem 4096

This Namecheap guide will tell you how to activate a new certificate and how to generate a CSR file. Note: the guide will generate a 2048 bit key and this will cap your SSL certificate's security to a B at https://www.ssllabs.com/ssltest/ so I recommend you generate a 4096 bit CSR key and a 4096 bit Diffie-Hellman key.

I used https://certificatechain.io/ to generate a valid certificate chain.

My SSL /etc/nginx/sites-available/default config

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
	listen 80 default_server;
	listen [::]:80 default_server;

        error_log /www-error-log.txt;
        access_log /www-access-log.txt;
	
	listen 443 ssl;

	limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

	root /www;
        index index.php index.html index.htm;

	server_name www.thedomain.com thedomain.com localhost;

        # ssl on; (enabling this caused too many HTTP redirects)
        ssl_certificate /etc/nginx/ssl/trust-chain.crt;
        ssl_certificate_key /etc/nginx/ssl/thedomain_com.key;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        server_tokens off;
        ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        
        # Set SSL caching and storage/timeout values:
        ssl_session_timeout 4h;
        ssl_session_tickets off; # Requires nginx >= 1.5.9
        
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx => 1.3.7
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

	add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        add_header X-Content-Type-Options nosniff;
  
        # Use Google DNS
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://thedomain.com/$1 permanent;

	location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        location ~ /\.ht {
               deny all;
        }
	
}

My /etc/nginx/nginx.conf Config

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
	worker_connections 4000;
	use epoll;
	multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it's better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;

        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP 
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

        # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # header buffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;

        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s; 
        open_file_cache_valid 30s; 
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

        # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
	types_hash_max_size 2048;
	# server_tokens off;
	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_prefer_server_ciphers on;

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	
	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

Namecheap support checked my certificate with https://decoder.link/sslchecker/ (no errors). Other SSL checkers are https://certlogik.com/ssl-checker/ and https://sslanalyzer.comodoca.com/

I was given a new certificate to try by Namecheap.

Namecheap Chat (Dmitriy) also recommended I clear my google cache as they did not see errors on their side (this worked).

SSL Security

Read my past guide on adding SSL to a Digital Ocean server.

I am checking my site with https://www.ssllabs.com/ssltest/ (OK).

My site came up clean with shodan.io

Securing Ubuntu in the Cloud

Read my guide here.

OpenSSL Version

I checked the OpenSSL version to see if it was up to date.

openssl version
OpenSSL 1.1.0f  25 May 2017

Yep, all up to date https://www.openssl.org/

I will check often.

Install MySQL GUI

I installed the Adminer MySQL GUI tool (uploaded it to my website).

Don’t forget to check your server's IP with www.shodan.io to ensure there are no back doors.

I had to increase PHP's upload_max_filesize temporarily to allow me to restore a database backup.  I edited the php.ini file in /etc/php/7.0/fpm/php.ini and then reloaded PHP.

sudo service php7.0-fpm restart

I used Adminer to restore a database.

Support

I found the email support at Vultr was great; I had an email reply in minutes. The Namecheap chat was awesome too. I did have an unplanned reboot on a Vultr node that one of my servers was on (let’s hope the server survives).

The Vultr service status page is located here.

Conclusion

I now have a secure server with MySQL and other web resources ready to go.  I will now add some remote monitoring and restore a website along with NodeJS and MongoDB.

site ready

Definitely give Vultr a go (they even have data centers in Sydney). Signup with this link http://www.vultr.com/?ref=7192231

Namecheap is great for certificates and support.

ssl labs

Vultr API

Vultr has a great API that you can use to automate status pages or obtain information about your VM instances.

API Location: https://www.vultr.com/api/

First, you will need to activate API access and allow your IP addresses (IPV4 and IPV6) in Vultr. At first, I only allowed IPV4 addresses but it looks as if Vultr use IPV6 internally, so add your IPV6 IP too (if you are hitting the API from a Vultr server). Beware that the JSON returned from the https://api.vultr.com/v1/server/list API includes URLs (and tokens) to your virtual console and root passwords, so ensure your API key is secured.
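
Before writing any PHP, you can sanity check your API key from the terminal with curl (this uses the same API-Key header as the PHP code below; "removed" is my redacted key):

curl -H "API-Key: removed" https://api.vultr.com/v1/server/list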

Here is some working PHP code to query the API

<?php

$ch = curl_init();
$headers = [
     'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);

// Output the raw JSON response (the stray echo json_decode() is removed; json_decode returns an array, not a string)
print $server_output;
?>

Your server will need curl installed and you will need to enable URL opening in your php.ini file.

allow_url_fopen = On

Once you have curl (and the API) working via PHP, this code will return data from the API for a nominated server (replace ‘123456’ with the id from your server at https://my.vultr.com/).

$ch = curl_init();
$headers = [
'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

// # Replace 1234546 with the ID from your server at https://my.vultr.com/

//Get Server Location
$vultr_location = $array['123456']['location'];
echo "Location: $vultr_location <br/>";

//Get Server CPU Count
$vultr_cpu = $array['123456']['vcpu_count'];
echo "CPUs: $vultr_cpu <br/>";

//Get Server OS
$vultr_os = $array['123456']['os'];
echo "OS: $vultr_os<br />";

//Get Server RAM
$vultr_ram = $array['123456']['ram'];
echo "Ram: $vultr_ram<br />";

//Get Server Disk
$vultr_disk = $array['123456']['disk'];
echo "Disk: $vultr_disk<br />";

//Get Server Allowed Bandwidth
$vultr_bandwidth_allowed = $array['123456']['allowed_bandwidth_gb'];

//Get Server Used Bandwidth
$vultr_bandwidth_used = $array['123456']['current_bandwidth_gb'];

echo "Bandwidth: $vultr_bandwidth_used GB of $vultr_bandwidth_allowed GB<br />";

//Get Server Power Status
$vultr_power = $array['123456']['power_status'];
echo "Power State: $vultr_power<br />";

 //Get Server State
$vultr_state = $array['123456']['server_state'];
echo "Server State: $vultr_state<br />";

A raw packet looks like this from https://api.vultr.com/v1/server/list

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 30 Jul 2017 12:02:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close
X-User: [email protected]
Expires: Sun, 30 Jul 2017 12:02:33 GMT
Cache-Control: no-cache
X-Frame-Options: DENY
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff

{"123456":{"SUBID":"123456","os":"Ubuntu 16.04 x64","ram":"4096 MB","disk":"Virtual 60 GB","main_ip":"###.###.###.###","vcpu_count":"2","location":"Sydney","DCID":"##","default_password":"removed","date_created":"2017-01-01 09:00:00","pending_charges":"0.01","status":"active","cost_per_month":"20.00","current_bandwidth_gb":0.001,"allowed_bandwidth_gb":"3000","netmask_v4":"255.255.254.0","gateway_v4":"###.###.###.#,"power_status":"running","server_state":"ok","VPSPLANID":"###","v6_main_ip":"####:####:####:###:####:####:####:####","v6_network_size":"##","v6_network":"####:####:####:###:","v6_networks":[{"v6_main_ip":"####:####:####:###:####:####::####","v6_network_size":"##","v6_network":"####:####:####:###::"}],"label":"####","internal_ip":"###.###.###.##","kvm_url":"removed","auto_backups":"no","tag":"Server01","OSID":"###","APPID":"#","FIREWALLGROUPID":"########"}}

I recommend the Paw software for any API testing locally on OSX.

Bonus: Converting Vultr Network totals from the Vultr API with PHP

Add the following as a global PHP function in your PHP file. Found the number formatting solution here.

<?php
// Found at https://stackoverflow.com/questions/2510434/format-bytes-to-kilobytes-megabytes-gigabytes 

function swissConverter($value, $format = true, $precision = 2) {
    // Below converts value into bytes depending on input (specify mb, for 
    // example)
    $bytes = preg_replace_callback('/^\s*(\d+)\s*(?:([kmgt]?)b?)?\s*$/i', 
    function ($m) {
        switch (strtolower($m[2])) {
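          // NOTE: the missing break statements are intentional; the fall-through
          // multiplies 't' by 1024 four times, 'g' three times, and so on down to bytes.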
          case 't': $m[1] *= 1024;
          case 'g': $m[1] *= 1024;
          case 'm': $m[1] *= 1024;
          case 'k': $m[1] *= 1024;
        }
        return $m[1];
        }, $value);
    if(is_numeric($bytes)) {
        if($format === true) {
            //Below converts bytes into proper formatting (human readable 
            //basically)
            $base = log($bytes, 1024);
            $suffixes = array('', 'KB', 'MB', 'GB', 'TB');   

            return round(pow(1024, $base - floor($base)), $precision) .' '. 
                     $suffixes[floor($base)];
        } else {
            return $bytes;
        }
    } else {
        return NULL; //Change to prefered response
    }
}
?>

Now you can query the https://api.vultr.com/v1/server/bandwidth?SUBID=123456 API and get bandwidth information related to your server (replace 123456 with your servers ID).

<h4>Network Stats:</h4><br />
<?php

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Change 123456 to your server ID

curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/bandwidth?SUBID=123456');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

//Get 123456 Incoming Bytes Day Before Yesterday
$vultr123456_imcoming00ib = $array['incoming_bytes'][0][1];
echo " &nbsp; &nbsp; Incoming Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_imcoming00ib, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Yesterday
$vultr123456_imcoming00ib = $array['incoming_bytes'][1][1];
echo " &nbsp; &nbsp; Incoming Data Total Yesterday: <strong>" . swissConverter($vultr123456_imcoming00ib, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Today
$vultr123456_imcoming00ib = $array['incoming_bytes'][2][1];
echo " &nbsp; &nbsp; Incoming Data Total Today: <strong>" . swissConverter($vultr123456_imcoming00ib, true) . "</strong><br/><br/>";

//Get 123456 Outgoing Bytes Day Before Yesterday 
$vultr123456_imcoming10ob = $array['outgoing_bytes'][0][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr123456_imcoming10ob, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Yesterday 
$vultr123456_imcoming10ob = $array['outgoing_bytes'][1][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr123456_imcoming10ob, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Today 
$vultr123456_imcoming00ob = $array['outgoing_bytes'][2][1];
echo " &nbsp; &nbsp; Outgoing Data Total Today: <strong>" . swissConverter($vultr123456_imcoming00ob, true) . "</strong><br/>";

echo "<br />";
?>

Bonus: Pinging a Vultr server from the Vultr API with PHP’s fsockopen function

Paste the ping function globally

<?php
function pingfsockopen($host,$port=443,$timeout=3)
{
        $fsock = fsockopen($host, $port, $errno, $errstr, $timeout);
        if ( ! $fsock )
        {
                return FALSE;
        }
        else
        {
                fclose($fsock); // close the probe socket before returning
                return TRUE;
        }
}
?>

Now you can grab the server's IP from https://api.vultr.com/v1/server/list and then ping it (on SSL port 443).

//Get Server 123456 IP
$vultr_mainip = $array['123456']['main_ip'];
$up = pingfsockopen($vultr_mainip);
if( $up ) {
        echo " &nbsp; &nbsp; Server is UP.<br />";
}
else {
        echo " &nbsp; &nbsp; Server is DOWN<br />";
}

Setup Google DNS

sudo nano /etc/network/interfaces

Add line

dns-nameservers 8.8.8.8 8.8.4.4
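
Then bounce the interface and verify the resolvers took effect (ens3 is the adapter name used earlier in this guide; run this from the Vultr web console in case the interface drops):

sudo ifdown ens3 && sudo ifup ens3
cat /etc/resolv.conf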

What have I missed?

Read my blog post on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

Read my blog post on securing your Ubuntu server in the cloud.

Read my blog post on running an Ubuntu system scan with Lynis.


v1.993 added log info


How to develop software ideas

July 9, 2017 by Simon

I was recently at a public talk by Alan Jones at the UNE Smart Region Incubator where Alan talked about launching startups and developing ideas.

Alan put it quite eloquently that “With change comes opportunity” and we are all very capable of building the next best thing as technological barriers and costs are a lot lower than 5 years ago, but Alan also mentioned that many start-ups fail and “if you focus on solving customer problems you have a better chance of succeeding”. Regions need to share knowledge and you can learn from other people's mistakes.

I was asked after this event to share thoughts on “how do I learn to develop an app” and “how do you get the knowledge”. Here is my poor “brain dump” on how to develop software ideas (it’s hard to condense 30 years’ experience developing software). I will revise this post over the coming weeks so check back often.

If you have never programmed before check out these programming 101 guides here.

I have blogged on technology/knowledge things in the past at www.fearby.com and recently I blogged about how to develop cloud-based services (here, here, here, here and here) but this blog post assumes you have a validated “app idea” and you want to know how to develop yourself. If you do not want to develop an app yourself you may want to speak with Blue Chilli.

Find a good mentor.


True App Development Quotes

  • Finding development information is easy, following a plan is hard.
  • Aim for progress and not perfection.
  • Learn one thing at a time (Multitasking can kill your brain).
  • Fail fast and fail early and get feedback as early as possible from customers.
  • 10 engaged customers are better than 10,000 disengaged users.

And a bit of humour before we start.

Project Management lol


Here is a funny video on startup/entrepreneur life/lingo


This is a good funny, open and honest video about programming on YouTube.

Follow Seth F Samuel on twitter here.

Don’t be afraid to learn from others before you develop

My fav tips from over 200 failed startups (from https://www.cbinsights.com/blog/startup-failure-post-mortem/ )

  • Simpler websites shouldn’t take more than 2-3 months. You can always iterate and extrapolate later. Get your feet wet asap
  • As products become more and more complex, the performance degrades. Speed is a feature for all web apps. You can spend hundreds of hours trying to speed up the app with little success. Benchmarking tools incorporated into the development cycle from the beginning are a good idea
  • Outsource or buy in talent if you don’t know something (e.g marketing). Time is money.
  • Make an environment where you will be productive. Working from home can be convenient, but often times will be much less productive than a separate space. Also it’s a good idea to have separate spaces so you’ll have some work/life balance.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • We most definitely committed the all-too-common sin of premature scaling. Driven by the desire to hit significant numbers to prove the road for future fundraising and encouraged by our great initial traction in the student market, we embarked on significant work developing paid marketing channels and distribution channels that we could use to demonstrate scalable customer acquisition. This all fell flat due to our lack of product/market fit in the new markets, distracted significantly from product work to fix the fit (double fail) and cost a whole bunch of our runway.
  • If you’re bootstrapping, cash flow is king. If you want to possibly build a product while your revenue is coming from other sources, you have to get those sources stable before you can focus on the product.
  • Don’t multiply big numbers. Multiply $30 times 1.000 clients times 24 months. WOW, we will be rich! Oh, silly you, you have no idea how hard it is to get 1.000 clients paying anything monthly for 24 months. Here is my advice: get your first client. Then get your first 10. Then get more and more. Until you have your first 10 clients, you have proved nothing, only that you can multiply numbers.
  • Customers pay for information, not raw data. Customers are willing to pay a lot more for information and most are not interested in data. Your service should make your customers look intelligent in front of their stakeholders. Follow up with inactive users. This is especially true when your service does not give intermediate values to your users. Our system should have been smarter about checking up on our users at various stages.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.

Here are my tips on staying on track developing apps. What is the difference between a website, app, API, web app, hybrid app and software (my blog post here)?

I have seen quite a few projects fail because:

  • The wrong technology was mandated.
  • The software was not documented (by the developers).
  • The software was shelved because new developers hated it or did not want to support it.

Project Roles (hats)

It is important to understand the roles in a project (project management methodology aside) and know when you are being a “decision maker” or a “technical developer”. A project usually has these roles.

  • Sponsor/owner (usually fund the project and have the final say).
  • Executive/Team leader/scrum master (manage day to day operations, people, tasks and resources).
  • Team members (UI, UX, Marketers, Developers (DevOps, Web, Design etc)) are usually the doers.
  • Stakeholders (people who are impacted (operations, owners, Helpdesk)).
  • Subject Matter Experts (people who should guide the work and not be ignored).
  • Testers (people who test the product and give feedback).

It can be hard as a developer to switch hats in a one-person team.

How do you develop and gain knowledge?

First, document what you need to develop (what problem are you solving and what value will your idea bring). Does this solution exist already? Don’t solve a problem that has already been solved.

Developing software is not hard, you just need to be logical, research, be patient and follow a plan. The hardest part can be gluing components together.

I like to think of developing software like making a car: if you need 4 wheels, do you have 4 wheels? If you want to build it yourself and save some money, can you make wheels (make rubber strips with steel reinforced/vulcanized rubber, make alloys, add bearings and have them pass regulations) or should you buy wheels (some things are cheaper to make than other things)? Developing software can be easy if you know what you are doing, have the experience and are aware of the costs and risks.  Developing software can lead you down a rabbit hole of endless research, development and testing if you don’t know what you are doing.

Example 1:

I “need a webpage”:

  • Research: Will Wix, Shopify or a hosted WordPress website do (is it flexible or cheap enough) or do I install WordPress (guide here) or do I  learn and build an HTML website and buy a theme and modify it (and have a custom/flexible solution)?

Example 2:

I “need an iPhone and Android app”:

Research: You will need to learn iOS and Android programming and you may need a server or two to hold the apps data, webpage and API. You will also need to set up and secure the servers or choose to install a database or go with a “database as a service” like cloud.mongodb.com or google firebase.

Money can buy anything (but will it be flexible/cheap enough), time can build anything (but will it be secure enough).


Almost all systems will need a central database to store all data, you can choose a traditional relational SQL database or a newer NoSQL database. MySQL is a good/cheap relational SQL database and MongoDB is a good NoSQL database. You will need to decide on how your app talks to the database (directly or via an API (protected by OAuth or limited access tokens)).  It is a bad idea to open a database directly to the world with no security. Sites like www.shodan.io will automatically scan the Internet looking for open databases or systems and report this as an insecure site to anyone. It is in your interest to develop secure systems in all stages of development.

CRUD (Create, Read, Update and Delete) is a common group of database tasks that prove you can read, write, update and delete from a database. Performing CRUD operations is also a good benchmark to see how fast the database is (a quick example follows). If the database is the slowest link, you can use memory to cache database values (read my guide here). Caching can turn a cheap server into a faster server. Learning by doing quickly builds skills, so “research”, “do” and “learn”.
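
For example, here is a quick CRUD sanity test using the MySQL command line (a sketch only; testdb and notes are placeholder names and root credentials are assumed):

mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS testdb;
USE testdb;
CREATE TABLE IF NOT EXISTS notes (id INT PRIMARY KEY, note VARCHAR(50));
INSERT INTO notes VALUES (1, 'hello');        -- Create
SELECT * FROM notes;                          -- Read
UPDATE notes SET note = 'hi' WHERE id = 1;    -- Update
DELETE FROM notes WHERE id = 1;               -- Delete
DROP TABLE notes;
SQL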

Most solutions will need a website (and a web server). Here is a good article comparing Apache and Nginx (the leading open source web servers).

Stacks and Technology – There are loads of development environments (stacks), frameworks and technologies that you can choose. Frameworks supposedly make things easier and faster, but frameworks and technologies change frequently (see the 2016 frameworks to learn guide and 2017 frameworks to learn guide) and can be abandoned. Be careful: most frameworks run 30% slower than raw server-side and client code. I’d recommend you learn a few technologies like NGINX, NodeJS, PHP and MySQL and move up from there.

The MEAN stack is a popular web development platform (MEAN = MongoDB, ExpressJS, Angular and NodeJS).

Apps can be developed for Apple platforms by signing up here (about $150 AUD a year) and using the XCode IDE. Apps can be developed for the Android platform by using Android Studio (for about $20 (one-off fee)). Microsoft has a developer portal for the Windows platform. Google also has an online scalable database as a service called Firebase. If you look hard enough you will find a service for everything, but connecting those services can be time-consuming, costly or make security and a scalable solution impossible, so beware of relying on as-a-service platforms. I used the Corona SDK to develop an app but abandoned the platform due to changes in the vendor’s communication and enforced policies.

If you are not sure, don’t be afraid to ask for help on Twitter.

Twitter is awesome for finding experts

Recent twitter replies to a problem I had.

Learning about new Technology and Stacks

To build the knowledge you need to learn stuff, build stuff, test (benchmark), get feedback and build more stuff. I like to learn about new technology and stacks by watching Udemy courses and they have a huge list of development courses (Web Development, Mobile Apps, Programming Languages, Game Development, Databases,  Software Testing,  Software Engineering etc).

I am currently watching a Practical iOS 11 course by Stephen DeStefano on Udemy to learn about unreleased/upcoming features on the Apple iPhone (learning about XCode 9, Swift 4, What’s new in iOS 11, Drag and drop, PDF and ARKit etc).

Udemy is awesome (Udemy often have courses for $15).

If you want to learn HTML go to https://www.w3schools.com/.

https://devslopes.com/ have a number of development-related courses and an active community of developers in a chat system.

You can also do formal study via an education provider (e.g. Bachelor of computer sciences at UNE or Certificate IV in programming or Diploma in Software Development at TAFE).

I would recommend you use Twitter and follow keywords (hashtags) around key topics (e.g #www, #css, #sql, #nosql, #nginx, #mongodb, #ios, #apple, #android, #swift, #objectivec, #java, #kotlin) and identify users to follow. Twitter is great for picking up new information.

I follow the following developers on YouTube (TheSwiftGuy, AppleProgrammer, AwesomeTuts, LetsBuildThatApp, CodingTech etc)

Companies like https://www.civo.com/ offer developer-friendly features with hosting, https://www.pebbled.io/ offer to develop for you and https://serverpilot.io/ help you spin up software on hosting providers.

What To Develop

First, you need to break down what you need (e.g. “I want an app for iOS and Android in 5 months that does XYZ. The app must be secure and fast. Users must be able to register an account and update their profile”).

How far you need to ensure your development project scales depends on your peak expected/active concurrent users (and the ratio of paying to free users). You can develop your app to scale very high but this may cost more money initially, and it can be bad to pay for scalability too early. As long as you have a good product and robust networking/retry routines and UI you don’t need to scale high early.

Once you know what you need you can search the open-source community for code that you can use. I use Alamofire for iOS network requests, SwiftyJSON for processing JSON data and other open-source software. The only downside of using open source software is it may be abandoned by the creators and break in the future. Saving your time early may cost you time later.

Then you can break down what you don’t want. (e.g “I don’t want a web app or a windows phone or windows desktop app”). From here you will have a list of what you need and what you can avoid.

You will also need to choose a project management methodology (I have blogged about this here). With a list of action items and a plan you can work through developing your app.

While you are researching, it is a good idea to develop smaller fun projects to refine your skills. There are a number of System Development Life Cycles (SDLCs), but don’t worry if you get stuck; seek advice or move on. It is a good idea to get users beta testing your app early and seek feedback. Apple has the TestFlight app where you can send beta versions of apps to beta testers. Here is a good guide on Android beta testing.

If you are unsure about certain user interface options or features, divide your beta testers and perform A/B or split testing to determine the most popular user interfaces. Capturing user data and logs can also help with debugging and understanding user actions.
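If it helps, here is a minimal NodeJS sketch of one way to split testers into stable A/B buckets (the hashing approach and the function name are my own illustration, not from any particular framework):

// Deterministic A/B bucket assignment by hashing a user id (illustrative sketch).
const crypto = require('crypto');

function abBucket(userId) {
  const hash = crypto.createHash('md5').update(String(userId)).digest();
  return hash[0] % 2 === 0 ? 'A' : 'B'; // the same user always lands in the same bucket
}

console.log(abBucket('user-123')); // e.g. 'B'

Hashing the user id (rather than picking randomly per session) means a tester sees the same variant every time they open the app, which keeps your test results clean.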

Practice

Develop smaller proof-of-concept apps in new technologies or frameworks and you will build your knowledge, uncover limitations in certain frameworks, and learn how to move forward with confidence. It is advisable to save your source code for later use and to share it with others.

I have shared quite a bit of code at https://simon.fearby.com/blog/ that I refer to from time to time. I should have shared this on GitHub but I know Google will find this if people want it.

Get as much feedback as you can on what you do and choose (don’t trust the first blog post you read (me included)).

Most companies offer webinars on their products. I like the NGINX webinars. Tutorialspoint has courses on development topics. Sitepoint is a good development site that offers free books, courses, and articles. Programmable Web has good information on what APIs are.

You may want to document your application flow to better understand how the user interface works.

Useful Tools

Balsamiq Mockups and Blueprint are handy for mocking up applications.

C9.io is a great web-based IDE that can connect to a VM on AWS or Digital Ocean. I have a guide on connecting Cloud 9 to an AWS VM here.

I use the Sublime Text 3 text editor when editing websites locally.

(image courtesy of https://www.sublimetext.com/ )

I use the Paw app on the Mac to help test APIs I develop locally.

(image courtesy of https://paw.cloud )

Snippets is a great application for the Mac for storing code snippets.

I use the Cornerstone Subversion app for backing up my code on my Mac.

Web servers: IIS (https://www.iis.net/), NGINX and Apache.

NodeJS programming manual and tutorials.

I use Little Snitch (guide here) to simulate network outages during app development.

I use the Forklift file manager on OSX.

Databases: SQL tutorials, NoSQL Tutorials, MySQL documentation.

Siege is a command-line HTTP load testing tool.

http://loader.io/ is a nice web-based benchmarking tool.

Bootstrap is an essential mobile responsive framework.

Atlassian Jira is an essential project tracking tool. More on Agile epics v stories v tasks on the Atlassian community website here. I have a post here on developing software and staying on track using Jira.

Jsfiddle is a good site that allows you to share code you are working on or having trouble with.

Dribbble is a “show and tell” site for designers and creatives.

Stackoverflow is the go-to place to ask for help.

Things I care about during development phases.

  • Scalability
  • Flexibility
  • Risk
  • Cost
  • Speed

Concentrating too much on one facet can risk weakening the others. Good programmers can recommend and deliver a solution that is strong in all areas (I hate developing apps that are slow but secure, or scalable but complex).

Platforms

You can sign up for online servers like Azure or AWS (my guide here), or you can use cheaper CPanel-based hosting. Read my guide on the costs of running a cloud-based service.

Get a free Digital Ocean server for two months by using this link. Read my blog post here to help set up your VM. You can always install Ubuntu on your local machine (read my guide here). Don’t forget to use a Git code repository like GitHub or Bitbucket.

Locally you can install Ubuntu (developers edition) and have a similar environment as cloud platforms.

Lessons Learned

  • Deploy servers close to the customers (Digital Ocean is too far away to scale in Australia).
  • Accessibility and testing (make things accessible from the start).
  • Backup regularly (Use GIT, backup your server and use Rsync to copy files to remote servers and use services like backblaze.com to backup your machine).
  • Transportability of technology (Use open technology and don’t lock yours into one platform or service).
  • Cost (convenient solutions may be costly).
  • Buy in themes and solutions where they save time (e.g. wrapbootstrap.com).
  • Do improve what you have done (make things better over time). Think progress, not perfection.

There is no shortage of online comments bagging certain frameworks or platforms so look for trends and success stories and don’t go with the first framework you find. Try candidate frameworks and services and make up your own mind.

A good plan, violently executed now, is better than a perfect plan next week. – General George S. Patton

Costs

Sometimes cost is not the deciding factor (read my blog post on Alibaba Cloud). You should estimate your app’s costs per 1,000 users. What do light v heavy users cost you? I have a blog post on the approximate cost of cloud services. I started researching a scalable NoSQL platform on IBM Cloudant and it was going to cost $4,000 USD a month, and integrating my own app logic and security was hard. I ended up testing MongoDB Cloud where I can scale to three servers for $80 a month, but for now I am developing my current project on my own AWS server with a MongoDB instance. Read my blog post here on setting up MongoDB and read my blog post on the best MongoDB GUI.

Here is a great infographic for viewing what’s involved in mobile app development.

You can choose a number of tools or technologies to achieve your goals; for me it is doing it economically, securely and in a scalable way that has predictable costs. It is quite easy to develop something that is costly, won’t scale, or is not secure or flexible. Don’t get locked into expensive technologies. For example, AWS has a pay-per-use NodeJS service called Lambda where you get a million free hits a month and are then charged $0.0000002 per request thereafter. This sounds good, but I prefer fixed-pricing/DIY servers as they allow me to build my own logic into apps (this is more important to me than scalability).
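As a sanity check on pay-per-request pricing, here is a quick back-of-envelope NodeJS calculation using the free-tier size and per-request price quoted above (always check the vendor’s current pricing):

// Rough cost sketch for a pay-per-request service (figures from the paragraph above).
const freeRequestsPerMonth = 1000000; // a million free hits a month
const pricePerRequest = 0.0000002;    // USD per request after the free tier

function monthlyCost(requestsPerMonth) {
  const billable = Math.max(0, requestsPerMonth - freeRequestsPerMonth);
  return billable * pricePerRequest;
}

console.log(monthlyCost(50000000).toFixed(2)); // 50M requests/month -> "9.80"

At 50 million requests a month that is under $10 USD, so pay-per-use is cheap until your traffic (or the work done per request) grows; the trade-off is the lack of control described above.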

Using open-source software or off-the-shelf solutions may speed things up initially, but will it slow you down later? Ensure free solutions are complete and supported, and ensure frameworks are actually helping. Do you need one server or multiple servers (guide on setting up a distributed MySQL environment)? You can read about my scalability-on-a-budget journey here. You can speed up a server in two ways: scale up (add more MHz or CPU cores) or scale out (add more servers).

Start small and use free frameworks and platforms, but have a tested scale-up plan. I researched cheap Digital Ocean servers, moved to AWS to improve latency, and tested MongoDB on Digital Ocean and AWS, with a plan to scale up to cloud.mongodb.com if need be.

Outsource (contractors) 

Remember, outsourcing work tasks (or complete outsourcing of development) can buy you time and/or deliver software faster. Outsourcing can also introduce risks and be expensive. Ask for examples of previous work and get raw numbers on costs (now and in the future) and the concurrent users that a particular bit of outsourced work will achieve.

If you are looking to outsource work, do look at work the person or company has done before (is it fast, compliant, scalable, secure, robust and backed up, and do you have the rights to edit the code and own the IP etc.). I’d be cautious of companies who say they can do everything and don’t show live demos.

Also, beware of restrictions on your code set by the contractors. Can they do everything you need (compare with your list of MoSCoW must-haves)? Sometimes contractors only code or do what they are comfortable with, and that can impact your deliverables.

Do use a private Git repository (that you own) like GitHub or BitBucket to secure your code and use software like Trello or Atlassian JIRA to track your project. Insist the contractors use your repository to retain control.

You can always sell equity in your idea to an investor and get feedback/development from companies like Bluechilli.

Monetization and data

Do have multiple monetization streams (initial app purchase cost, in-app purchases, subscriptions, in-app credit, advertising, selling code/components etc). Monthly billing tends to work better than yearly subscriptions for ensuring cash flow.

Capture usage data and determine trends around successful engagement. Improve what works. Use A/B testing to roll out new features.

I like Backblaze’s post on getting your first 1,000 customers.

Maintenance, support risk and benefits

Building your own service can be cheaper but also riskier: if you fail to secure your app you are in trouble, if you cannot scale you are in trouble, and if you don’t update your server when vulnerabilities come out you are in trouble. Also, Google monetization strategies. Apple apps do appear to deliver more profit than Android. Developers often joke “Apple devices offer 90% of the profits and 10% of the problems and Android apps offer 90% of the problems and 10% of the profits”.

Also, Apple users tend to update to the latest operating system sooner, whereas Android devices are rather fragmented.

Do inform your users with self-service status pages and informative error messages, and don’t annoy them.

Use Free Trials and Credit

Most vendors have free trials, so use them.

AWS has a 12-month free tier: https://aws.amazon.com/free/

Use this link to get two months free with Digital Ocean.

Microsoft Azure also gives away free credit.

Google Cloud also has free credit.

Don’t be afraid to ask.

MongoDB Cloud also gives away free credit if you ask.

Security

Sites like Shodan.io will quickly reveal weaknesses in your server (and services); this will help you build robust solutions from the start, before hackers find them. Read https://www.owasp.org/index.php/Main_Page to learn how to develop secure websites. Listen to the Security Now podcast to learn how technology works and gets broken. Following Troy Hunt is recommended to keep up to date with security in general. @0xDUDE is a good ethical hacker to follow to stay up to date on security exploits, and @GDI_FDN is a good non-profit organization that helps defend sites that use open source software.

White hat hackers exist, but so do black hat ones.

Read the Open Web Application Security site here. Read my guide on setting up public key pinning in security certificates here.

You can use the ASafaWeb site to test your sites for common ASP security flaws. If you have a secure certificate on your site you will need to ensure the certificate is secure and up to date with the SSL Labs SSL Test site.

SSL Cert

Once your website’s IP address is known (get it from SSL Labs), run a scan over your site with https://www.shodan.io/ to find open ports or security weaknesses.

Shodan.io allows you and others to see public information about your server and services. You can read about well-known internet ports here.

Anyone can find your server if you are running older (or current) web servers and or services.

It is a good idea to follow security researchers like Steve Gibson and Troy Hunt and stay up to date with live exploits. http://blog.talosintelligence.com is also a good site for reading technical breakdowns of exploits.

Networking

Do share and talk about what you do with other developers. You can learn a lot from other developers and this can save you loads of time and mistakes. True developers love talking about their code and solutions.

Decision Making

Quite a lot of time can be spent deciding what technology or platform to use. I decide by factoring in cost, risk and security over flexibility, support and scalability; if I need more flexibility, less support overhead or more scalability, then I’ll choose a different technology/platform. Scalable solutions need effort from start to finish (it is quite easy to slow down any technology or service).

Don’t be afraid to admit you have chosen the wrong technology or platform. It is far easier to research and move on than live with poor technology.

If you have chosen the wrong technology and stick with it, you (and others) will loathe working with it (impacting productivity/velocity). Do you spend time swapping technology or platforms now, or be less productive later?

Intellectual property and Trademarks

Ensure you search international trademarks for your app terms before you start using them. The Australian ATO has a good Australian business name checker here.

https://namechk.com/ is also a good place to search for your app ideas name before you buy or register any social media accounts.

Using https://namechk.com/ you can see the “mystartupidea” name is mostly free.

And the name “microsoft” is mostly taken.

Seek advice from start-up experts from https://www.bluechilli.com/ like Alan Jones.

See my guide on how to get useful feedback for your ideas here.

Tips

  1. Use Git source control systems like GitHub or Bitbucket from the start, and back up your server and environments offsite frequently. Digital Ocean charges 20% of your server’s cost to back it up. AWS has multiple backup offerings.
  2. Start small and scale up when needed.
  3. Do lots of research and test different platforms, frameworks, and technologies and you will know what you should choose to develop with.

(Image above found at http://startupquotes.startupvitamins.com/ Follow Startup Vitamins on Twitter here.).

You will know you are a developer when you have gained knowledge and experience and can automatically avoid technologies that will not fit a solution.

Share

Don’t be afraid to share what you know (read my blog post on this here). Sharing allows you to solidify your knowledge and get new information. Shane Bishop from the EWWW Image Optimizer WordPress plugin wrote Setting up a fast distributed MySQL environment with SSL for us. If you have something to share here, please let me know on Twitter.

It’s never too late to do

One final tip: knowledge is not everything; planning and research are key. A mind that can’t develop yet may be better than one that can, because it has no experience (or baggage) and may find faster ways to do things. Thanks to http://zachvo.com/ for teaching me this during a recent WordPress re-deployment. Sometimes the simplest solution is the best.


DRAFT: 1.86 added short link

Short: https://fearby.com/go2/develop/

Filed Under: Advice, Android, Apple, Atlassian, Backup, BitBucket, Blog, Business, Cloud, CoronaLabs, Cost, Development, Domain, Firewall, Free, Git, GitHub, Hosting, JIRA, mobile app, MySQL, Networking, NodeJS, OS, Project Management, Scalability, Scalable, Security, Server, Software, Status, Trello, VM Tagged With: ideas

Alibaba Cloud how good is it?

June 18, 2017 by Simon

I recently clicked an Alibaba Cloud ad banner on Facebook that said Alibaba Cloud offers a cloud server (with a 40GB SSD, 1 CPU, 1GB RAM and 1TB traffic) for $30 a year. I am familiar with Alibaba (a Chinese AWS equivalent) and the owner Jack Ma (and his mindset). The thing that got me interested was that hosting was available in Sydney (something that Digital Ocean does not offer). This hosting deal seems too good to be true. Digital Ocean has similar hosting plans with half the storage for $5 a month (but not hosted in Australia). I asked Alibaba on Twitter (@alibaba_cloud) some questions last Sunday and got an answer on Friday evening, over 5 days later.

If You sign up for Alibaba Cloud here you will get $300 of New User Free Credit to use in 60 days.

I deploy servers on the awesome Vultr platform; you can deploy a server for as low as $2.50 a month, read my guide for setting up a Vultr server here. I love the Vultr management interfaces and support, that’s why I use them. This site is on a Vultr server and it’s fast.

Or deploy an SSD cloud server like Digital Ocean in 55 seconds. Sign up using my link and receive $10 in credit (2 free months): https://www.digitalocean.com

Read my other guides

    • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
    • How to buy a new domain and SSL cert from NameCheap, a Server from Digital Ocean and configure it.
    • The quickest way to setup a scalable development ide and web server
    • Adding a commercial SSL certificate to a Digital Ocean VM
    • Application scalability on a budget (my journey)

Alibaba Cloud

Alibaba seems to have similar products to Amazon Web Services.
alibaba001

After you click Create Free Account you will need to enter a credit or debit card.
alibaba002

You need to verify an email address and mobile number by entering the random tokens you will receive.
alibaba003

An email verification token will look like this.
alibaba004

When your account is created you will be sent an email that states:

“Dear username,

Congratulations for successfully registering an Alibaba Cloud account! Your account ID is:[email protected] _____.___. Please remember to change your password on a regular basis and keep all account and password information secure.

Starting from today, you have 60 days to try and test Alibaba Cloud products with $300 New User Free Credit! The Free Credit will be automatically delivered to your account once you have successfully added a verified payment method. Follow this link to add your payment method and necessary information.”
alibaba005

You will need to add a billing address.
alibaba006

You will need to add a payment method.
alibaba007

Once the payment information is added you will need to verify it. Alibaba says they will deposit a verification code and a set amount into your bank account that you will need to check to verify your account (fair enough).

alibaba008

The problem is I had two deposits and two codes, neither of which worked, and the format of the deposit descriptions was not as expected (I was expecting “INTL* #####” and received “###### INTL*”; I tried both of these codes and neither worked).
alibaba009 error - multiple

The codes would not work.
alibaba010

Error: “The verification code is incorrect. You have 4 attempts left to verify.”.

alibaba011_verify_bank_failed

After 4 attempts I was blocked.
alibaba012_verify_bank_failed

I then sought to report this problem by logging a support ticket. In the Alibaba support area it appeared I had a “Basic” account that may not have support rights, as I could see a “Purchase Support Plan” link.
alibaba013_trying_to_report_error

I found the create support ticket link and proceeded to attach the support-related screenshots. Apparently TIFF files (the Mac’s default screenshot format) are not allowed.
alibaba014_unsupported_attahment_format

I proceeded to convert my screenshots to PNG files and attach them.
alibaba015_reported_verification_error

The support ticket was logged.
alibaba016_waiting_for_response

I tried again to add and verify my payment method.
alibaba017_readded_payment_card

Again I received two invalid verification deposits and codes.
alibaba018_multiple_verify_tokens_again_and_none_at_012c

Alibaba billing has a few FAQ articles that may or may not help me:

Why have I received two charges to my account?

Alibaba sent both a microcharge and an authorization hold to your card during signup. The authorization hold verifies that your card details are valid and that you do not have a zero balance. The authorization hold is always 1.00 USD. The microcharge is used to verify that you are the owner of the card, and can be any amount less than 1.00 USD. Both charges will be returned to you within 24 hours.

What if my card statement is not in US Dollars?

Users outside of the United States may not be able to see the original USD amount of the microcharge in their bank statement because they are billed in local currency. In this case, you may opt to use the accompanying 6-digit transaction code displayed in the transaction description in order to complete account verification.

Alas, I am still stuck verifying my payment method.
alibaba019

Waiting for support

A full business day later and I am still waiting for feedback on the ticket.

alibaba021

Support Response

Support said I need to NOT enter the transaction ID and enter the amount instead.

alibaba022

I tried that with no luck 🙁 I have removed my payment details and will stick with Digital Ocean.

Alibaba

It appears Alibaba still has issues with multiple bank verification IDs being sent. I have deleted my payment card from Alibaba and will stick with Digital Ocean and AWS. I guess I could pay Alibaba for better support, but no.

Alibaba Support: “If you want higher level support, refer to Alibaba Cloud Support Plans page https://intl.aliyun.com/support/after-sales”

Do they realise I am not a US citizen and entering the amount will not work?

Alibaba Support (Official)

Support Engineer : Dear customer, we apologize for the bad experience you have suffered in micro charge verification, we have engaged the product and development team to make the feature enhancement.

Some of the bank may do not display the USD and random code in the billing statement, we are contacting the bank for further investigation. Currently, you have deleted the adding in the console, would you please try to add the card again to get the authorization & micro charge?

If you have problem in verification, please provide the screenshot of both authorization hold & micro charge in billing statement, we can bypass the micro charge for you after evaluation.

Feel free to contact us if there are any other questions, thanks for your support.

Alibaba Pros

  • Australian Hosting Locations
  • Robust signup verifications (email, phone and bank), so much so that I am not verified.
  • Cheap (if you can get in)?

Alibaba Cons

  • Broken Signup
  • I can’t create a VM due to signup verification
  • No Authenticator Two-Factor Authentication at login.
  • Alibaba Cloud iOS app is not available in English
  • Support is lacking.

Alibaba Cloud iOS App



Conclusion

I don’t know how good Alibaba Cloud is as it appears the signup is broken for Australian customers. For now, stick with Digital Ocean and AWS.

I have received an email from Alibaba saying they need more information.

“Please provide more information so that we can deal with the problem. You can log on Aliyun.com or click here to submit feedback.” I tried Alibaba’s suggestion of entering the bank account verification amount, but that did not work. Alibaba needs better support. Do they realise I am not a US citizen and entering the amount will not work?

If You sign up for Alibaba Cloud here you will get $300 of New User Free Credit.

Deploy a server on Vultr for as low as $2.5 a month. Read my Vultr setup guide here.

or

Easily deploy an SSD cloud server on @DigitalOcean in 55 seconds. Sign up using my link and receive $10 in credit (2 free months): https://www.digitalocean.com

Read my other guides

    • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
    • How to buy a new domain and SSL cert from NameCheap, a Server from Digital Ocean and configure it.
    • The quickest way to setup a scalable development ide and web server
    • Adding a commercial SSL certificate to a Digital Ocean VM
    • Application scalability on a budget (my journey)


v1.6 Added Vultr info.

Filed Under: Backup, Hosting, IoT, Server, Tech Advice Tagged With: alibaba, hosting, vm

Digital disruption or digital tinkering

December 20, 2016 by Simon

The biggest buzzwords used by prime ministers, presidents and management these days have been “innovation” and “digital disruption”. As a developer or manager, do you understand what goes into a new digital customer-focused service like an API or data-driven portal? How well are your business and products doing in the age of innovation and digital disruption? Do you listen to what your customers want or need?

When to Pivot

There comes a time when businesses realize they need to pivot in order to stay viable.

  • People don’t rent VHS movies, they download movies from the Internet.
  • Printing photographs, who does that anymore?
  • People learn from videos on YouTube or Khan Academy for free, or pay for courses from Pluralsight.com, Coursera.org, Udemy.com or Lynda.com.
  • Information is wanted 24/7, with a call to customer service only if the information cannot be sourced online.

kodak-bankruptcy

Image source.

Chances are 90% of your customers are using mobile or tablet devices on any given day.  If you are not interacting with your customers via personalized/mobile technology prepare to be overtaken as a business.

Pivoting may require you to admit you are behind the eight ball and take a risk and set up a new customer-focused web portal, API, app or service. Make sure you know what you need before the lure of services, buzzwords and “shiny object syndrome” from innovation blog posts and consultants takes hold.

Advocates and blockers of change

Creating change in an organization that bean-counts every dollar, exterminates all risk and ignores ideas is a hard sell. How do you get support from those with power and endless rolls of red tape?

Bad reasons for saying no to innovation:

  • You can’t create a mobile app to help customers because the use of our logo won’t be approved.
  • Don’t focus on customer-focused automation, analytics and innovation because internal manual processes need attention first.
  • Possible changes in 2 years outside of our control will possibly impact anything we create.
  • Third eyelids.

Management’s support of experimentation and change is key to innovation. Harvard Business Review has a great post on this: The Why, What, and How of Management Innovation.

Does your organization value innovation? This is possibly the best video that describes how the best businesses focus on innovation and take risks. Simon Sinek: How great leaders inspire action.

Here is a great post on How To Identify The Most Dangerous Person In Your Company who blocks innovation and change.

Also a few videos on getting staff on board and motivation and productivity.

Project Perspectives
consultant_001

Project Focus

  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.

I said that three times (because it is important).

Before you begin coding, learn from those who have failed

Here are some of the best tips I have collected from start-ups who have failed.

  • We didn’t spend enough time talking with customers and we’re rolling out features that I thought were great, but we didn’t gather enough input from clients. We didn’t realize it until it was too late. It’s easy to get tricked into thinking your thing is cool. You have to pay attention to your customers and adapt to their needs.
  • The cloud is great. Outsourcing is great. Unreliable services aren’t. The bottom line is that no one cares about your data more than you do – there is no replacement for a robust due diligence process and robust thought about avoiding reliance on any one vendor.
  • Your heart doesn’t get satisfied with any levels of development. Ignore your heart. Listen to your brain.
  • You can always iterate and extrapolate later. Wet your feet asap.
  • As the product became more and more complex, the performance degraded. In my mind, speed is a feature for all web apps so this was unacceptable, especially since it was used to run live, public websites. We spent hundreds of hours trying to speed up the app with little success. This taught me that we needed to have benchmarking tools incorporated into the development cycle from the beginning due to the nature of our product.
  • It’s not about good ideas or bad ideas: it’s about ideas that make people talk. Make some aspect of your product easy and fun to talk about, and make it unique.
  • We really didn’t test the initial product enough. The team pulled the trigger on its initial launches without a significant beta period and without spending a lot of time running QA, scenario testing, task-based testing and the like. When v1.0 launched, glitches and bugs quickly began rearing their head (as they always do), making for delays and laggy user experiences aplenty — something we even mentioned in our early coverage.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • Our biggest self-realization was that we were not users of our own product. We didn’t obsess over it and we didn’t love it. We loved the idea of it. That hurt.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.
  • It may seem surprising that a seemingly successful product could fail, but it happens all the time. Although we arguably found product/market fit, we couldn’t quite crack the business side of things. Building any business is hard, but building a business with a single app offering and half of your runway is especially hard.

Buzzwords

The Innovation landscape is full of buzzwords, here are just a few you will need to know.

  • API – Application Program Interface, a method that uses a web address (http://www.server.com/api/important/action/) to accept requests and deliver results. Learn more about APIs here: http://www.programmableweb.com/category/all/apis
  • AR – Augmented reality is where you use a screen on a mobile, tablet or PC to overlay 3D or geospatial information.
  • Big Data – Is about taking a wider view of your business data to find insights and to predict and improve products and services.
  • BYOD – Bring your own device.
  • BYOC – Bring your own cloud.
  • Caching – Using software to deliver data from memory rather than from a slower database each time.
  • Cloud – Someone else’s computer that you run software or services on.
  • CouchDB – An Apache designed Key/Value NoSQL JSON store database that focuses on eventual replication.
  • DaaS – Desktop as a service
  • DbaaS – Database as a service (hardware and database software maintained by others but your data).
  • DBMS – Database Management System – the GUI
  • HPC – High-Performance Computing.
  • IaaS – Cloud-based Servers and infrastructure (Google Cloud, Amazon AWS, Digital Ocean and Vultr and Rackspace).
  • IDaaS – Third Party Authorisation management
  • IOPS – Input/Output Operations Per Second – the throughput limits on the interface or software in question.
  • IoT – Internet of Things: small devices that can display, sense or update information (an internet-connected fridge, or a button that orders more toilet paper).
  • iPaaS – integration Platform as a Service (software to integrate multiple XaaS offerings).
  • JSON – A better CSV file (read more here; see the small example after this list).
  • MaaS – Monitoring as a Service (e.g Keymetrics.io)
  • CaaS – Communication as a service (e.g http://www.twillio.com)
  • Micro-services – an existing service that is managed by another vendor (e.g Notifications, login management, email or storage), usually charged by usage.
  • MongoDB – Another Key/Value NoSQL JSON Database that has upfront Replication
  • NoSQL – A No SQL database that stores data in JSON documents instead of normalised related tables.
  • PaaS – A larger stack of SaaS that you can customise, from vendors like Azure (Active Directory, Compute, Storage Blobs etc), AWS (SQS, RDS, ElastiCache, Elastic File System), Google Cloud (Compute Engine, App Engine, Datastore), Rackspace etc.
  • Rate Limiting – The ability to track and limit a user’s requests to an API.
  • SaaS – A smaller software component that you can use or integrate (Google Apps, Cisco WebEx, GoToMeeting).
  • Scalable – the ability to have a website or service handle thousands to millions of hits, with a baked-in way to handle exponential growth.
  • Scale Up – Increase the CPU speed and thus workload
  • Scale Out – Adding more servers and distributing the load instead of making servers faster.
  • SQL – A traditional relational database query language.
  • VR – Virtual Reality is where you totally immerse yourself in a 3D world with a head-mounted display.
  • XaaS – Anything as a service.
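As a tiny illustration of the JSON bullet above, here is the same record as a CSV line and as JSON (values are made up):

// The same record as CSV and as JSON (illustrative values only).
const csv = 'id,name,city\n1,Simon,Armidale';
const record = { id: 1, name: 'Simon', city: 'Armidale' };
console.log(JSON.stringify(record)); // {"id":1,"name":"Simon","city":"Armidale"}

Unlike CSV, JSON keeps the field names with the data and can nest objects and arrays, which is why most APIs return it.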

External or Online Advice

A consultant once joked to our team that their main job was to “Con” and “Insult” you ( CONinSULTant ).  Their main job is to promote what they know/sell and sow seeds of doubt about what you do. Having said that please take my advice with a grain of salt (I am just relaying what I know/prefer).

Consultants need to rapidly convert you to their way of thinking (and services); consultants gloss over what they don’t know and lead you down a happy path to solution nirvana (often ignoring your legacy apps or processes; any roadblocks are relished as an opportunity for more money-making). This is great if you have endless buckets of money and want to rewrite things over and over.

Having consultants design and develop a solution is not all bad, but that would make this developer-focused blog post boring.

Microsoft IIS, Apache, NGINX and Lighttpd are all good web servers, but each has a different memory footprint, performance profile and feature set when delivering static v dynamic content, and each platform has a maximum number of concurrent users it can handle per second for a given server configuration.

You don’t need expensive solutions; read this blog post on “How I built an app with 500,000 users in 5 days on a $100 server”.

Snip: I assume my apps will be successful. There’s no point in building an app assuming it won’t be successful. I would not be able to sleep if my app gains traction and then dies due to bad tech. I bake minimum viable scalability principles into my app. It’s the difference between happiness and total panic. It’s what I think should be part of an app MVP (Minimum Viable Product).

Blind googling to find the best platform can be misleading as it is hard to compare apples to apples. Take your time and write some code and evaluate for yourself.

  • This guide highly recommends Microsoft .NET and IIS web servers: https://www.ageofascent.com/2016/02/18/asp-net-core-exeeds-1-15-million-requests-12-6-gbps/
  • This guide says G-WAN, NGINX and Apache are good http://gwan.com/benchmark

Once you start worrying about scalability you start to plan for multiple servers, load balancing, replication and caching, so be prepared to open your wallet.

I prefer the free NGINX, and if I need more grunt down the track I can move to NGINX Plus as it has loads of advanced scalability and caching options: https://www.nginx.com/products/. Alternatively, you can use XaaS for everything and have other people worry about uptime/scaling and data storage, but I find it is inevitable that you will need the flexibility of a self-managed server and FULL control of the core processes.

Golden rule = prove it is cheaper/faster/more reliable and don’t just trust someone. 

Common PaaS, SaaS and Self-Managed Server Vendors

Amazon AWS and Azure are the go-to cloud vendors, offering robust and flexible products.

Azure: https://azure.microsoft.com/en-us/

Amazon AWS: https://aws.amazon.com/

Google Cloud has many offerings but product selection is hard, prices are high, and Google tends to kill off products that don’t make money (e.g. Google Gears).

Google Cloud:

https://cloud.google.com/

Simple Self Managed Servers

If you want a server in the cloud on the cheap Linode and Digital Ocean have you covered.

  • Digital Ocean: http://www.digitalocean.com
  • Vultr: https://www.fearby.com/article/setting-vultr-vm-configuring/
  • Linode: https://www.linode.com/

High-End Corporate vendors

  • Rackspace: https://www.rackspace.com/en-au/cloud
  • IBM Cloud: http://www.ibm.com/cloud-computing/au/#infrastructure

Other vendors

  • Engineyard: http://www.engineyard.com/
  • Heroku: https://www.heroku.com/
  • Cloud66: http://www.cloud66.com/
  • Parse: DEAD

Moving to Cloud Pros

  • Lowers risk
  • Outsourced talent
  • Scale to millions of users/hits
  • Pay for what you use
  • Granular access
  • Potential savings

Moving to Cloud Cons

  • Usually billed in USD
  • Limited upload/downloads or API hits a day
  • Intentional tier pain points (Limited storage, hits, CPU, data transfers, Minimum servers).
  • Cheaper multi-tenant servers v expensive dedicated servers with dedicated support
  • Limited IOPS (e.g. 30 API hits a second, then $100 per additional 10 req/sec)
  • XaaS Price changes
  • Not fully integrated (still need code)
  • Latency between Services.
  • Limited access for developers (not granular enough).
  • Security

Vendors can change their prices whenever they want. I had a cluster of MongoDB servers running on AWS (via http://www.mongodb.com/cloud/) and one day they said they needed to increase their prices because they had underestimated the costs of the AWS servers. They gave me some credit but I was instantly paying more, and was also tied to USD (not AUD). A fall in the Australian dollar will impact bills in a big way.

Vendor Uptime:

Not all vendors are stable, do your research on who are the most reliable: https://cloudharmony.com/status

Quick Status Pages of leading vendors.

  • AWS: https://status.aws.amazon.com/
  • Azure: https://azure.microsoft.com/en-us/status/
  • Vultr: https://www.fearby.com/article/setting-vultr-vm-configuring/
  • Digital Ocean: https://status.digitalocean.com/
  • Google: https://status.cloud.google.com/
  • Heroku: https://status.heroku.com/
  • LiNode: https://status.linode.com/
  • Cloud66: http://status.cloud66.com/

Some vendors have patchy uptime

consultant_003

Management Software and Support:

Don’t lock in a vendor(s) until you have tested their services and management interfaces and can accurately forecast your future app costs.

I found that Digital Ocean was the simplest to get started with, had capped prices and had the best documentation. However, Digital Ocean does not sell advanced services or advanced support, and they did not have servers in Australia.

Google Cloud left a lot to be desired with product selection, setup and documentation. It did not take me long to realize I would be paying a lot more on Google platforms.

Azure was quite clean and crisp but lacked controls I was looking for. Azure is designed to be simple with a professional appearance (I found the default security was not high enough for my unmanaged Ubuntu servers). Azure was 4x the cost of Digital Ocean servers and 2x the cost of AWS.

AWS management interfaces were very confusing at first but support was not far away online.  AWS seemed to have the most accurate cost estimators and developer tools to make it my default choice.

Free Trials

When searching for a cloud provider to test look for free trials and have a play before you decide what is best.

https://aws.amazon.com/free/ – 12 Month free trial.

https://azure.microsoft.com/en-us/free/ –  $200 credit.

Digital Ocean: 2 months free for new customers.

Cloudant offered $50 free a month for a single multi-tenant NoSQL database, but after the IBM acquisition the costs seem steep (financing is available, so it must be expensive). I walked away from IBM because it was going to cost me $4,000 a month for 1 dedicated Cloudant CouchDB node.

Costs

It is hard to forecast your costs if you do not know what components you will use, what the CPU activity will be and what data will be delivered.

Google and AWS have a confusing mix of base rates, CPU credits, and data costs. You can boost your credits and usage but it will cost you compared to a flat rate server cost.

Digital Ocean and Linode offer great low rates for unmanaged servers and reasonable extra charges (other vendors will scalp you from the get-go), but they lack a global presence.

Azure is a tad more expensive than AWS and a lot more expensive than Digital Ocean.

At some point you need to spin up some servers, play around, and decide if you need to change to another vendor. I was tempted by IBM’s Cloudant CouchDB DBaaS but it would have been $4,000 USD a month (it did come with 24/7 techs who monitored the service for me).

Databases

Relational databases like MySQL and SQL Server are solid choices but replication can be tricky. See my guide here.

  • NoSQL databases are easier to scale up and out, but more care has to be given to the software controlling the data and handling collisions. Relational databases are harder to scale but are designed to enforce referential integrity.

Design what you need and then choose a relational, NoSQL or mix of databases. A good API will join a mix of databases and deliver the best of both worlds.

E.g. geographic data may best be served from MongoDB, and related customer data from MySQL or MS SQL Server.
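As a rough NodeJS sketch of that mix (the database names, credentials and fields here are made-up assumptions; the mysql and mongodb npm modules do the work):

// One API payload built from two databases (illustrative sketch).
const util = require('util');
const mysql = require('mysql');
const { MongoClient } = require('mongodb');

async function getCustomerView(customerId) {
  // Relational customer data from MySQL (create the pool once in a real app).
  const pool = mysql.createPool({ host: 'localhost', user: 'apiuser',
    password: 'secret', database: 'shop' });
  const query = util.promisify(pool.query).bind(pool);
  const customers = await query('SELECT id, name FROM customers WHERE id = ?', [customerId]);

  // Geographic data from MongoDB.
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const places = await client.db('geo').collection('places')
    .find({ customerId: customerId }).limit(10).toArray();
  await client.close();

  return { customer: customers[0], places: places }; // best of both worlds in one response
}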

Database cost will also impact your database decisions. E.g Why set up a SQL Server when a MySQL will do, why set up a Mongo DB cluster when a single MongoDB instance will do.

Also, when you scale out, database capabilities vary; the CAP theorem says you can generally only guarantee two of the following three:

  • Availability – Each client can always read and write data.
  • Consistency – All clients have the same view of the data
  • Partition Tolerance – The System works well despite physical network partitions.

nosql-triangle

Database decisions will impact the code and complexity of your application.

Website and API Endpoint

The website will be the glue that sticks all the pieces together. An API on a web server (e.g. https://www.myserver.com/api/v1/do/something/important) may trigger these actions (a code sketch follows this list).

  1. Check the request origin (Ip ban) – Check IP cache or request new IP lookup
  2. Validate SSL status.
  3. Check the users login tokens (are they logged in) – log output
  4. Check a database (MYSQL)
  5. Check for permissions – is this action allowed to happen?
  6. Check for rate-limiting – had a threshold been exceeded.
  7. Check another database (MongoDB)
  8. Prepare data
  9. Resolve the API request – return the data.
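Here is a minimal Express-style sketch of that pipeline (Express, the middleware names and the stub bodies are my illustrative assumptions, not a definitive implementation; SSL (step 2) is usually terminated by NGINX in front of Node):

// Sketch of the request pipeline above as Express middleware (illustrative).
const express = require('express');
const app = express();

const checkIpBan = (req, res, next) => { /* 1. IP ban check / IP cache lookup */ next(); };
const checkLogin = (req, res, next) => { /* 3. validate the user's login token */ next(); };
const checkPermissions = (req, res, next) => { /* 5. is this action allowed? */ next(); };
const rateLimit = (req, res, next) => { /* 6. has a threshold been exceeded? */ next(); };

app.get('/api/v1/do/something/important',
  checkIpBan, checkLogin, checkPermissions, rateLimit,
  (req, res) => {
    // 4, 7 and 8: query MySQL and MongoDB, then prepare the data
    res.json({ ok: true }); // 9. resolve the API request and return the data
  });

app.listen(3000);

Each middleware can reject early (e.g. res.status(403).end()), so failed checks never reach the databases.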

A web server then becomes very important as it is managing a lot. If you decide to use a remote “as a service” ID management API or application endpoint, would each of these steps happen in a reasonable time-frame? Stormpath may be a great service for ID auth, but I had issues with reliability early on and costs were unpredictable; Google Firebase is great for application endpoints but can be expensive.

Carefully evaluate the pros and cons of going DIY/self-managed versus a mix of “as a service” and fully “as a service”.

I find that NGINX and NodeJS are the perfect balance between cost, flexibility, scalability and risk [ link to my scalability guide ]. NodeJS is great for integrating MySQL, API or MongoDB calls into the back end in a non-blocking way. NodeJS can easily integrate caching and connection pooling to enhance throughput.
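A minimal sketch of what that pooling and caching can look like (the mysql npm module is assumed; the pool size, credentials and 5-second TTL are made-up values):

// Connection pool plus a tiny in-memory cache in front of MySQL (illustrative).
const mysql = require('mysql');
const pool = mysql.createPool({ connectionLimit: 100, host: 'localhost',
  user: 'apiuser', password: 'secret', database: 'mydb' });

const cache = new Map(); // key -> { rows, expires }
function cachedQuery(sql, params, ttlMs, cb) {
  const key = sql + JSON.stringify(params);
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return cb(null, hit.rows); // served from memory
  pool.query(sql, params, (err, rows) => { // non-blocking callback via the pool
    if (err) return cb(err);
    cache.set(key, { rows: rows, expires: Date.now() + ttlMs });
    cb(null, rows);
  });
}

// Cache this query's result for 5 seconds:
cachedQuery('SELECT id, name FROM customers WHERE id = ?', [42], 5000,
  (err, rows) => console.log(err || rows));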

Mulesoft is a good (but expensive) API development suite https://www.mulesoft.com/platform/api

Location, Latency and Local Networks.

You will want to keep all of your servers and services as close together as possible; don’t spin up a Digital Ocean server in Singapore if your customers are in Australia (the Netflix effect will see latency drop off a cliff at night). Also, having a database on one vendor and a web server on another vendor may add extra latency, so try to use the same vendor in the same data centre.

Don’t forget SSL will add about 40ms to any request locally (and up to 200ms for overseas servers); that does impact maximum concurrent users (but you need strong SSL).
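You can get a rough feel for this yourself by timing a request with Node’s built-in https module (a quick sketch; swap in your own URL, and note the total includes DNS, TCP and TLS handshakes):

// Rough HTTPS round-trip timing (illustrative).
const https = require('https');
const start = Date.now();
https.get('https://fearby.com/', (res) => {
  res.resume(); // drain the response body
  res.on('end', () => console.log('Request took ' + (Date.now() - start) + ' ms'));
}).on('error', console.error);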

  • Application scalability on a budget (my journey)
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
  • The quickest way to setup a scalable development ide and web server
  • More here: https://fearby.com/

Also, remember that servers may have performance limitations (maximum IOPS); sometimes you need to pay for higher IOPS or performance to get better throughput.

Security

Ensure that everything is secure and logged, and that you have some sort of IP banning or rate-limiting, session tokens/expiry and/or auto log-out.
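Here is a very small in-memory rate-limiter sketch (the window and limit values are made-up; in production you would back this with Redis or similar so limits survive restarts and scale across servers):

// Count hits per IP in a fixed window and flag anyone over the limit (illustrative).
const hits = new Map();
const WINDOW_MS = 60 * 1000; // 1-minute window (assumed)
const MAX_HITS = 100;        // max requests per IP per window (assumed)

function rateLimited(ip) {
  const now = Date.now();
  const entry = hits.get(ip) || { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) { // start a fresh window
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(ip, entry);
  return entry.count > MAX_HITS; // true -> block or log this request
}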

Your servers need to be patched and potential exploits monitored, don’t delay updating software like MySQL and OpenSSL when exploits are known.

Consider getting advice from a company like https://www.whitehack.com.au/ where they can review your code and perform penetration testing.

  • Beyond SSL with Content Security Policy, Public Key Pinning etc
  • Update OpenSSL on a Digital Ocean VM
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

You may want to limit the work you do on authorization management and get a third party to do it; https://www.okta.com/ or http://www.stormpath.com can help here.

You will certainly need to implement two-factor authentication, OAuth 2, session tokens, forward secrecy, rate limiting, IP logging and polymorphic data return via the API. Security is a big one.
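For session tokens with expiry, the core idea can be sketched in a few lines with Node’s crypto module (the secret, token format and one-hour lifetime are illustrative assumptions; a real app should use a vetted library such as a JWT implementation):

// Issue and verify a signed, expiring session token (illustrative sketch).
const crypto = require('crypto');
const SECRET = 'change-me'; // assumed secret; keep it out of source control

function issueToken(userId, ttlMs) {
  const payload = userId + '.' + (Date.now() + ttlMs);
  const sig = crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
  return payload + '.' + sig;
}

function verifyToken(token) {
  const parts = token.split('.');
  if (parts.length !== 3) return null;
  const payload = parts[0] + '.' + parts[1];
  const expected = crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
  if (parts[2].length !== expected.length ||
      !crypto.timingSafeEqual(Buffer.from(parts[2]), Buffer.from(expected))) return null;
  return Number(parts[1]) > Date.now() ? parts[0] : null; // null if expired or forged
}

console.log(verifyToken(issueToken('user-42', 60 * 60 * 1000))); // 'user-42'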

Here is a benchmark for an API hit overseas with and without SSL

consultant_002-1

Moving my Digital Ocean Server from Singapore to AWS in Australia dropped my API requests to under 200ms (SSL, complete authorization, logging and payload delivery).

Monitoring and Benchmarking

Monitoring your website’s health (CPU, RAM and disk) along with software and database monitoring is very important for maintaining a service.

https://keymetrics.io/ is a great NodeJS service and API monitoring application.

consultant_004

PM2 is a great node module that integrates Keymetrics with NodeJS.

Siege is a good command-line benchmark tool; check out my guide here.

http://www.loader.io is a great service for hitting your website from across the world.

AWS MongoDB Test

End to End Analytics

You should be capturing analytics from end to end (failed logins, invalid packets, user usage etc). Caching content and blocking bad users can then be implemented to improve performance.

Developer access

All platforms have varied ways of letting developers in to change things. I prefer the awesome http://www.c9.io for connecting to my servers.

C9 IDE

If you go with high-level SaaS (Microsoft CRM, Sitecore CRM etc) you may be locked into outdated software that is hard for developers to modify and support.

Don’t forget your customers.

At this point you will have a million thoughts on possible solutions and problems, but don’t forget to concentrate on what you are developing and whether it is viable. Have you validated customer needs, and will you be working to solve those problems?

Project Pre-Mortem

Don’t be afraid to research what could go wrong, are you about to spend money on adding another layer of software to improve something but not solve the problem at hand?

It is a good idea to quickly guess what could go wrong before deciding on a way forward.

  • Server scalability
  • Features not polished
  • Does not meet customer needs
  • Monetization Issues
  • Unknown usage costs
  • Bad advice from consultants
  • Vendors collapsing or being bought out.

Long game

Make sure you choose a vendor that won’t go broke. Smaller vendors like Parse were gobbled up by Facebook, and Facebook then closed Parse down, leaving customers in the lurch. Even C9.io has been purchased by AWS and its future is uncertain. Will Linode and Digital Ocean be able to compete against AWS and Azure? Don’t lock yourself into one solution and always have a backup plan.

Do

  • Do know what your goal is.
  • Make a start.
  • Iterate in public.
  • Test everything.

Don’t

  • Don’t trust what you have been told.
  • Don’t develop without a goal.
  • Don’t be attracted to buzzwords, new tech and shiny objects.

Good luck and happy coding.


Edit v1.11

Filed Under: Backup, Business, Cloud, Development, Hosting, Linux, MySQL, NodeJS, Scalability, Scalable, Security, ssl, Uncategorized Tagged With: digital disruption, Innovation

Creating and configuring a CentOS server on Digital Ocean

December 12, 2016 by Simon

Normally I prefer Ubuntu servers, but I needed a CentOS server (info) for some software. This is a quick guide listing anything that differs from setting up an Ubuntu server.

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Creating a CentOS Server

If you get stuck check out my Ubuntu Digital Ocean guide here or my Ubuntu on AWS guide here.

Login to Digital Ocean (get 2-month free by using my referral link here).

Update 2018: For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

Create and deploy a CentOS 7.2 64 bit Server at Digital Ocean.

Reset your root password (so we can log in via SSH and install NodeJS before we connect to C9.io).

Log in to the server with SSH.

Setting the timezone is different from Ubuntu.

# Use this command to find your nearest timezone
timedatectl list-timezones | grep Sydney
> Australia/Sydney

# Set the timezone by running this command
sudo timedatectl set-timezone Australia/Sydney

date

Setup the NTP time sync daemon

sudo yum install ntp
sudo systemctl start ntpd
sudo systemctl enable ntpd

Setup a Swap file

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
# Output: Setting up swapspace version 1, size = 4194300 KiB
# no label, UUID=xxxxxxx-xxxx-4axxxx63-xxxx-xxxxxxxxxxx
sudo swapon /swapfile
sudo sh -c 'echo "/swapfile none swap sw 0 0" >> /etc/fstab'

Update Packages

yum update && yum upgrade

Install Common Packages

yum install gcc
gcc --version

yum install wget
yum install p7zip

yum install java
java -version

Monitor Open Ports:

yum install nmap

# List open ports
nmap 127.0.0.1

More info on ports and firewalls here.

Install the Links text based browser

yum install links

links www.google.com

Install Apache

yum install httpd

Install the nano text editor

yum install nano

Configure Apache to listen on port 80: Edit /etc/httpd/conf/httpd.conf and add a Listen 80 directive.

Listen 80

Note the location of your http docs root folder (DocumentRoot: “/var/www/html“).

Firewall configuration (more info here)

# enable the firewall 
systemctl enable firewalld
systemctl start firewalld

#firewall status
systemctl status firewalld

#Configure the firewall
firewall-cmd --add-service=http

# Ensure it is running now (the enable command above makes it start at boot)
sudo systemctl start firewalld


# firewall status 
systemctl status firewalld

Buy a domain from NameCheap then go to Digital Ocean droplet list and assign the domain name to the droplet (see the guide here).

Summary:

A @ %YOUR_DROPLET_IPV4_ADDRESS%
A www %YOUR_DROPLET_IPV4_ADDRESS%
AAAA @ %YOUR_DROPLET_IPV6_ADDRESS%
CNAME * www.yournewdomainname.com
NS ns1.digitalocean.com.
NS ns2.digitalocean.com.
NS ns3.digitalocean.com.

Go to Namecheap and point the domain to custom DNS (ns1.digitalocean.com, ns2.digitalocean.com and ns3.digitalocean.com), then remove the placeholder “domain just provisioned” warning.

On your server (SSH) run the following command

hostnamectl set-hostname --static www.yourdomainname.com

Configure CentOS Apache to redirect www.yourdomain.com to yourdomain.com (guide here).

Wait for your domain to be available (can you access it via www.yourdomainname.com and yourdomainname.com?).

I did add these lines to the following file (/etc/cloud/templates/hosts.redhat.tmpl) as my www. subdomain was not loading (I am not sure if it helped).

# ipv4 config area
127.0.0.1 www.mydomainname.com www

# ipv6 config area
::1 www.mydomainname.com www

Then restarted the network.

service network restart
sudo systemctl status network.service
sudo systemctl restart httpd

I then wrote an .htaccess redirect to send all requests to www.mydomainname.com

cd /var/www/html
sudo nano .htaccess

Configure Apache Root Folder

# Redirect www to non-www
# RewriteEngine On
# RewriteBase /
# RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
# RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

# Redirect non-www to www
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

Now I can view my website 🙂

Time to create a default page.

echo "Hello World" > /var/www/html/index.html

Installing PHP

yum install php
systemctl restart httpd

Install MariaDB

yum install mariadb-server mariadb
systemctl start mariadb.service
systemctl enable mariadb.service
firewall-cmd --add-service=mysql

Configure MariaDB

/usr/bin/mysql_secure_installation

Install an FTP Server

yum install vsftpd

Todo: Add local user…

Install nodejs

yum install -y gcc-c++ make
curl -sL https://rpm.nodesource.com/setup_6.x | sudo -E bash -
yum install nodejs

Connect the server to c9.io

Now the server is ready to connect to c9.io.

Security

As a precaution, do check your website often in https://www.shodan.io and see if it has exposed software or is known to hackers.

Firewall was blocking connections after reboot

# --permanent makes the http rule survive reboots
firewall-cmd --add-service=http --zone=public --permanent

Install SSL

todo…

Lock down the server (remove the root account etc)

Todo: Remove root logins, secure mariadb etc

todo..

Thanks to the following guides that helped me:

http://www.tecmint.com/things-to-do-after-minimal-rhel-centos-7-installation/2/

https://www.digitalocean.com/community/tutorials/initial-server-setup-with-centos-7

https://www.digitalocean.com/community/tutorials/additional-recommended-steps-for-new-centos-7-servers


Filed Under: Domain, Hosting, Scalability, Security, ssl

  • Useful Linux Terminal Commands
  • What is the difference between 2D, 3D, 360 Video, AR, AR2D, AR3D, MR, VR and HR?
  • Application scalability on a budget (my journey)
  • Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc
  • Why I will never buy a new Apple Laptop until they fix the hardware cooling issues.

Wordpress

  • Replacing Google Analytics with Piwik/Matomo for a locally hosted privacy focused open source analytics solution
  • Setting web push notifications in WordPress with OneSignal
  • Telstra promised Fibre to the house (FTTP) when I had FTTN and this is what happened..
  • Check the compatibility of your WordPress theme and plugin code with PHP Compatibility Checker
  • Add two factor auth login protection to WordPress with YubiCo hardware YubiKeys and or 2FA Authenticator App
  • Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc
  • Upgraded to Wordfence Premium to get real-time login defence, malware scanner and two-factor authentication for WordPress logins
  • Wordfence Security Plugin for WordPress
  • Speeding up WordPress with the ewww.io ExactDN CDN and Image Compression Plugin
  • Installing and managing WordPress with WP-CLI from the command line on Ubuntu
  • Moving WordPress to a new self managed server away from CPanel
  • Moving WordPress to a new self managed server away from CPanel

General

  • Backing up your computer automatically with BackBlaze software (no data limit)
  • How to back up an iPhone (including photos and videos) multiple ways
  • US v Huawei: The battle for 5G
  • Using the WinSCP Client on Windows to transfer files to and from a Linux server over SFTP
  • Connecting to a server via SSH with Putty
  • Setting web push notifications in WordPress with OneSignal
  • Infographic: So you have an idea for an app
  • Restoring lost files on a Windows FAT, FAT32, NTFS or Linux EXT, Linux XFS volume with iRecover from diydatarecovery.nl
  • Building faster web apps with google tools and exceed user expectations
  • Why I will never buy a new Apple Laptop until they fix the hardware cooling issues.
  • Telstra promised Fibre to the house (FTTP) when I had FTTN and this is what happened..

Copyright © 2023 · News Pro on Genesis Framework · WordPress · Log in

Some ads on this site use cookies. You can opt-out if of local analytics tracking by scrolling to the bottom of the front page or any article and clicking "You are not opted out. Click here to opt out.". Accept Reject Read More
GDPR, Privacy & Cookies Policy

Privacy Overview

This website uses cookies to improve your experience while you navigate through the website. Out of these cookies, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may have an effect on your browsing experience.
Necessary
Always Enabled
Necessary cookies are absolutely essential for the website to function properly. This category only includes cookies that ensures basic functionalities and security features of the website. These cookies do not store any personal information.
Non-necessary
Any cookies that may not be particularly necessary for the website to function and is used specifically to collect user personal data via analytics, ads, other embedded contents are termed as non-necessary cookies. It is mandatory to procure user consent prior to running these cookies on your website.
SAVE & ACCEPT