
HomePi – Raspberry PI powered touch screen showing information from house-wide sensors

March 14, 2022 by Simon

This post is a work in progress (14/3/2022, v0.9.63 – PCBs v0.2 designed and ordered).

Summary

After watching this video from Jeff Geerling (demonstrating how to build an Air Quality Sensor) I decided to make two. But why not build something bigger?

I want to make a Raspberry Pi server with a touch screen to receive data from a dozen other WeMos sensors that I will build.

The Plan

Below is a rough plan of what I am building

In a nutshell, it will be 20x WeMos sensors (plus a weather station and CO2 sensors) recording data and talking to an API that saves to MySQL; the MySQL data is then read by a webpage and the touch screen panel.

I ordered all the parts from Amazon, BangGood, AliExpress, eBay, Core Electronics and Kogan.

Fresh Bullseye Install (Buster upgrade failed)

On 21/11/2021 I tried to manually update Buster to Bullseye (without taking a backup first (bad idea)). I followed this guide to reinstall Raspbian from scratch (this time with Bullseye).

Storage Type

Before I begin I need to decide what storage media to use on the Raspberry Pi. I hate how unreliable and slow MicroSD cards are. I tried using an old 128GB SATA SSD, a 1TB magnetic hard drive, a SATA M.2 SSD and an NVMe M.2 in a USB caddy.

I decided to use a spare 250GB SATA M.2 SSD from my son’s PC in a Geekworm X862 M.2 SATA expansion board.

With this board I can bolt the M.2 SSD into an expansion board under the Pi and power it from the Raspberry Pi USB port.

Nice and tidy

I zip-tied a fan to the side of the boards to add a little extra airflow over the solid state drive

32Bit, 64Bit, Linux or Windows

Before I begin I set up Raspbian on an empty MicroSD card (just to boot it up and flash the firmware to the latest version). This is very easy and documented elsewhere. I needed the latest firmware to ensure booting from the USB drive (not the MicroSD card) was working.

I ran rpi-update and flashed the latest firmware onto my Raspberry Pi. Good write-up here.
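
For reference, the firmware update command mentioned above is:

sudo rpi-update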

When my Raspberry Pi had the latest firmware I used the Raspberry Pi Imager to install the 32 Bit Raspberry Pi OS.

I do have an 8GB Raspberry Pi 4 B, and 64-bit operating systems do exist, but I stuck with 32-bit for compatibility.

Ubuntu 64bit for Raspberry Pi Links

  • Install Ubuntu on a Raspberry Pi | Ubuntu
    • Server Setup Links
      • How to install Ubuntu Server on your Raspberry Pi | Ubuntu
    • Desktop Setup Links
      • How to install Ubuntu Desktop on Raspberry Pi 4 | Ubuntu

Windows 10 for Raspberry Pi Links
  • https://docs.microsoft.com/en-us/windows/iot-core/tutorials/quickstarter/prototypeboards
  • https://docs.microsoft.com/en-us/answers/questions/492917/how-to-install-windows-10-iot-core-on-raspberry-pi.html
  • https://docs.microsoft.com/en-us/windows/iot/iot-enterprise/getting_started

Windows 11 for Raspberry Pi Links
  • https://www.youtube.com/user/leepspvideo
  • https://www.youtube.com/watch?v=WqFr56oohCE
  • https://www.worproject.ml

Setting up the Raspberry Pi Server

These are the steps I used to set up my Pi.

Dedicated IP

Before I began I ran ifconfig on my Pi to obtain my Raspberry Pi’s wireless card’s MAC address. I logged into my router and set up a dedicated IP (192.168.0.50); this way I have an IP address that remains the same.

Hostname

I set my hostname here

sudo nano /etc/hosts
sudo nano /etc/hostname

I verified my hostname with this command

hostname

I verified my IP address with this command

hostname -I

Samba Share

I set up the Samba service to allow me to copy files to and from the Pi

sudo apt-get update
sudo apt-get install samba samba-common-bin

I made a folder to share files

 mkdir ~/share

I edited the Samba config file

sudo nano /etc/samba/smb.conf

In the config file I set my workgroup settings


workgroup = Hyrule
wins support = yes

I defined a share at the bottom of the config file (and saved)

[PiShare]
comment=Raspberry Pi Share
path=/home/pi/share
browseable=Yes
writeable=Yes
only guest=no
create mask=0777
directory mask=0777
public=no
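
Not shown above: smbd must be restarted to pick up the new share definition.

sudo systemctl restart smbd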

I set an SMB password

sudo smbpasswd -a pi
New SMB password: ********
Retype new SMB password: ********

I tested the share from a Windows PC.

And the share is accessible on the Raspberry Pi

Great, now I can share files with drag and drop (instead of via SCP)
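
For reference, the share can also be mapped to a drive letter from a Windows command prompt (the Z: drive letter is arbitrary):

net use Z: \\192.168.0.50\PiShare /user:pi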

Mono

I know how to code C# Windows executables; I have 25 years’ experience. I do not want to learn Java or Python to code a GUI application for a touch screen if I can avoid it.

I set up Mono from Home | Mono (mono-project.com) to be able to run Windows C# EXEs on Raspbian

sudo apt install apt-transport-https dirmngr gnupg ca-certificates

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb https://download.mono-project.com/repo/debian stable-raspbianbuster main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list

sudo apt update

sudo apt install mono-devel

I copied an EXE I wrote in C# on Windows and ran it with Mono

sudo mono ~/HelloWorld.exe
Exe Test OK

This worked.

Nginx Web Server

I installed Nginx and configured it

sudo apt-get install nginx

I created a /www folder for nginx

sudo mkdir /www

I created a place-holder file in the www root

sudo nano /www/index.html

I set permissions to allow Nginx to access /www

sudo chown -R www-data:www-data /www

I edited the NginX config as required

sudo nano /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/nginx.conf 

I tested and reloaded the nginx config


sudo nginx -t
sudo nginx -s reload

I started NginX

sudo systemctl start nginx

I tested nginx in a web browser

NodeJS/NPM

I installed NodeJS

sudo apt update
sudo apt install nodejs npm -y

I verified Node was installed

nodejs --version
> v12.22.5

PHP

I installed PHP

sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg

echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update

sudo apt install -y php8.0-common php8.0-cli php8.0-xml

I verified PHP with this command

php --version

> PHP 8.0.13 (cli) (built: Nov 19 2021 06:40:53) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.13, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.13, Copyright (c), by Zend Technologies

I installed PHP-FPM

sudo apt-get install php8.0-fpm

I verified the PHP FPM sock was available before adding it to the NGINX Config

sudo ls /var/run/php*/**.sock
> /var/run/php/php8.0-fpm.sock  /var/run/php/php-fpm.sock
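
The Nginx side of this change is not shown; a minimal sketch of the PHP location block (using the snippets/fastcgi-php.conf that ships with Debian’s Nginx package and the socket verified above) looks like this:

location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
}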

I reviewed PHP Settings

sudo nano /etc/php/8.0/cli/php.ini 
sudo nano /etc/php/8.0/fpm/php.ini

I created a /www/ppp.php file with these contents

<?php
  phpinfo(); // all info
  // module info, phpinfo(8) identical
  phpinfo(INFO_MODULES);
?>

PHP is working

PHP Test OK

I changed these php.ini settings (fine for local development).

max_input_vars = 1000
memory_limit = 1024M
upload_max_filesize = 20M
post_max_size = 20M
display_errors = on
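
PHP-FPM must be restarted for php.ini changes to take effect:

sudo systemctl restart php8.0-fpm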

MySQL Database

I installed MariaDB

sudo apt install mariadb-server

I updated my pi password

passwd

I ran the Secure MariaDB Program

sudo mysql_secure_installation

After answering each prompt, I ran mysql as root to test MySQL.

PHPMyAdmin

I installed phpMyAdmin to be able to edit MySQL databases via the web.

I followed this guide to set up phpMyAdmin via lighttpd and then via Nginx.

I then logged into MySQL, set user permissions, created a test database and changed settings as required.
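
The exact permissions and names are not shown; as a sketch (with a hypothetical pihome database and pi user), the commands look like this:

sudo mysql -e "CREATE DATABASE pihome;"
sudo mysql -e "CREATE USER 'pi'@'localhost' IDENTIFIED BY '********';"
sudo mysql -e "GRANT ALL PRIVILEGES ON pihome.* TO 'pi'@'localhost'; FLUSH PRIVILEGES;"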

NginX to NodeJS API Proxy

I edited my Nginx config to create a Node API proxy.
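
The exact edit is not shown; a minimal sketch of such a proxy block (assuming the Node API listens on local port 3000) would sit inside the server block:

location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
}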

Test Webpage/API

Todo

I installed PM2, the NodeJS process manager

sudo npm install -g pm2 

Node apps can be started as a service from the CLI

pm2 start api_v1.js

PM2 status

pm2 status

You can delete node apps from PM2 (if desired)

pm2 delete api_v1.js
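
PM2 can also resurrect apps after a reboot (not covered above):

pm2 startup   # prints a command to register PM2 as a boot service
pm2 save      # saves the current process list so it is restored on boot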

Sending Email from CLI

I set up sendemail to allow emails to be sent from the CLI with these commands

 sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail  

I logged into my GSuite account and setup an alias and app password to use.

Now I can send emails from the CLI

sudo sendemail -f [email protected] -t [email protected] -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp **************

I added this to a Bash script (“/Scripts/Up.sh”) and added an event to send an email every 6 hours
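
Assuming the 6-hour schedule is driven by cron, the crontab entry (crontab -e) could look like this:

0 */6 * * * /Scripts/Up.sh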

7 Inch Full View LCD IPS Touch Screen 1024*600 

I purchased a 7″ touch screen from Banggood. I got a heads-up from the following video.

I plugged the touch USB cable into my Pi’s USB3 port. I plugged the HDMI adapter into the screen and the Pi (with the supplied mini plug).

I turned on the Pi and it works and looks amazing.

This is displaying a demo C# app I wrote. It’s running via mono.

I did have to add the following to config.txt to get native resolution. The manual on the supplied CD was helpful (but I did not check it at first).

max_usb_current=1
hdmi_force_hotplug=1
config_hdmi_boost=7
hdmi_group=2
hdmi_mode=1
hdmi_mode=87 
hdmi_drive=1
display_rotate=0
hdmi_cvt 1024 600 60 6 0 0 0
framebuffer_width=1024
framebuffer_height=600

PiJuice UPS HAT

I purchased an external LiPo UPS to keep the Raspberry Pi fed with power (even when the power goes out).

The stock battery was not charged and was quite weak when I first installed it. Do fully charge the battery before testing.

PiJuice

Stock Battery = 3.7V @ 1820mAh

Below are screenshots of the PiJuice setup.

PiJuice HAT Settings

PiJuice General Settings

General Settings

There is an option to set events for every button

Extensive screen to set button events

LED Status color and function

Set LED status and color

IO for the PiJuice Input. I will sort this out later.

PiJuice IO settings

A new firmware was available. I had v1.4

Update firmware screen

I updated the firmware

Firmware update worked

Firmware flash success

Battery settings

Battery Settings

PiJuice Button Config

Button config

Wake Up Alarm time and RTC

Clock Settings

System Settings

System Settings

System Events

system settings page

User Scripts

Define user scripts

I ordered a bigger battery as my screen, M.2, fan and UPS consume close to the maximum the stock battery can deliver.

10,000mAh battery

After talking with the seller of the battery, they advised I set up the 10,000mAh battery using the 1,000mAh battery profile in PiJuice but change the capacity and charge current:

  • Capacity: 10,000mAh
  • Charge current: 850mA

And for battery longevity, set the

  • Cutoff voltage: 3,250mV

Final Battery Setup

Battery settings based on the 1,000mAh battery profile: Capacity 10,000mAh, Charge current 850mA, Cutoff 3,250mV

WeMos Setup

I ordered 20x WeMos D1 Mini Pro (16Mbit) clones to run the sensors. I soldered the legs on in batches of 8.

WeMos installed on breadboards ready to solder pins

Soldering was not perfect but worked

20x soldered wemos

Soldering is not perfect but each joint was triple tested.

Close up of soldered joints

I only had one dead WeMos.

I will set up the final units on ProtoBoards.

Protoboard

20x WeMos ready for service with the external aerial glued down. The hot glue was a bad idea; I had to rotate a resistor under the hot glue.

20x wemos ready.

Revision

I ended up reordering the WeMos Minis and soldering on female headers so I can add OLED screens.

air mon enclosure

I added female headers to allow an OLED screen

new wemos

I purchased a microscope to be able to see better.

microscope

Each sensor will have a mini OLED screen.

mini oled screen

0.66″ OLED Screens

oled screen

I designed a PCB in Photoshop and had it turned into a PCB via https://www.fiverr.com/syedzamin12. I ordered 30x boards from https://jlcpcb.com/.

Custom PCB

The PCB’s fit inside the new enclosure perfectly

I am waiting for smaller screws to arrive.

PCB v0.2

I decided to design a board with 2 switches (and a light sensor to turn the screen off at night)

Breadboard Prototype

Prototype

I spoke to https://www.fiverr.com/syedzamin12 and within 24 hours a PCB was designed.

PCB Layers

This time I will get a purple PCB from JLCPCB and add a dinosaur for my son

Top PCB View

TOP PCB View

Back PCB View

Back PCB View

3D PC View

3D PCB view

JLCPCB made the board in 3 days

3 days

Now I need to wait a few weeks for the new PCB to arrive

Also, I finished the firmware for the v0.2 PCB.

I ordered some switches

I also ordered some reset buttons

I might add a larger 0.96″ OLED screen

Wifi and Static IP Test

I uploaded a sketch to each WeMos and tested the WiFi and the static IP that was given.

Sketch

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>


#define SERVER_IP "192.168.0.50"

#ifndef STASSID
#define STASSID "wifi_ssid_name"
#define STAPSK  "************"
#endif

void setup() {

  Serial.begin(115200);

  Serial.println();
  Serial.println();
  Serial.println();

  WiFi.begin(STASSID, STAPSK);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected! IP address: ");
  Serial.println(WiFi.localIP());

}

void loop() {
  // wait for WiFi connection
  if ((WiFi.status() == WL_CONNECTED)) {

    WiFiClient client;
    HTTPClient http;

    Serial.print("[HTTP] begin...\n");
    // configure target server and URL
    http.begin(client, "http://" SERVER_IP "/api/v1/test"); //HTTP
    http.addHeader("Content-Type", "application/json");

    Serial.print("[HTTP] POST...\n");
    // start connection and send HTTP header and body
    int httpCode = http.POST("{\"hello\":\"world\"}");

    // httpCode will be negative on error
    if (httpCode > 0) {
      // HTTP header has been sent and Server response header has been handled
      Serial.printf("[HTTP] POST... code: %d\n", httpCode);

      // file found at server
      if (httpCode == HTTP_CODE_OK) {
        const String& payload = http.getString();
        Serial.println("received payload:\n<<");
        Serial.println(payload);
        Serial.println(">>");
      }
    } else {
      Serial.printf("[HTTP] POST... failed, error: %s\n", http.errorToString(httpCode).c_str());
    }

    http.end();
  }

  delay(1000);
}

The WeMos booted, connected to WiFi, got an IP, and tried to POST a request to a URL.

........................................................
Connected! IP address: 192.168.0.51
[HTTP] begin...
[HTTP] POST...
[HTTP] POST... failed, error: connection failed

The POST failed because my Pi API server was off.

Touch Screen Enclosure

I constructed a basic enclosure and screwed the touch screen to it. I need to find a flexible black strip to put around the screen and cover up the gaps.

Wooden box with the screen in it

The touch screen has been screwed in.

Screen screwed in

Over the Air Updating

I followed this guide and now have the WeMos updatable over WiFi.

Basically, I installed the libraries “AsyncHTTPSRequest_Generic”, “AsyncElegantOTA”, “AsyncHTTPRequest_Generic”, “ESPAsyncTCP” and “ESPAsyncWebServer”.

Manage Libraries

A few libraries would not download so I manually downloaded the code from the GitHub repositories and then extracted them to my Documents\Arduino\libraries folder.

I then opened the example project “AsyncElegantOTA\ESP8266_Async_Demo”.

I reviewed the code

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>

const char* ssid = "........";
const char* password = "........";

AsyncWebServer server(80);


void setup(void) {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request) {
    request->send(200, "text/plain", "Hi! I am ESP8266.");
  });

  AsyncElegantOTA.begin(&server);    // Start ElegantOTA
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();
}

I added my WiFi SSID and password, saved the project, compiled the code and wrote it to my WeMos Mini D1.

I added LED Blink Code

void setup(void) {
  ...
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  ...
}
void loop(void) {
 ...
  delay(1000);                      // Wait one second
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH (the LED is active low)
  delay(1000);                      // Wait another second
 ...
}

I compiled and tested the code

Now, to get new code changes onto the WeMos Mini via a binary, I edited the code (changed the LED blink speed) and clicked “Export Compiled Binary”.

Compile Binary

When the binary compiled I opened the Sketch Folder

Show Sketch folder

I could see a bin file.

Bin File

I loaded the http://192.168.0.51/update and selected the bin file.

The new firmware applied.

Flashing

I navigated back to http://192.168.0.51

TIP: Ensure you add the starter sketch that has your wifi details in there.

Password Protection

I changed the code to add a basic password on access and on OTA update.

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>


//Saved Wifi Credentials (Research encryption later, or store in a FRAM module?)
const char* ssid = "your-wifi-ssid";
const char* password = "********";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(80);

void setup(void) {
  Serial.begin(115200);       //Serial Mode (Debug)
    
  WiFi.mode(WIFI_STA);        //Client Mode
  WiFi.begin(ssid, password); //Connect to Wifi
 
  Serial.println("");

  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    request->send(200, "text/plain", "Login Success! ESP8266 #001.");
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);

  
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();

  digitalWrite(LED_BUILTIN, LOW);
  delay(8000);                      // Wait eight seconds
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH (active low)
  delay(8000);                      // Wait eight seconds

}

Password prompt for users accessing the device.

Login screen

Password prompt for admin users accessing the device.

admin password protect

Later I will research encrypting the password and storing it on a SPIFFS partition or a FRAM memory module.

Adding the DHT22 Sensors

I received my pack of DHT22 sensors (AM2302).

Specifications

  • Operating Voltage: 3.5V to 5.5V
  • Operating current: 0.3mA (measuring) 60uA (standby)
  • Output: Serial data
  • Temperature Range: 0°C to 50°C
  • Humidity Range: 20% to 90%
  • Resolution: Temperature and Humidity both are 16-bit
  • Accuracy: ±1°C and ±1%

I wired it up based on this Adafruit post.

DHT22 Wired Up on a breadboard.

DHT22 and Basic API Working

I will not bore you with hours of coding and debugging, so here is my code that:

  • Allows the WeMos D1 Mini Pro (ESP8266) to connect to WiFi
  • Web server (with stats)
  • Admin page for OTA updates
  • Password protects the main web folder and OTA admin page
  • Reading DHT sensor values
  • Debug-to-serial toggle
  • LED activity toggle
  • JSON serialization
  • POSTs DHT22 data to an API on the Raspberry Pi
  • Placeholder for API return values
  • Automatically posts data to the API every 10 seconds
  • etc

Here is the work in progress ESP8266 code

#include <ESP8266WiFi.h>        // https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266WiFi/src/ESP8266WiFi.h
#include <ESPAsyncTCP.h>        // https://github.com/me-no-dev/ESPAsyncTCP
#include <ESPAsyncWebServer.h>  // https://github.com/me-no-dev/ESPAsyncWebServer
#include <AsyncElegantOTA.h>    // https://github.com/ayushsharma82/AsyncElegantOTA
#include <ArduinoJson.h>        // https://github.com/bblanchon/ArduinoJson
#include "DHT.h"                // https://github.com/adafruit/DHT-sensor-library
                                // Written by ladyada, public domain

//Todo: Add Authentication
//Fyi: https://api.gov.au/standards/national_api_standards/index.html

#include <ESP8266HTTPClient.h>  //POST Client

//Firmware Stats
bool bDEBUG = true;        //true = debug to Serial output
                           //false = no serial output
//Port Number for the Web Server
int WEB_PORT_NUMBER = 1337; 

//Post Sensor Data Delay
int POST_DATA_DELAY = 10000; 

bool bLEDS = true;         //true = Flash LED
                           //false =   NO LED's
//Device Variables
String sDeviceName = "ESP-002";
String sFirmwareVersion = "v0.1.0";
String sFirmwareDate = "27/10/2021 23:00";

String POST_SERVER_IP = "192.168.0.50";
String POST_SERVER_PORT = "";
String POST_ENDPOINT = "/api/v1/test";

//Saved Wifi Credentials (Research encryption later and store in a FRAM module?)
const char* ssid = "your_wifi_ssid";
const char* password = "***************";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(WEB_PORT_NUMBER);    //Feel free to change the port number

//DHT22 Temp Sensor
#define DHTTYPE DHT22   // DHT 22  (AM2302), AM2321
#define DHTPIN 5
DHT dht(DHTPIN, DHTTYPE);

//Common Variables
String thisBoard = ARDUINO_BOARD;
String sHumidity = "";
String sTempC = "";
String sTempF = "";
String sJSON = "{ }";

//DHT Variables
float h;
float t;
float f;
float hif;
float hic;


void setup(void) {

  //Turn On PIN
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  
  //Serial Mode (Debug)
  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // LED on (active low)
    delay(100);                       // brief 100ms flash
    digitalWrite(LED_BUILTIN, HIGH);  // LED off
    delay(100);
  }

  if (bDEBUG) Serial.begin(115200);
  if (bDEBUG) Serial.println("Serial Begin");

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // LED on (active low)
    delay(100);                       // brief 100ms flash
    digitalWrite(LED_BUILTIN, HIGH);  // LED off
    delay(100);
  }
  if (bDEBUG) Serial.println("Wifi Setup");
  if (bDEBUG) Serial.println(" - Client Mode");
  
  WiFi.mode(WIFI_STA);        //Client Mode
  
  if (bDEBUG) Serial.print(" - Connecting to Wifi: " + String(ssid));
  WiFi.begin(ssid, password); //Connect to Wifi
 
  if (bDEBUG) Serial.println("");
  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    if (bDEBUG) Serial.print(".");
  }
  if (bDEBUG) Serial.println("");
  if (bDEBUG) Serial.print("- Connected to ");
  if (bDEBUG) Serial.println(ssid);
  
  if (bDEBUG) Serial.print("IP address: ");
  if (bDEBUG) Serial.println(WiFi.localIP());

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // LED on (active low)
    delay(100);                       // brief 100ms flash
    digitalWrite(LED_BUILTIN, HIGH);  // LED off
    delay(100);
  }

  
  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    
        String sendHtml = "";
        sendHtml = sendHtml + "<html>\n";
        sendHtml = sendHtml + " <head>\n";
        sendHtml = sendHtml + " <title>ESP# 002</title>\n";
        sendHtml = sendHtml + " <meta http-equiv=\"refresh\" content=\"5\";>\n";
        sendHtml = sendHtml + " </head>\n";
        sendHtml = sendHtml + " <body>\n";
        sendHtml = sendHtml + " <h1>ESP# 002</h1>\n";
        sendHtml = sendHtml + " <h2>Debug</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Device Name: " + sDeviceName + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Version: " + sFirmwareVersion + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Date: " + sFirmwareDate + " </li>\n";
        sendHtml = sendHtml + " <li>Board: " + thisBoard + " </li>\n";
        sendHtml = sendHtml + " <li>Auto Refresh Root: On </li>\n";
        sendHtml = sendHtml + " <li>Web Port Number: " + String(WEB_PORT_NUMBER) +" </li>\n";
        sendHtml = sendHtml + " <li>Serial Debug: " + String(bDEBUG) +" </li>\n";
        sendHtml = sendHtml + " <li>Flash LED's Debug: " + String(bLEDS) +" </li>\n";
        sendHtml = sendHtml + " <li>SSID: " + String(ssid) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT TYPE: " + String(DHTTYPE) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT PIN: " + String(DHTPIN) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_DATA_DELAY: " + String(POST_DATA_DELAY) +" </li>\n";

        sendHtml = sendHtml + " <li>POST_SERVER_IP: " + String(POST_SERVER_IP) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_ENDPOINT: " + String(POST_ENDPOINT) +" </li>\n";
        
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <h2>Sensor</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Humidity: " + sHumidity + "% </li>\n";
        sendHtml = sendHtml + " <li>Temp: " + sTempC + "c, " + sTempF + "f. </li>\n";
        sendHtml = sendHtml + " <li>Heat Index: " + String(hic) + "c, " + String(hif) + "f.</li>\n";
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <h2>JSON</h2>";
        
        // Allocate the JSON document Object/Memory
        // Use https://arduinojson.org/v6/assistant to compute the capacity.
        StaticJsonDocument<250> doc;
        //JSON Values     
        doc["Name"] = sDeviceName;
        doc["humidity"] = sHumidity;
        doc["tempc"] = sTempC;
        doc["tempf"] = sTempF;
        doc["heatc"] = String(hic);
        doc["heatf"] = String(hif);
        
        sJSON = "";
        serializeJson(doc, sJSON);
        
        sendHtml = sendHtml + " <ul>" + sJSON + "</ul>\n";
        
        sendHtml = sendHtml + " <h2>Seed</h2>";
        long randNumber = random(100000, 1000000);
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <p>" + String(randNumber) + "</p>\n";
        sendHtml = sendHtml + " </ul>\n";
       
        sendHtml = sendHtml + " </body>\n";
        sendHtml = sendHtml + "</html>\n";
        //Send the HTML   
        request->send(200, "text/html", sendHtml);
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);
  
  server.begin();
  if (bDEBUG) Serial.println("HTTP server started");
 
  if (bDEBUG) Serial.println("Board: " + thisBoard);

  //Setup the DHT22 Object
  dht.begin();
  
}

void loop(void) {

  AsyncElegantOTA.loop();

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // LED on (active low)
    delay(100);                       // brief 100ms flash
    digitalWrite(LED_BUILTIN, HIGH);  // LED off
    delay(100);
  }


  //Display Temp and Humidity Data

  h = dht.readHumidity();
  t = dht.readTemperature();
  f = dht.readTemperature(true);

  // Check if any reads failed and exit early (to try again).
  if (isnan(h) || isnan(t) || isnan(f)) {
    if (bDEBUG) Serial.println(F("Failed to read from DHT sensor!"));
    return;
  }
  
  hif = dht.computeHeatIndex(f, h);         // Compute heat index in Fahrenheit (the default)
  hic = dht.computeHeatIndex(t, h, false);  // Compute heat index in Celsius (isFahreheit = false)

  if (bDEBUG) Serial.print(F("Humidity: "));
  if (bDEBUG) Serial.print(h);
  if (bDEBUG) Serial.print(F("%  Temperature: "));
  if (bDEBUG) Serial.print(t);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(f);
  if (bDEBUG) Serial.print(F("°F  Heat index: "));
  if (bDEBUG) Serial.print(hic);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(hif);
  if (bDEBUG) Serial.println(F("°F"));

  //Save for Page Load
  sHumidity = String(h,2);
  sTempC = String(t,2);
  sTempF = String(f,2);

  //Post to Pi API
    // Allocate the JSON document Object/Memory
    // Use https://arduinojson.org/v6/assistant to compute the capacity.
    StaticJsonDocument<250> doc;
    //JSON Values     
    doc["Name"] = sDeviceName;
    doc["humidity"] = sHumidity;
    doc["tempc"] = sTempC;
    doc["tempf"] = sTempF;
    doc["heatc"] = String(hic);
    doc["heatf"] = String(hif);
    
    sJSON = "";
    serializeJson(doc, sJSON);

    //Post to API
    if (bDEBUG) Serial.println(" -> POST TO API: " + sJSON);

   //Test POST
  
    if ((WiFi.status() == WL_CONNECTED)) {
  
      WiFiClient client;
      HTTPClient http;
  
    
      if (bDEBUG) Serial.println(" -> API Endpoint: http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT);
      http.begin(client, "http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT); //HTTP


      if (bDEBUG) Serial.println(" -> addHeader: \"Content-Type\", \"application/json\"");
      http.addHeader("Content-Type", "application/json");
  
      // start connection and send HTTP header and body
      int httpCode = http.POST(sJSON);
      if (bDEBUG) Serial.print("  -> Posted JSON: " + sJSON);
  
      // httpCode will be negative on error
      if (httpCode > 0) {
        // HTTP header has been send and Server response header has been handled

  
        //See https://api.gov.au/standards/national_api_standards/api-response.html 
        // Response from Server
        if (bDEBUG) Serial.println("  <- Return Code: " + String(httpCode));
                
        //Get the Payload
        const String& payload = http.getString();
          if (bDEBUG) Serial.println("   <- Received Payload:");
          if (bDEBUG) Serial.println(payload);
          if (bDEBUG) Serial.println("   <- Payload (httpcode: 201):");
          

        //Handle the HTTP code
        if (httpCode == 200) {
          if (bDEBUG) Serial.println("  <- 200: Invalid API Call/Response Code");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 201) {
          if (bDEBUG) Serial.println("  <- 201: The resource was created. The Response Location HTTP header SHOULD be returned to indicate where the newly created resource is accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 202) {
          if (bDEBUG) Serial.println("  <- 202: Is used for asynchronous processing to indicate that the server has accepted the request but the result is not available yet. The Response Location HTTP header may be returned to indicate where the created resource will be accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 400) {
          if (bDEBUG) Serial.println("  <- 400: The server cannot process the request (such as malformed request syntax, size too large, invalid request message framing, or deceptive request routing, invalid values in the request) For example, the API requires a numerical identifier and the client sent a text value instead, the server will return this status code.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 401) {
          if (bDEBUG) Serial.println("  <- 401: The request could not be authenticated.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 403) {
          if (bDEBUG) Serial.println("  <- 403: The request was authenticated but is not authorised to access the resource.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 404) {
          if (bDEBUG) Serial.println("  <- 404: The resource was not found.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 415) {
          if (bDEBUG) Serial.println("  <- 415: This status code indicates that the server refuses to accept the request because the content type specified in the request is not supported by the server");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 422) {
          if (bDEBUG) Serial.println("  <- 422: This status code indicates that the server received the request but it did not fulfil the requirements of the back end. An example is a mandatory field was not provided in the payload.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 500) {
          if (bDEBUG) Serial.println("  <- 500: An internal server error. The response body may contain error messages.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }

        
      } else {
        if (bDEBUG) Serial.println("   <- Unknown Return Code (ERROR): " + String(httpCode));
        //if (bDEBUG) Serial.printf("    " + http.errorToString(httpCode).c_str());
        
      }

    }

    if (bDEBUG) Serial.print("\n\n");

    delay(POST_DATA_DELAY);
  }

Here is a screenshot of the Arduino IDE Serial Monitor debugging the code

Serial Monitor

Here is a screenshot of the NodeJS API on the Raspberry Pi accepting the POSTed data from the ESP8266

API receiving data

Here is a sneak peek of the code accepting the POSTed data

API Code
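
Until the final code is published, here is a minimal sketch of what an Express-based api_v1.js accepting this POST could look like (Express, the port and the MySQL step are assumptions, not the final code):

// api_v1.js (hypothetical sketch; assumes Nginx proxies /api/ to port 3000)
const express = require('express');
const app = express();

app.use(express.json()); // parse application/json request bodies

// Matches the endpoint the ESP8266 sketch POSTs to
app.post('/api/v1/test', (req, res) => {
  console.log('Sensor data from ' + req.body.Name + ':', req.body);
  // TODO: write humidity/tempc/tempf/heatc/heatf to MySQL
  res.status(201).json({ status: 'created' }); // 201 = resource created
});

app.listen(3000, () => console.log('API listening on port 3000'));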

The final code will be open sourced.

API with 2x sensors (18x more soon)

I built 2 sensors (on Breadboards) to start hitting the API

2 sensors on a breadboard

18 more sensors are ready for action (after I get temporary USB power sorted).

18x Sensors

PiJuice and Battery save the Day

I accidentally used my Pi for a few hours (to develop the API) and I realised the power to the PiJuice was not connected.

The PiJuice worked a treat and supplied the Pi from the battery.

Battery power was disconnected

I plugged the power back in after 25% of the battery had drained.

Power Restored

Research and Setup TRIM/Defrag on the M.2 SSD

Todo: Research

Add a Buzzer to the Raspberry Pi and Connect to the PiJuice No Power Event

Todo

Wire Up a Speaker to the PiJuice

Todo: Figure out custom scripts and add a piezo speaker to the PiJuice to alert me of issues in future.

Add buttons to the enclosure

Todo

Add email alerts from the system

I logged into Google G Suite (my domain’s email provider) and set up an email alias for my domain (“[email protected]”). I added this alias to Gmail (logged in with my G Suite account).

I created an app-specific password at G Suite to allow my Pi to use a dedicated password to access my email.

I installed these packages on the Raspberry Pi

sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail    

I can run this command to send an email to my primary email

sudo sendemail -f [email protected] -t [email protected]_domain.com -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected]_domain.com -xp ********************

The email arrives from the Raspberry Pi

Test Email Screenshot

PiJuice Alerts (email)

I created some Python scripts and configured PiJuice to email me.

user scripts

I assigned the scripts to Events

Added functions

Python Script (CronJob) to email the battery level every 6 hours

Todo

Building the Co2/PM2.5 Sensors

Todo: (Waiting for parts)

The AirGradient PCBs have arrived.

AirGradient PCBs

NodeJS API writing to MySQL/Influx etc

Todo: Save Data to a Database

Setup 20x WeMos External Antennas (DONE, I ordered new factory-rotated resistors)

I assumed the WeMos D1 Mini Pros with external antennas were actually using the external antenna. Wrong.

I have to move 20x resistors (1 per WeMos) to switch over to the external antenna.

This will be fun, as I added hot glue over the area where the resistor is to hold down the antenna.

Reading configuration files via SPIFFS

Todo

Power over Ethernet (PoE) (SKIP, will use plain old USB wall plugs)

Todo: Passive PoE (5V, 3.3V)?

Building a C# GUI for the Touch Panel

Todo (Started coding this)


Building the enclosures for the sensors

Designed and ordered the PCB; firmware next.

Custom PCB?

Yes, See above

Backing up the Raspberry Pi M.2 Drive

This is quite easy as the M.2 drive is connected via a USB plug. I shut down the Pi and plugged the M.2 board into my PC.

I then backed up the entire disk to my PC with Acronis software (review here).

I now have a complete backup of my Pi on a remote network share (and my primary pc).

Version History

v0.9.63 – PCB v0.2 designed and ordered.

v0.9.62 – 3/2/2022 Update

v0.9.61 – New Nginx, PHP, MySQL etc

v0.9.60 – Fresh Bullseye install (Buster upgrade failed)

v0.951 Email code in PiJuice

v0.95 Added Email Code

v0.94 Added Todo Areas.

v0.93 2x Sensors hitting the API, 18x sensors ready, Air Gradient

v0.92 DHT22 and Basic API

v0.91 Password Protection

v0.9 Final Battery Setup

v0.8 OTA Updates

v0.7 Screen Enclosure

v0.6 Added Wifi Test Info

v0.5 Initial Post

Filed Under: Analytics, API, Arduino, Cloud, Code, GUI, IoT, Linux, MySQL, NGINX, NodeJS, OS Tagged With: api, ESP8266, MySQL, nginx, raspberry pi, WeMos

Recovering a Dead Nginx, Mysql, PHP WordPress website

July 10, 2021 by Simon

(laptrinhx.com – do not steal this post)

In early 2021 www.fearby.com died and it was all my fault. Here is my breakdown of the events.

On the 4th of January 2021, I woke to see my website not loading.

Cloudflare reporting my website was unavailable.

https://www.fearby.com had 2 servers.

  • Web server (www.fearby.com)
  • Database server (db.fearby.com)

Upon investigating why my website was down, I found out that WordPress could not talk to the database server. My attempt to log into the database server via SSH failed (no response). Logging into my db.fearby.com server via the root console did not work either.

I was locked out of my own server, and memory told me it was caused by me playing with fail2ban and other system auditing tools a few months earlier.

I tried restoring the db.fearby.com server from backups (one at a time). I had the last 7 days as individual backups. I had no luck; all of my backups were no good (I sat on the problem too long).

View of the last 7 days of backups.

In mid-2020 I locked myself out of db.fearby.com (SSH and root console) because I set up an aggressive fail2ban, AIDE intrusion detection system(s) and firewall rules. I could no longer access my db.fearby.com server via SSH or the root console. The database server was still operational and I foolishly left it running (with no access).

I did not know how to (or have enough time to) reset the root password on the Debian server. I was unable to think of a fix to restore my website. I should have reset the root password; it is easy to do thanks to a post from Janne Roustemaa – How to reset root password on cloud server.

A few months ago I finally found out how to reset the root password of a Debian server.

How to Reset the Root Password on a Debian server on UpCloud

I followed Janne Roustemaa’s guide here: How to reset root password on cloud server.

I logged into the UpCloud Hub.

UpCloud Hub (login page)

I shut down db.fearby.com from the UpCloud Dashboard.

I created a backup of db.fearby.com (just in case). I upgraded my backup plan from 1 backup every 7 days to daily backups for 7 days and weekly backups for 1 month.

Database Backup plan selection

Deploy a temporary server to reset the root password

I deployed a new temporary (cheap) server alongside the dead server in Chicago.

Get $25 free credit on UpCloud and deploy your own server: Use this link to get $25 credit (new UpCloud users only).

Deploy $5/m server in Chicago.

I called the server “recovery.fearby.com” and set Debian 9 as the Operating System (same as the dead server).

Name: Recovery.fearby.com, Debian 9

I added the command “shutdown -h 1” to the Initialization script to ensure the server shuts down after it was deployed. 

I can only add the disk to the new server if the old db.fearby.com server is shut down.

Deploy server

I shut down db.fearby.com and recovery.fearby.com servers.

Shutting down servers GUI

Both servers have shut down

Shutdown

Detach the disk from db.fearby.com

I detached the disk from the db.fearby.com server.

In the UpCloud Dashboard, I opened the db.fearby.com server and clicked the Resize tab.

Resize Disk

I clicked the Detach button.

Detach Disk

I clicked Continue

Continue

Attach db.fearby.com disk to recovery.fearby.com

Now I attached this disk as a secondary disk onto the recovery.fearby.com server.

In the UpCloud hub I clicked Servers then selected the recovery.fearby.com server, then clicked the Resize Tab

Resize recovery.fearby.com

I scrolled down and clicked Attach existing storage

Attach Disk

Attach existing storage dialogue

Attach device Dialog

I selected the system disk from the db.fearby.com (that I detached earlier)

Attach Disk

I clicked Add a storage device button

Attach Disk dialog

Now I have attached the storage from db.fearby.com and attached it to recovery.fearby.com as a secondary disk.

2 Disks attached

Starting the recovery.fearby.com server

I started the recovery.fearby.com server by clicking Start

Start

The server is starting

Starting

When the server started, I obtained its IP and connected to it with MobaXTerm.

MobaXTerm SSH Client.

Now I can access the db.fearby.com disk.

I do not want to reset the root password until I undelete the files I need.

Viewing Disks

I ran this command to verify the attached disks.

lsblk

Two disks were visible

2 disk were visible.

Alternatively, I can view partitions with the following command

cat /proc/partitions
major minor  #blocks  name
 254        0   26214400 vda
 254        1   26213376 vda1
 254       16   52428800 vdb
 254       17   52427776 vdb1

I can see partition data with these commands

recovery.fearby.com disk: /dev/vda1

fdisk -l /dev/vda1
Disk /dev/vda1: 25 GiB, 26842497024 bytes, 52426752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

db.fearby.com disk: /dev/vdb1

fdisk -l /dev/vdb1
Disk /dev/vdb1: 50 GiB, 53686042624 bytes, 104855552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

I ran this command to mount the second disk

mount /dev/vdb1 /mnt
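
As an aside, when hunting for deleted files it is safer to mount read-only so nothing further is written to the damaged file system:

mount -o ro /dev/vdb1 /mnt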

I looked in the “/mnt/Backup” folder as this was where I had daily MySQL dumps saving to on the old system.

The folder was empty. It looks like the file system was corrupted.

Finding Deleted Files

Assuming the file system was corrupt, I installed TestDisk, as I wanted to recover deleted SQL dump backups in the backup folder.

sudo apt-get update
sudo apt-get install testdisk

I confirmed PhotoRec (installed as part of the testdisk package) was available

photorec --version

I ran PhotoRec and passed in the disk as a parameter

sudo photorec /dev/vdb1

I selected the Disk /dev/vdb1 – 53GB and pressed enter

I selected ext4 partition and pressed enter (not whole disk)

I selected ext2/ext3/ext4 filesystem and pressed enter

I selected Free to only scan in unallocated space and pressed enter

When asked to choose the recovery location to restore files to, I selected /recovery (on the recovery.fearby.com disk, not the db.fearby.com disk).

I just realized that the recovery.fearby.com disk is 25GB and the db.fearby.com disk is 50GB; let’s hope I do not have more than 25GB of deleted files (or I will have to delete recovery.fearby.com, deploy a 100GB system and run undelete again).

Recovered Files

After 20 minutes 32,809 files were recovered to /recovery/recup_dir

I pressed CTRL+C to exit photorec

I changed directory to /recovery

cd /recovery

I counted the recovered files

find . -type f | wc -l
>32810

Recovered files were placed in subfolders.

drwxr-xr-x 68 root root  4096 Jan 19 13:28 .
drwxr-xr-x 23 root root  4096 Jan 19 13:28 ..
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.1
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.10
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.11
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.12
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.13
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.14
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.15
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.16
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.17
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.18
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.19
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.2
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.20
drwxr-xr-x  2 root root 24576 Jan 19 13:22 recup_dir.21
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.22
drwxr-xr-x  2 root root 24576 Jan 19 13:22 recup_dir.23
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.24
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.25
drwxr-xr-x  2 root root 32768 Jan 19 13:22 recup_dir.26
drwxr-xr-x  2 root root 32768 Jan 19 13:22 recup_dir.27
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.28
drwxr-xr-x  2 root root 32768 Jan 19 13:22 recup_dir.29
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.3
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.30
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.31
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.32
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.33
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.34
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.35
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.36
drwxr-xr-x  2 root root 32768 Jan 19 13:22 recup_dir.37
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.38
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.39
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.4
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.40
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.41
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.42
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.43
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.44
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.45
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.46
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.47
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.48
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.49
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.5
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.50
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.51
drwxr-xr-x  2 root root 20480 Jan 19 13:25 recup_dir.52
drwxr-xr-x  2 root root 20480 Jan 19 13:25 recup_dir.53
drwxr-xr-x  2 root root 20480 Jan 19 13:25 recup_dir.54
drwxr-xr-x  2 root root 20480 Jan 19 13:25 recup_dir.55
drwxr-xr-x  2 root root 20480 Jan 19 13:26 recup_dir.56
drwxr-xr-x  2 root root 20480 Jan 19 13:26 recup_dir.57
drwxr-xr-x  2 root root 24576 Jan 19 13:26 recup_dir.58
drwxr-xr-x  2 root root 36864 Jan 19 13:26 recup_dir.59
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.6
drwxr-xr-x  2 root root 20480 Jan 19 13:26 recup_dir.60
drwxr-xr-x  2 root root 20480 Jan 19 13:26 recup_dir.61
drwxr-xr-x  2 root root 20480 Jan 19 13:27 recup_dir.62
drwxr-xr-x  2 root root 20480 Jan 19 13:27 recup_dir.63
drwxr-xr-x  2 root root 20480 Jan 19 13:27 recup_dir.64
drwxr-xr-x  2 root root 20480 Jan 19 13:28 recup_dir.65
drwxr-xr-x  2 root root 12288 Jan 19 13:28 recup_dir.66
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.7
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.8
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.9
66 folders were recovered

I calculated the folder size

sudo  du -sh /recovery
13G     /recovery

I immediately started downloading ALL recovered files (13GB) to my local PC with MobaXTerm

On recovery.fearby.com I searched for any trace of recovered *.sql files that I was dumping daily via cron jobs

find /recovery -name "*.sql"

Many pages of recovered sql files were listed

/recovery/recup_dir.49/f36487168.sql
/recovery/recup_dir.49/f35110912.sql
/recovery/recup_dir.49/f36667392.sql
/recovery/recup_dir.49/f36995072.sql
/recovery/recup_dir.49/f35667968.sql
/recovery/recup_dir.49/f35078144.sql
/recovery/recup_dir.49/f37535744.sql
/recovery/recup_dir.49/f36913152.sql
/recovery/recup_dir.49/f33996800.sql
/recovery/recup_dir.49/f36421632.sql
/recovery/recup_dir.49/f35061760.sql
/recovery/recup_dir.49/f36143104.sql
/recovery/recup_dir.49/f36618240.sql
/recovery/recup_dir.49/f34979840.sql
/recovery/recup_dir.49/f37273600.sql
/recovery/recup_dir.49/f35995648.sql
/recovery/recup_dir.49/f36241408.sql
/recovery/recup_dir.49/f37732360.sql
/recovery/recup_dir.49/f34603008.sql
/recovery/recup_dir.49/f33980416.sql
/recovery/recup_dir.1/f0452896.sql
/recovery/recup_dir.1/f0437184.sql
/recovery/recup_dir.1/f0211232.sql
/recovery/recup_dir.17/f16547840.sql
/recovery/recup_dir.17/f16252928.sql
/recovery/recup_dir.17/f15122432.sql
/recovery/recup_dir.17/f17159720.sql
/recovery/recup_dir.17/f15089664.sql
/recovery/recup_dir.17/f15958016.sql
/recovery/recup_dir.17/f15761408.sql
/recovery/recup_dir.57/f69582848.sql
/recovery/recup_dir.57/f69533696.sql
/recovery/recup_dir.57/f69173248.sql
/recovery/recup_dir.57/f68321280.sql
/recovery/recup_dir.57/f70483968.sql
/recovery/recup_dir.57/f70746112.sql
/recovery/recup_dir.57/f68730880.sql
/recovery/recup_dir.57/f67862528.sql
/recovery/recup_dir.57/f70123520.sql
/recovery/recup_dir.57/f68337664.sql
/recovery/recup_dir.57/f70172672.sql
/recovery/recup_dir.57/f71057408.sql
/recovery/recup_dir.57/f68796416.sql
/recovery/recup_dir.57/f70533120.sql
/recovery/recup_dir.57/f69419008.sql
/recovery/recup_dir.57/f68239360.sql
/recovery/recup_dir.57/f69779456.sql
/recovery/recup_dir.57/f68255744.sql
/recovery/recup_dir.57/f67764224.sql
/recovery/recup_dir.57/f71204864.sql
/recovery/recup_dir.57/f70336512.sql
/recovery/recup_dir.57/f68501504.sql
/recovery/recup_dir.57/f67944448.sql
/recovery/recup_dir.50/f39059456.sql
/recovery/recup_dir.50/f38518784.sql
/recovery/recup_dir.50/f40206336.sql
/recovery/recup_dir.50/f40927232.sql
/recovery/recup_dir.50/f39485440.sql
/recovery/recup_dir.50/f39092224.sql
/recovery/recup_dir.50/f40861696.sql
/recovery/recup_dir.50/f39731200.sql
/recovery/recup_dir.50/f40337408.sql
/recovery/recup_dir.50/f38862848.sql
/recovery/recup_dir.50/f41664512.sql
/recovery/recup_dir.50/f41074688.sql
/recovery/recup_dir.50/f40828928.sql
/recovery/recup_dir.50/f41713664.sql
/recovery/recup_dir.50/f38092800.sql
/recovery/recup_dir.50/f39878656.sql
/recovery/recup_dir.50/f38305792.sql
/recovery/recup_dir.50/f38830080.sql
/recovery/recup_dir.50/f39534592.sql
/recovery/recup_dir.50/f39813120.sql
/recovery/recup_dir.50/f40435712.sql
/recovery/recup_dir.50/f41467904.sql
/recovery/recup_dir.50/f37901728.sql
/recovery/recup_dir.50/f38682624.sql
/recovery/recup_dir.50/f38191104.sql
/recovery/recup_dir.50/f38174720.sql
/recovery/recup_dir.50/f40878080.sql
//and many more

It would take a few hours to download 32,000 files from the other side of the world. Next time I deploy fearby.com, I will deploy it to Sydney Australia.

It looks like MobaXTerm does not copy to the destination that I selected when dragging and dropping files. Being impatient, I scanned my system for a file that MobaXTerm had copied: MobaXTerm saves downloads to “%Documents%\MobaXterm\splash\tmp\dragdrop\”.

Sql file contents

I opened some of the recovered SQL files, and it looks like all the files were partial and not whole database backups. A complete MySQL dump should be over 200 lines long. Dang.

I downloaded all the files I could from these folders (see the rsync sketch after this list):

  • /mnt/etc/nginx
  • /mnt/var/lib/mysql
  • /mnt/Scripts
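
In hindsight, pulling whole folders with rsync over SSH would have been far kinder than dragging 32,000 files through MobaXterm. A minimal sketch (the local destination paths are illustrative):

rsync -avz --progress root@recovery.fearby.com:/mnt/etc/nginx/ ./recovered-nginx/
rsync -avz --progress root@recovery.fearby.com:/mnt/var/lib/mysql/ ./recovered-mysql/
rsync -avz --progress root@recovery.fearby.com:/mnt/Scripts/ ./recovered-scripts/

rsync only transfers what is missing, so an interrupted download can be resumed.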

Disappointed, I sat on things for a few weeks. I thought I had lost my website.

Try 2 – Reinstall a fresh db.fearby.com

After I copied files from the old db.fearby.com server to recovery.fearby.com, I reattached the storage to the old db.fearby.com system.

In vain, I tried mending the broken MySQL service on db.fearby.com. I tried uninstalling and reinstalling MySQL on db.fearby.com (each time, no luck). I was having trouble with MySQL not starting.

I went down too many rabbit holes (MySQL errors) to list. I thought my database was corrupt.

I was kicking myself for letting my access to db.fearby.com lapse, and for not having more backups.

Try 3 – Reinstall MySQL

I tried deploying a new server and setting it up; maybe out of frustration, I did not do it correctly. I had HTTPS issues with Cloudflare. I gave up for a few months.

Try 4 – Check for Database Corruption

I downloaded DiskInternals – MySQL Recovery and scanned my database. To my amazement, it reported no database corruption.

All tables loaded

Maybe I can recover my website?

Try 5 – Using RunCloud.io to deploy a website

Having failed with setting up a server from scratch, a friend (Hi Zach) said I should try RunCloud to deploy a server.

I tried to document everything from attaching RunCloud to UpCloud and Cloudflare’s API to deploying a server.

Long story short, RunCloud is not for me. I had too many issues: IPv6, Cloudflare API integration, no SSH access to my server and no NGINX editing capabilities with RunCloud.

RunCloud errors

I deleted the RunCloud deployed server.

Try 6 – 10 minutes deploying a server by hand

I ended up deploying a server in 10 minutes manually without taking notes.

Summary

I deployed a server (this time to Sydney).

I installed the UFW firewall (configured and started).

sudo apt-get install ufw
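
A rough sketch of the configure-and-start part (mirroring the ports I allow elsewhere in this post: SSH, HTTP and HTTPS):

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status numbered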

Set the timezone

sudo timedatectl set-timezone Australia/Sydney

Installed the NTP time server

sudo apt-get install ntp

I installed PHP 7.4 and PHP 7.4 FPM (I cheated and Googled: https://www.cloudbooklet.com/install-php-7-4-on-debian-10/ )

I edited PHP Config

sudo nano /etc/php/7.4/fpm/php.ini
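
These would be the same kinds of tweaks I list for the PHP 7.2 install later in this post (assuming the 7.2 values carry over to 7.4):

cgi.fix_pathinfo=0
max_input_vars = 1000
memory_limit = 1024M
upload_max_filesize = 20M
post_max_size = 20M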

Installed the NGINX web server

sudo apt-get install nginx

Configured NGINX (I got a Cloudflare to NGINX SSL certificate). I also signed up for a Cloudflare SSL certificate.

sudo nano /etc/nginx/nginx.conf

Contents

worker_cpu_affinity auto;
worker_rlimit_nofile 100000;
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        multi_accept on;
}

http {
        root /www-root;
        client_max_body_size 10M;

        proxy_connect_timeout 1200s;
        proxy_send_timeout 1200s;
        proxy_read_timeout 1200s;
        fastcgi_send_timeout 1200s;
        fastcgi_read_timeout 1200s;

        ##
        # Basic Settings
        ##
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;

        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##


        gzip on;
        gzip_disable "msie6";

        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;

        gzip_types text/plain application/xml;
        gzip_min_length 256;


        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

I edited the default site config: sudo nano /etc/nginx/sites-available/default

Contents

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # SSL configuration
        #
        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;

        ssl_certificate /path/to/ssl/certs/cert.pem;
        ssl_certificate_key /path/to/ssl/private/key.pem;

        root /path-to-www;

        # Add index.php to the list if you are using PHP
        
        index index.html index.php;

        server_name fearby.com;

        #Security Headers
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade";
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Frame-Options SAMEORIGIN always;
        add_header Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()";

        # Force HTTPS
        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        # DENY RULES
        location ~ /\.ht {
                deny all;
        }
        location ~ ^/\.user\.ini {
                deny all;
        }
        location ~ (\.ini) {
                return 403;
        }

        if ($http_referer ~* "laptrinhx.com") {
                return 404;
        }

        if ($http_referer ~* "bdev.dev") {
                return 404;
        }

        if ($http_referer ~* "raoxyz.com") {
                return 404;
        }

        if ($http_referer ~* "congtyaz.com") {
                return 404;
        }


        location / {
                try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        }

        # DNS
        resolver 1.1.1.1 1.0.0.1 valid=60s;
        resolver_timeout 1m;
}

I tested Nginx and PHP.

I installed MySQL (I Googled a guide)

I created a database, database user and assigned permissions for my blog.
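
Roughly like this (the database, user and password names are placeholders; match them to your wp-config.php):

sudo mysql

CREATE DATABASE myblogdb;
CREATE USER 'bloguser'@'localhost' IDENTIFIED BY 'use-a-strong-password';
GRANT ALL PRIVILEGES ON myblogdb.* TO 'bloguser'@'localhost';
FLUSH PRIVILEGES;
EXIT;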

I installed the WordPress CLI tool

I installed WordPress using the wp-cli tool

wp core install --url=example.com --title=Example --admin_user=supervisor --admin_password=strongpassword [email protected]

Once I had a blank WordPress install, I uploaded the blog folder that I had backed up earlier.

I ran this command to allow Nginx to read the backed-up website files

sudo chown -R www-data:www-data /path-to-www

I also uploaded the backup of my MySQL database to /var/lib/mysql/oldblogdatabase

I ran this command to allow MySQL to read the backed-up database

sudo chown -R mysql:mysql /var/lib/mysql/oldblogdatabase

I also uploaded the following files to /var/lib/mysql

TIP: These files are very important to restore. You cannot just copy a database folder into a subfolder; MySQL (InnoDB) also needs these system files:

  • ibdata1
  • ib_logfile0
  • ib_logfile1
  • ibtmp1

I ran this command to allow MySQL to read the ib* files

sudo chown -R mysql:mysql /var/lib/mysql/ib*
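
I then restarted MySQL and checked that the restored database and its tables were visible:

sudo service mysql restart
sudo mysql -e "SHOW DATABASES;"
sudo mysql -e "SHOW TABLES FROM oldblogdatabase;"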

I was able to load my old website.

Blog Up

I still have some issues to solve but it is back.

Lessons Learned

  1. Save all passwords, have backup accounts, and act before your working backups are gone.
  2. Set up Dev, Test and Pre-Prod environments and do not test on production servers.
  3. Do not delay disaster recovery actions.
  4. Do not rely on automation.
  5. Have more than a week's worth of backups (see the sketch after this list).
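
On lesson 5, a minimal nightly dump schedule is cheap insurance. A sketch (the cron file path, backup folder and retention period are placeholders):

# /etc/cron.d/blog-backup - nightly MySQL dump at 2am, keep 14 days
0 2 * * * root mysqldump --single-transaction oldblogdatabase | gzip > /backup/blog-$(date +\%F).sql.gz
5 2 * * * root find /backup -name 'blog-*.sql.gz' -mtime +14 -delete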

Get $25 free credit on UpCloud and deploy your own server: Use this link to get $25 credit (new UpCloud users only).

 

Change Log

Version 1.2


I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance.

December 22, 2020 by Simon

I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance. Here is what I did to set up a complete Ubuntu 18.04 system (NGINX, PHP, MySQL, WordPress etc). This is not a paid review (just me documenting my steps over 2 days).

Background (CPanel hosts)

In 1999 I hosted my first domain (www.fearby.com) on a host in Seattle (for $10 USD a month), the host used CPanel and all was good.  After a decade I was using the domain more for online development and the website was now too slow (I think I was on dial-up or ADSL 1 at the time). I moved my domain to an Australian host (for $25 a month).

After 8 years the domain host was sold and performance remained mediocre. After another year the new host was sold again and performance was terrible.

I started receiving Resource Limit Is Reached warnings (basically this was a plot by the new CPanel host to say “Pay us more and this message will go away”).

Page load times were near 30 seconds.

cpanel_usage_exceeded

The straw that broke the camel’s back was their demand of $150/year for a dodgy SSL certificate.

I needed to move to a self-managed server where I was in control.

Buying a Domain Name

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Self Managed Server

I found a good web IDE ( http://www.c9.io/ ) that allowed me to connect to a cloud VM.  C9 allowed me to open many files and terminal windows and reconnect to them later. Don’t get excited, though, as AWS has purchased C9 and it’s not the same.

C9 IDE

C9 IDE

I spun up a Digital Ocean Server at the closest data centre in Singapore. Here was my setup guide creating a Digital Ocean VM, connecting to it with C9 and configuring it. I moved my email to G Suite and moved my WordPress to Digital Ocean (other guides here and here).

I was happy since I could now send emails via CLI/code, set up free SSL certs, add second domain email to G Suite and Secure G Suite. No more usage limit errors either.

Self-managing a server requires more work, but it is more rewarding (flexible, faster and cheaper). Page load times were now near 20 seconds (a 10-second improvement).

Latency Issue

Over 6 months, performance on Digital Ocean (in Singapore) from Australia started to drop (mentioned here).  I tried upgrading the memory but that did not help (latency was king).

Moved the website to Australia

I moved my domain to Vultr in Australia (guide here and here). All was good for a year until traffic growth started to increase.

Blog Growth

I tried upgrading the memory on Vultr, set up PHP child workers and set up Cloudflare.

GT Metrix scores were about a “B” and Google Page Speed Scores were in the lower 40’s. Page loads were about 14 seconds (5-second improvement).

Tweaking WordPress

I set up an image compression plugin in WordPress then set up a cloud image compression and CDN Plugin from the same vendor.  Page Speed info here.

GT Metrix scores were now occasionally an “A” and Page Speed scores were in the lower 20’s. Page loads were about 3-5 seconds (10-second improvement).

A mixed bag from Vultr (more optimisation and performance improvements were needed).

This screenshot shows poor www.gtmetrix.com scores, poor Google PageSpeed index scores and the upgrade from 1GB to 2GB of memory on my server.

Google Chrome Developer Console Audit Results on Vultr hosted website were not very good (I stopped checking as nothing helped).

This is a screenshot showing poor site performance (screenshot taken in Google Dev tools audit feature)

The problem was the Vultr server (400km away in Sydney) was offline (my issue), and everything above (adding more memory, adding 2x CDNs (EWWW and Cloudflare), adding PHP child workers etc.) did not seem to help.

Enter UpCloud…

Recently, a friend sent a link to a blog article about a host called “UpCloud” who promised “Faster than SSD” performance.  This can’t be right: “Faster than SSD”? I was intrigued. I wanted to check it out as I thought nothing was faster than SSD (well, maybe RAM).

I signed up for a trial and ran a disk IO test (read the review here) and I was shocked. It’s fast. Very fast.

Summary: UpCloud was twice as fast (Disk IO and CPU) as Vultr (+ an optional $4/m firewall and $3/m for 1x backup).

This is a screenshot showing Vultr.com servers getting half the read and write disk io performance compared to upcloud.com.

FYI: labels above are in KBytes per second. iozone loops through all file sizes from 4 KB to 16,348 KB and measures the reads per second. To be honest, the meaning of the numbers doesn’t interest me; I just want to compare apples to apples.

This is an image showing an iozone results breakdown chart (KBytes per second on the vertical axis, file size on the horizontal axis and transfer size on the third axis)

(image snip from http://www.iozone.org/ which explains the numbers)
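
If you want to run a quick comparison yourself, iozone’s automatic mode is enough. A sketch (the -g flag caps the maximum test file size so the run stays short):

sudo apt-get install iozone3
iozone -a -g 1g | tee iozone-results.txt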

I might have to copy my website to UpCloud and see how fast it is.

Where to Deploy and Pricing

UpCloud Pricing: https://www.upcloud.com/pricing/

UpCloud Pricing

UpCloud does not have a data centre in Australia yet so why choose UpCloud?

Most of my site’s visitors are based in the US and UpCloud have disk IO twice as fast as Vultr (win-win?).  I could deploy to Chicago?

This image shows most of my visitors are in the US

My site’s traffic is growing and I need to ensure the site is fast enough in the future.

This image shows that most of my site's visitors are hitting my site on weekdays.

Creating an UpCloud VM

I used a friend’s referral code and signed up to create my first VM.

FYI: use my Referral code and get $25 free credit.  Sign up only takes 2 minutes.

https://www.upcloud.com/register/?promo=D84793

When you click the link above you will receive $25 to try out servers for 3 days. You can exit the trial by making a one-time deposit of $10 into UpCloud.

Trial Limitations

The trial mode restrictions are as follows:

* Cloud servers can only be accessed using SSH, RDP, HTTP or HTTPS protocols
* Cloud servers are not allowed to send outgoing e-mails or to create outbound SSH/RDP connections
* The internet connection is restricted to 100 Mbps (compared to 500 Mbps for non-trial accounts)
* After your 72-hour free trial, your services will be deleted unless you make a one-time deposit of $10

UpCloud Links

The UpCloud support page is located here: https://www.upcloud.com/support/

  • Quick start: Introduction to UpCloud
  • How to deploy a Cloud Server
  • Deploy a cloud server with UpCloud’s API

More UpCloud links to read:

  • Two-Factor Authentication on UpCloud
  • Floating IPs on UpCloud
  • How to manage your firewall
  • Finalizing deployment

Signing up to UpCloud

Navigate to https://upcloud.com/signup and add your username, password and email address and click signup.

New UpCloud Signup Page

Add your address and payment details and click proceed (you don’t need to pay anything; $1 may be charged and instantly refunded to verify the card).

Add address and payment details

That’s it, check your email.

Signup Done

Look for the UpCloud email and click https://my.upcloud.com/

Check Email

Now login

Login to UpCloud

Now I can see a dashboard 🙂

UpCloud Dashboard

I was happy to see 24/7 support is available.

This image shows the www.upcloud.com live chat

I opted in for the new dashboard

UpCloud's new dashboard

Deploy My First UpCloud Server

This is how I deployed a server.

Note: If you are going to deploy a server consider using my referral code and get $25 credit for free.

Under the “deploy a server” widget I named the server and chose a location (I think I was supposed to use an FQDN, e.g. “fearby.com”). The deployment worked though. I clicked continue, then more options were made available:

  1. Enter a short server description.
  2. Choose a location (Frankfurt, Helsinki, Amsterdam, Singapore, London and Chicago)
  3. Choose the number of CPU’s and amount of memory
  4. Specify disk number/names and type (MaxIOPS or HDD).
  5. Choose an Operating System
  6. Select a Timezone
  7. Define SSH Keys for access
  8. Allowed login methods
  9. Choose hardware adapter types
  10. Where to send the login password

Deploy Server

FYI: How to generate a new SSH Key (on OSX or Ubuntu)

ssh-keygen -t rsa

Output

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /temp/example_rsa
Enter passphrase (empty for no passphrase): *********************************
Enter same passphrase again:*********************************
Your identification has been saved in /temp/example_rsa.
Your public key has been saved in /temp/example_rsa.pub.
The key fingerprint is:
SHA256:########################### [email protected]
Outputted public and private key

Did the key export? (yes)

> /temp# ls /temp/ -al
> drwxr-xr-x 2 root root 4096 Jun 9 15:33 .
> drwxr-xr-x 27 root root 4096 Jun 8 14:25 ..
> -rw------- 1 user user 1766 Jun 9 15:33 example_rsa
> -rw-r--r-- 1 user user 396 Jun 9 15:33 example_rsa.pub

“example_rsa” is the private key and “example_rsa.pub” is the public key.

  • The public key needs to be added to the server to allow access.
  • The private key needs to be added to any local ssh program used for remote access.

Initialisation script (after deployment)

I was pleased to see an initialization script section that calls actions after the server is deployed. I configured the initialisation script to pull down a few GB of backups from my Vultr website in Sydney (files now removed).

This was my Initialisation script:

#!/bin/bash
echo "Downloading the Vultr websites backups"
mkdir /backup
cd /backup
wget -O www-mysql-backup.sql https://fearby.com/.../www-mysql-backup.sql
wget -O www-blog-backup.zip https://fearby.com/.../www-blog-backup.zip

Confirm and Deploy

I clicked “Confirm and deploy” but I had an alert that said trial mode can only deploy servers up to 1024MB of memory.

This image shows I can't deploy servers with 2GB in trial mode

Exiting UpCloud Trial Mode

I opened the dashboard and clicked My Account then Billing; I could see the $25 referral credit, but I guess I can’t use that in trial mode.

I exited trial mode by depositing $10 (USD).

View Billing Details

Make a manual 1-time deposit of $10 to exit trial mode.

Deposit $10 to exit the trial

FYI: Server prices are listed below (or view prices here).

UpCloud Pricing

Now I can go back and deploy the server with the same settings above (1x CPU, 2GB Memory, Ubuntu 18.04, MaxIOPS Storage etc)

Deployment takes a few minutes and, depending on what you specified, the login password may be emailed to you.

UpCloud Server Deployed

The server is now deployed; now I can connect to it with my SSH program (vSSH).  Simply add the server’s IP, username, password and the SSH private key (generated above) to your ssh program of choice.

fyi: The public key contents start with “ssh-rsa”.

This image shows me connecting to my sever via ssh

I noticed that the initialisation script downloaded my 2+GB of files already. Nice.

UpCloud Billing Breakdown

I can now see on the UpCloud billing page in my dashboard that credit is deducted daily (68c); at this rate, I have 49 days credit left?

Billing Breakdown

I can manually deposit funds or set up automatic payments at any time 🙂

UpCloud Backup Options

You do not need to set up backups, but if you want to be able to roll back (if things stuff up) it is a good idea. Backups are an additional charge.

I have set up automatic daily backups with auto deletion after 2 days.

To view the backup schedule, click on your deployed server then click Backup.

List of UpCloud Backups

Note: Backups are charged at $0.056 for every GB stored – so $5.60 for every 100GB per month (half that for 50GB etc)

You can take manual backups at any time (and only be charged for the hour)

UpCloud Firewall Options

I set up a firewall at UpCloud to only allow the minimum number of ports (UpCloud DNS, HTTP, HTTPS and My IP to port 22).  The firewall feature is charged at $0.0056 an hour ($4.03 a month)

I love the ability to set firewall rules on incoming, destination and outgoing ports.

To view your firewall, click on your deployed server then click Firewall.

UpCloud firewall

Update: I modified my firewall to allow inbound ICMP (IPv4/IPv6) and UDP (IPv4/IPv6) packets.

(Note: Old firewall screenshot)

Firewall Rules Allow port 80, 443 and DNS

Because my internet provider has a dynamic IP, I set up a VPN with a static IP and whitelisted it for backdoor access.

Local Ubuntu ufw Firewall

I duplicated the rules in my local ufw (2nd level) firewall (and blocked mail)

sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 80                         ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] 25                         DENY OUT    Anywhere                   (out)
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 22                         ALLOW IN    REMOVED (MY WHITELISTED IP))
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 25 (v6)                    DENY OUT    Anywhere (v6)              (out)
[10] 53                         ALLOW IN    2a04:3540:53::1
[11] 53                         ALLOW IN    2a04:3544:53::1

UpCloud Download Speeds

I pulled down a 1.8GB Ubuntu 18.04 Desktop ISO 3 times from gigenet.com and the file downloaded in 32 seconds (57MB/sec). Nice.

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:04-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso'

ubuntu-18.04-desktop-amd64.iso 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:02:37 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:46-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.1'

ubuntu-18.04-desktop-amd64.iso.1 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:19 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.1' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:03:23-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.2'

ubuntu-18.04-desktop-amd64.iso.2 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:56 (56.8 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.2' saved [1921843200/1921843200]

Install Common Ubuntu Packages

I installed common Ubuntu packages.

apt-get install zip htop ifstat iftop bmon tcptrack ethstatus speedometer iozone3 bonnie++ sysbench siege tree unzip jq ncdu pydf ntp rcconf ufw iperf nmap

Timezone

I checked the server’s time (I thought this was auto-set before I deployed).

$hwclock --show
2018-06-06 23:52:53.639378+0000

I reset the time to Australia/Sydney.

dpkg-reconfigure tzdata
Current default time zone: 'Australia/Sydney'
Local time is now: Thu Jun 7 06:53:20 AEST 2018.
Universal Time is now: Wed Jun 6 20:53:20 UTC 2018.

Now the timezone is set 🙂

Shell History

I increased the shell history.

HISTSIZE=10000
HISTCONTROL=ignoredups

SSH Login

I created a ~/.ssh/authorized_keys file and added my SSH public key to allow password-less logins.

mkdir ~/.ssh
sudo nano ~/.ssh/authorized_keys

I added my public SSH key, then exited the ssh session and logged back in. I can now log in without a password.
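
FYI: sshd silently ignores the authorized_keys file if its permissions are too open; if you still get a password prompt, tighten them:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys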

Install NGINX

apt-get install nginx

nginx/1.14.0 is now installed.

A quick GT Metrix test.

This image shows awesome static nginx performance ratings of of 99%

Install MySQL

Run these commands to install and secure MySQL.

apt install mysql-server
mysql_secure_installation

Securing the MySQL server deployment.
> Would you like to setup VALIDATE PASSWORD plugin?: n
> New password: **********************************************
> Re-enter new password: **********************************************
> Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
> Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
> Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
> Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
> Success.

I disabled the validate password plugin because I hate it.

MySQL Ver 14.14 Distrib 5.7.22 is now installed.

Set MySQL root login password type

Set MySQL root user to authenticate via “mysql_native_password”. Run the “mysql” command.

mysql
SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user | authentication_string | plugin | host |
+------------------+-------------------------------------------+-----------------------+-----------+
| root | | auth_socket | localhost |
| mysql.session | hidden | mysql_native_password | localhost |
| mysql.sys | hidden | mysql_native_password | localhost |
| debian-sys-maint | hidden | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now let’s set the root password authentication method to “mysql_native_password”

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '*****************************************';
Query OK, 0 rows affected (0.00 sec)

Check authentication method.

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user | authentication_string | plugin | host |
+------------------+-------------------------------------------+-----------------------+-----------+
| root | ######################################### | mysql_native_password | localhost |
| mysql.session | hidden | mysql_native_password | localhost |
| mysql.sys | hidden | mysql_native_password | localhost |
| debian-sys-maint | hidden | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now we need to flush permissions.

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Done.

Install PHP

Install PHP 7.2

apt-get install software-properties-common
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install -y php7.2
php -v

PHP 7.2.5, Zend Engine v3.2.0 with Zend OPcache v7.2.5-1 is now installed. Do update PHP frequently.

I made the following changes in /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Install PHP Modules

sudo apt-get install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml

Install PHP FPM

apt-get install php7.2-fpm

Configure PHP FPM config.

Edit /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Reload the PHP service and check its status.

sudo service php7.2-fpm restart
sudo service php7.2-fpm status


Configuring NGINX

If you are not comfortable editing NGINX config files read here, here and here.

I made a new “www root” folder, set permissions and created a default html file.

mkdir /www-root
chown -R www-data:www-data /www-root
echo "Hello World" >> /www-root/index.html

I edited the “root” key in the “/etc/nginx/sites-enabled/default” file and set the root to a new location (e.g., “/www-root”)

I added these performance tweaks to /etc/nginx/nginx.conf

> worker_cpu_affinity auto;
> worker_rlimit_nofile 100000;

I added the following lines to the “http {” section in /etc/nginx/nginx.conf

client_max_body_size 10M;

gzip on;
gzip_disable "msie6";
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_types
application/atom+xml
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
font/opentype
image/bmp
image/x-icon
text/cache-manifest
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
#text/html is always compressed by gzip module

gzip_proxied any;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

Check NGINX Status

service nginx status
* nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 21:16:28 AEST; 30min ago
Docs: man:nginx(8)
Main PID: # (nginx)
Tasks: 2 (limit: 2322)
CGroup: /system.slice/nginx.service
|- # nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
`- # nginx: worker process

Install Open SSL that supports TLS 1.3

This is a work in progress. The steps work just fine for me on Ubuntu 16.04 but not on Ubuntu 18.04.

Installing Adminer MySQL GUI

I will use the PHP-based Adminer MySQL GUI to export and import my blog from one server to another. All I needed to do was install it on both servers (a simple one-file download).

cd /utils
wget -O adminer.php https://github.com/vrana/adminer/releases/download/v4.6.2/adminer-4.6.2-mysql-en.php

Use Adminer to Export My Blog (on Vultr)

On the original server open Adminer (http) and..

  1. Login with the MySQL root account
  2. Open your database
  3. Choose “Save” as the output
  4. Click on Export

This image shows the export of the wordpress adminer page

Save the “.sql” file.

Use Adminer to Import My Blog (on UpCloud)

FYI: Depending on the size of your database backup you may need to temporarily increase your upload and post sizes limits in PHP and NGINX before you can import your database.

Edit /etc/php/7.2/fpm/php.ini
> upload_max_filesize = 100M
> post_max_size = 100M

And edit /etc/nginx/nginx.conf
> client_max_body_size 100M;

Don’t forget to reload NGINX config and restart NGINX and PHP. Take note of the maximum allowed file size in the screenshot below. I temporarily increased my upload limits to 100MB in order to restore my 87MB blog.
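
The reload/restart step is just this (the same test/reload/restart commands I use later in this post):

sudo nginx -t
sudo nginx -s reload
sudo service php7.2-fpm restart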

Now I could open Adminer on my UpCloud server.

  1. Create a new database
  2. Click on the database and click Import
  3. Choose the SQL file
  4. Click Execute to import it

Import MySQL backup with Adminer

Don’t forget to create a user and assign permissions (as required – check your wp-config.php file).

Import MySQL Database

Tip: Don’t forget to lower the maximum upload file size and max post size after you import your database.

Cloudflare DNS

I use Cloudflare to manage DNS, so I need to tell it about my new server.

You can get your server’s IP details from the UpCloud dashboard.

Find IP

At Cloudflare update your DNS details to point to the server’s new IPv4 (“A Record”) and IPv6 (“AAAA Record”).

Cloudflare DNS

Domain Error

I waited an hour and my website was suddenly unavailable.  At first, I thought this was Cloudflare forcing the redirection of my domain to HTTPS (which was not yet set up).

DNS Not Replicated Yet

I chatted with UpCloud support on their webpage and they kindly assisted me in diagnosing all the common issues (DNS values, DNS replication, Cloudflare settings), and the error was pinpointed to my NGINX installation. All NGINX config settings were OK from what we could see. I uninstalled NGINX and reinstalled it (and that fixed it). Thanks UpCloud Support 🙂

Reinstalled NGINX

sudo apt-get purge nginx nginx-common

I reinstalled NGINX and reconfigured /etc/nginx/nginx.conf (I downloaded my SSL cert from my old server just in case).

Here is my /etc/nginx/nginx.conf file.

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log /var/log/nginx/www-nginxcriterror.log crit;

events {
        worker_connections 768;
        multi_accept on;
}

http {

        client_max_body_size 10M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        server_tokens off;

        server_names_hash_bucket_size 64;
        server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/www-access.log;
        error_log /var/log/nginx/www-error.log;

        gzip on;

        gzip_vary on;
        gzip_disable "msie6";
        gzip_min_length 256;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

Here is my /etc/nginx/sites-available/default file (fyi, I have not fully re-setup TLS 1.3 yet so I commented out the settings)

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
server {
        root /www-root;

        # Listen Ports
        listen 80 default_server;
        listen [::]:80 default_server;
        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name www.fearby.com fearby.com localhost;

        # HTTPS Cert
        ssl_certificate /etc/nginx/ssl-cert-path/fearby.crt;
        ssl_certificate_key /etc/nginx/ssl-cert-path/fearby.key;
        ssl_dhparam /etc/nginx/ssl-cert-path/dhparams4096.pem;

        # HTTPS Ciphers
        
        # TLS 1.2
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

        # TLS 1.3			#todo
        # ssl_ciphers 
        # ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DES-CBC3-SHA;
        # ssl_ecdh_curve secp384r1;

        # Force HTTPS
        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        # HTTPS Settings
        server_tokens off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 30m;
        ssl_session_tickets off;
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
	#ssl_stapling on; 						# Requires nginx >= 1.3.7

        # Cloudflare DNS
        resolver 1.1.1.1 1.0.0.1 valid=60s;
        resolver_timeout 1m;

        # PHP Memory 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

	# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            try_files $uri =404;
            # include snippets/fastcgi-php.conf;

            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php7.2-fpm.sock;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # fastcgi_pass 127.0.0.1:9000;
	    }

        location / {
            # try_files $uri $uri/ =404;
            try_files $uri $uri/ /index.php?q=$uri&$args;
            index index.php index.html index.htm;
            proxy_set_header Proxy "";
        }

        # Deny Rules
        location ~ /\.ht {
                deny all;
        }
        location ~ ^/\.user\.ini {
            deny all;
        }
        location ~ (\.ini) {
            return 403;
        }

        # Headers
        location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }

}

SSL Labs SSL Certificate Check

All good thanks to the config above.

SSL Labs

Install WP-CLI

I don’t like setting up FTP to auto-update WordPress plugins. I use the WP-CLI tool to manage WordPress installations by the command line. Read my blog here on using WP-CLI.

Download WP-CLI

mkdir /utils
cd /utils
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Move WP-CLI to the bin folder as “wp”

chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

Test wp

wp --info
OS: Linux 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:15:17 UTC 2018 x86_64
Shell: /bin/bash
PHP binary: /usr/bin/php7.2
PHP version: 7.2.5-1+ubuntu18.04.1+deb.sury.org+1
php.ini used: /etc/php/7.2/cli/php.ini
WP-CLI root dir: phar://wp-cli.phar
WP-CLI vendor dir: phar://wp-cli.phar/vendor
WP_CLI phar path: /www-root
WP-CLI packages dir:
WP-CLI global config:
WP-CLI project config:
WP-CLI version: 1.5.1

Update WordPress Plugins

Now I can run “wp plugin update” to update all WordPress plugins

wp plugin update
Enabling Maintenance mode...
Downloading update from https://downloads.wordpress.org/plugin/wordfence.7.1.7.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wp-meta-seo.3.7.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wordpress-seo.7.6.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Disabling Maintenance mode...
Success: Updated 3 of 3 plugins.
+---------------+-------------+-------------+---------+
| name | old_version | new_version | status |
+---------------+-------------+-------------+---------+
| wordfence | 7.1.6 | 7.1.7 | Updated |
| wp-meta-seo | 3.7.0 | 3.7.1 | Updated |
| wordpress-seo | 7.5.3 | 7.6.1 | Updated |
+---------------+-------------+-------------+---------+

Update WordPress Core

WordPress core files can be updated with “wp core update”.

wp core update
Success: WordPress is up to date.

Troubleshooting: Use the flag “--allow-root” if wp needs higher access (an unsafe action though).

Install PHP Child Workers

I edited the following file to setup PHP child workers /etc/php/7.2/fpm/pool.d/www.conf

Changes

> pm = dynamic
> pm.max_children = 40
> pm.start_servers = 15
> pm.min_spare_servers = 5
> pm.max_spare_servers = 15
> pm.process_idle_timeout = 30s;
> pm.max_requests = 500;
> php_admin_value[error_log] = /var/log/www-fpm-php.www.log
> php_admin_value[memory_limit] = 512M

Restart PHP

sudo service php7.2-fpm restart

Test NGINX config, reload NGINX config and restart NGINX

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Output (14 workers are ready)

Check PHP Child Worker Status

sudo service php7.2-fpm status
* php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 19:32:47 AEST; 20s ago
Docs: man:php-fpm7.2(8)
Main PID: # (php-fpm7.2)
Status: "Processes active: 0, idle: 15, Requests: 2, slow: 0, Traffic: 0.1req/sec"
Tasks: 16 (limit: 2322)
CGroup: /system.slice/php7.2-fpm.service
|- # php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
- # php-fpm: pool www

Memory Tweak (set at your own risk)

sudo nano /etc/sysctl.conf

vm.swappiness = 1

Setting swappiness to a value of 1 all but disables the swap file and tells the operating system to aggressively use RAM; a value of 10 is safer. Only set this if you have enough memory available (and free).
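
The sysctl.conf change only applies at boot; to apply it immediately:

sudo sysctl vm.swappiness=1
cat /proc/sys/vm/swappiness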

Possible swappiness settings:

> vm.swappiness = 0 Swap is disabled. In earlier versions, this meant that the kernel would swap only to avoid an out-of-memory condition when free memory fell below the vm.min_free_kbytes limit; in later versions, this is achieved by setting it to 1. [2]
> vm.swappiness = 1 Kernel version 3.5 and over, as well as Red Hat kernel version 2.6.32-303 and over: minimum amount of swapping without disabling it entirely.
> vm.swappiness = 10 This value is sometimes recommended to improve performance when sufficient memory exists in a system. [3]
> vm.swappiness = 60 The default value.
> vm.swappiness = 100 The kernel will swap aggressively.

The “htop” tool is a handy memory-monitoring alternative to “top”.

Also, you can use the good old “watch” command to show near-live memory usage (it auto-refreshes every 2 seconds).

watch -n 2 free -m

Script to auto-clear the memory/cache

As a habit, I set up a cronjob that checks whether free memory has fallen below 100MB and, if so, automatically clears the cache (freeing memory).

Script Contents: clearcache.sh

#!/bin/bash
# Script help inspired by https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
# Clear the filesystem cache when free memory falls below 100MB
read -r total used free <<< "$(free -m | awk '/^Mem:/ {print $2, $3, $4}')"
if [ "$free" -lt 100 ]; then
    sync; echo 3 > /proc/sys/vm/drop_caches
    echo "$(date '+%F %T') RAM LOW, cache cleared (Total: ${total} MB, Used: ${used} MB, Free: ${free} MB)"
else
    echo "$(date '+%F %T') RAM OK (Total: ${total} MB, Used: ${used} MB, Free: ${free} MB)"
fi

I set the cronjob to run every 15 minutes by adding this to my crontab.

SHELL=/bin/bash
*/15  *  *  *  *  root /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log

Sample log output

2018-06-10 01:13:22 RAM OK (Total: 1993 MB, Used: 981 MB, Free: 387 MB)
2018-06-10 01:15:01 RAM OK (Total: 1993 MB, Used: 974 MB, Free: 394 MB)
2018-06-10 01:20:01 RAM OK (Total: 1993 MB, Used: 955 MB, Free: 412 MB)
2018-06-10 01:25:01 RAM OK (Total: 1993 MB, Used: 1002 MB, Free: 363 MB)
2018-06-10 01:30:01 RAM OK (Total: 1993 MB, Used: 970 MB, Free: 394 MB)
2018-06-10 01:35:01 RAM OK (Total: 1993 MB, Used: 963 MB, Free: 400 MB)
2018-06-10 01:40:01 RAM OK (Total: 1993 MB, Used: 976 MB, Free: 387 MB)
2018-06-10 01:45:01 RAM OK (Total: 1993 MB, Used: 985 MB, Free: 377 MB)
2018-06-10 01:50:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 379 MB)
2018-06-10 01:55:01 RAM OK (Total: 1993 MB, Used: 979 MB, Free: 382 MB)
2018-06-10 02:00:01 RAM OK (Total: 1993 MB, Used: 980 MB, Free: 380 MB)
2018-06-10 02:05:01 RAM OK (Total: 1993 MB, Used: 971 MB, Free: 389 MB)
2018-06-10 02:10:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 376 MB)
2018-06-10 02:15:01 RAM OK (Total: 1993 MB, Used: 967 MB, Free: 392 MB)

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: There is a handy guide on viewing swap file usage here. I’m not using swap files, so it is only an aside.

After the system rebooted I checked if the swappiness setting was active.

sudo cat /proc/sys/vm/swappiness
1

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system

sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

Take note of your disk name (e.g. vda1)

I used tune2fs to enable writing data to the disk before writing to the journal. tune2fs is a great tool for setting file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may lose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.

tune2fs -o journal_data_writeback /dev/vda1

I edited my fstab to append the “writeback,noatime,nodiratime” flags to my volume so they apply after a reboot.

Edit FS Tab:

sudo nano /etc/fstab

I added “writeback,noatime,nodiratime” flags to my disk options.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options> <dump>  <pass>
# / was on /dev/vda1 during installation
#                <device>                 <dir>           <fs>    <options>                                             <dump>  <fsck>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /               ext4    errors=remount-ro,data=writeback,noatime,nodiratime   0       1

Updating Ubuntu Packages

Show updatable packages.

apt-get -s dist-upgrade | grep "^Inst"

Update Packages.

sudo apt-get update && sudo apt-get upgrade

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades

sudo apt-get install unattended-upgrades

Enable Unattended Upgrades.

sudo dpkg-reconfigure --priority=low unattended-upgrades

Now I configure what packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated, you may want to manually update these (and monitor updates).

I prefer not to auto-update critical system apps (I will do this myself).

Unattended-Upgrade::Package-Blacklist {
"nginx";
"nginx-common";
"nginx-core";
"php7.2";
"php7.2-fpm";
"mysql-server";
"mysql-server-5.7";
"mysql-server-core-5.7";
"libssl1.0.0";
"libssl1.1";
};

FYI: You can find installed packages by running this command:

apt list --installed

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end (the number is how many days to wait before updating) of each line.

> APT::Periodic::Update-Package-Lists “1”;
> APT::Periodic::Download-Upgradeable-Packages “1”;
> APT::Periodic::AutocleanInterval “7”;
> APT::Periodic::Unattended-Upgrade “1”;

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.

unattended-upgrade -d

Almost done.

I Rebooted

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and getting sub-2-second scores consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; this was always bad before.

Update: I found out the longer you set cache delays in Cloudflare the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google is pushing fast servers.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing, I never expected to get this high score.  I know Google like (and are pushing) sub-1-second scores.

My site is loading so well that it is time I restored some old features that were too slow on other servers.

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat)

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows creating democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the shortcode
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use the EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use the Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress emojis, removes the use of Google Fonts, allows you to define pre-configured domains, asyncs JavaScript files etc.
  • I use BJ Lazy Load to prevent all images in a post from loading on load (and only as the user scrolls down the page).
  • The GTmetrix for WordPress and Cloudflare plugins are for information only.
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago

Looks good to me. Read IO is a little lower than UpCloud’s Singapore data centre, but it’s still faster than Vultr. I can’t wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Answers to Questions to support

Q1) What’s the difference between backups and snapshots (a Twitter user said Snapshots were a thing)

A1) Backups and snapshots are the same things with our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06 / GB of the disk being captured. But capture the whole disk, not just what was used. So for a 50GB drive, we charge $0.06 * 50 = $3/month. Even if 1GB were only used.

  • Support confirmed that each backup is charged (so 5 times manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backup charges.
  • I guess a 25GB server will be $1.50 a month

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.

Q4) What happens if my balance hits $0?

A4) You will get notification of low account balance 2 weeks in advance based on your current daily spend. When your balance reaches zero, your servers will be shut down. But they will still be charged for. You can automatically top-up if you want to assign a payment type from your Control Panel. You deposit into your balance when you want. We use a prepaid model of payment, so you need to top up before using, not billing you after usage. We give you lots of chances to top-up.

Support Tips

  • One thing to note, when deleting servers (CPU, RAM) instances, you get the option to delete the storages separately via a pop-up window. Choose to delete permanently to delete the disk, to save credit. Any disk storage lying around even unattached to servers will be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

Check out my new guide on Nixstats for awesome monitoring

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user-defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

Disable dashicons.min.css (for unauthenticated WordPress users).

Find functions.php in the www root

sudo find . -print | grep functions.php

Edit functions.php

sudo nano ./wp-includes/functions.php

Add the following

// Remove dashicons in frontend for unauthenticated users
add_action( 'wp_enqueue_scripts', 'bs_dequeue_dashicons' );
function bs_dequeue_dashicons() {
    if ( ! is_user_logged_in() ) {
        wp_deregister_style( 'dashicons' );
    }
}

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my listening servers. Browsers only negotiate HTTP/2 over TLS, and adding http2 to a plain port-80 listen breaks ordinary HTTP clients, so it belongs on the 443 listeners only:

server {
        root /www;

        ...
        listen 80 default_server;
        listen [::]:80 default_server;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;
        ...

I tested a http2 push page by defining this in /etc/nginx/sites-available/default 

location = /http2/push_demo.html {
        http2_push /http2/pushed.css;
        http2_push /http2/pushedimage1.jpg;
        http2_push /http2/pushedimage2.jpg;
        http2_push /http2/pushedimage3.jpg;
}

Once I tested that push was working (demo here), I defined two files to push that were already being served from my server

location / {
        ...
        http2_push /wp-includes/js/jquery/jquery.js;
        http2_push /wp-content/themes/news-pro/images/favicon.ico;
        ...
}

I used the WordPress Plugin Autoptimize to remove Google font usage (this removed a number of files being loaded when my page loads).

I used the WordPress plugin WP-Optimize to remove comments and disable pingbacks and trackbacks.

WordPress wp-config.php tweaks

# Memory
define('WP_MEMORY_LIMIT', '1024M');
define('WP_MAX_MEMORY_LIMIT', '1024M');
set_time_limit(60);

# Security
define('FORCE_SSL_ADMIN', true);

# Disable Updates (I update manually)
define('WP_AUTO_UPDATE_CORE', false);
define('AUTOMATIC_UPDATER_DISABLED', true);

Add 2FA Authentication to server logins.

I recently checked out YubiCo YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress too.

Tweaks Todo

  • Compress placeholder BJ Lazy Load Image (plugin is broken)
  • Solve 2x Google Analytics tracker redirects (done, switched to Matomo)


Memory Monitoring and Cache Clearing Script

Here is the script I use to clear the file system caches when free RAM gets low (output is logged to /scripts/clearcache.log).

#!/bin/bash
# Grab memory stats (assumes the output of 'free -m': header line, then "Mem: total used free ...")
ram_use=$(free -m)
IFS=$'\n' read -rd '' -a ram_use_arr <<< "$ram_use"
ram_use="${ram_use_arr[1]}"              # the "Mem:" line
ram_use=$(echo "$ram_use" | tr -s " ")   # squeeze repeated spaces
IFS=' ' read -ra ram_use_arr <<< "$ram_use"
ram_total="${ram_use_arr[1]}"
ram_used="${ram_use_arr[2]}"
ram_free="${ram_use_arr[3]}"
d=$(date '+%Y-%m-%d %H:%M:%S')
if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then
    echo "Sorry ram_free is not an integer"
elif [ "$ram_free" -lt "100" ]; then
    echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..."
    sync; echo 1 > /proc/sys/vm/drop_caches
    sync; echo 2 > /proc/sys/vm/drop_caches
    # sync; echo 3 > /proc/sys/vm/drop_caches # Not advised in production
    # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
    exit 1
elif [ "$ram_free" -lt "256" ]; then
    echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
    exit 1
elif [ "$ram_free" -lt "512" ]; then
    echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
    exit 0
else
    # 512MB or more free - plenty of RAM
    echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
    exit 0
fi

I set the cron job to run every 15 minutes by adding this to my crontab.

 
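A minimal sketch of the crontab entry (assuming the script is saved as /scripts/clearcache.sh; the log path matches the one mentioned below):

*/15 * * * * /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log 2>&1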

Sample log output

 
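Each run appends one line in the format the script echoes, so the log looks something like this (illustrative values):

2018-08-28 09:15:01 RAM OK (Total: 1993 MB, Used: 985 MB, Free: 443 MB)
2018-08-28 09:30:01 RAM OK (Total: 1993 MB, Used: 1010 MB, Free: 418 MB)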

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no cache clears have been triggered 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: there is a handy guide on viewing swap file usage here. I’m not using a swap file, so this is only an aside.

After the system rebooted I checked if the swappiness setting was active.

 
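One way to check the current value (60 is the kernel default if you have not set it):

sysctl vm.swappiness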

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system

 
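For example, lsblk lists block devices and their file systems:

lsblk -f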

Take note of your disk name (e.g. vda1).

I used tune2fs to enable writing data to the disk before it is written to the journal; tune2fs is a great tool for setting ext file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may loose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.

 
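The command looks like this (using the example device vda1 from above; substitute your own disk, at your own risk):

sudo tune2fs -o journal_data_writeback /dev/vda1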

I edited my fstab to append the "writeback,noatime,nodiratime" flags for my volume so they persist across reboots.

Edit FS Tab:

 
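For example:

sudo nano /etc/fstab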

I added “writeback,noatime,nodiratime” flags to my disk options.

 
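An illustrative fstab entry (note that in fstab the journal flag is spelled data=writeback; the device and remaining options here are examples only):

/dev/vda1  /  ext4  errors=remount-ro,data=writeback,noatime,nodiratime  0  1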

Updating Ubuntu Packages

Show updatable packages.

 
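For example:

sudo apt list --upgradable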

Update Packages.

 
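For example:

sudo apt-get update && sudo apt-get upgrade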

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades

 
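On Ubuntu 18.04:

sudo apt-get install unattended-upgrades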

Enable Unattended Upgrades.

 
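The stock way to enable them:

sudo dpkg-reconfigure --priority=low unattended-upgrades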

Now I configure which packages should not be auto-updated.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add the packages that you don’t want automatically updated; you may want to update these manually (and monitor the updates).

I prefer not to auto-update critical system apps (I will do this myself).

 
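An illustrative blacklist (the package names are examples, not my actual list):

Unattended-Upgrade::Package-Blacklist {
    // hold these back for manual updates
    "nginx";
    "mysql-server";
};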

FYI: You can find installed packages by running this command:

 
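For example:

sudo apt list --installed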

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end of each line (the number is how many days to wait between runs).

> APT::Periodic::Update-Package-Lists "1";
> APT::Periodic::Download-Upgradeable-Packages "1";
> APT::Periodic::AutocleanInterval "7";
> APT::Periodic::Unattended-Upgrade "1";

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.

 
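You can trigger a verbose test run like this (the dry run makes no changes):

sudo unattended-upgrade --dry-run -d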

Almost done.

I Rebooted

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and consistently getting a sub-2-second score (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why “effective use of CDN” has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; it was always bad before.

Update: I found out that the longer you set cache expiry in Cloudflare, the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google is pushing fast servers.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected to get a score this high. I know Google likes (and is pushing for) sub-1-second load times.

My site is loading so well that it is time I restored some old features that were too slow on other servers:

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat)

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows to create democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the [display-posts] shortcode
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream


Results

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that automatically handles image optimisations and pre Cloudflare/first hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

I hope this guide helps someone.

Free Credit

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXterm here.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v2.2 Converting to Blocks

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed a few typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

Filed Under: CDN, Cloud, Cloudflare, Cost, CPanel, Digital Ocean, DNS, Domain, ExactDN, Firewall, Hosting, HTTPS, MySQL, MySQLGUI, NGINX, Performance, PHP, php72, Scalability, TLS, Ubuntu, UpCloud, Vultr, Wordpress Tagged With: draft, GTetrix, host, IOPS, Load Time, maxIOPS, MySQL, nginx, Page Speed Insights, Performance, php, SSD, ubuntu, UpCloud, vm

Updating NGINX to the development branch to get more frequent updates and features over the stable branch

November 20, 2018 by Simon

Updating NGINX to the development branch (on Ubuntu) to get more frequent updates and features over the stable branch

Aside

I have a number of guides on moving away from CPanel, Setting up VM’s on UpCloud, AWS, Vultr or Digital Ocean along with installing and managing WordPress from the command line. View all recent posts here https://fearby.com/all/

Now on with the post

Warning

Backup your Nginx and Server before making any changes. The Nginx development branch is quite stable but anything can happen. If your site is mission-critical then stay on the stable branch.

Nginx Branches

By default, you will most likely get the stable branch of Nginx when installing and updating Nginx.  I have been running the stable version for the last few years but was made aware of a DoS vulnerability in Nginx.

Here is a good write-up on development merges into the stable branch.

Nginx Updates

Widely-used #Nginx server releases versions 1.15.6 and 1.14.1 to patch two HTTP/2 implementation vulnerabilities that might cause excessive memory consumption (CVE-2018-16843) & CPU usage (CVE-2018-16844), allowing a remote attacker to perform #DoS attackhttps://t.co/1Z3JoghoBr pic.twitter.com/qQ3pOFD1Lk

— The Hacker News (@TheHackersNews) November 9, 2018

I recently became aware of a DoS bug affecting Nginx, and the recommendation was to update to the Nginx 1.15.6 development branch (or the 1.14.1 stable branch).

A few days ago no 1.14.1 update was available, but a 1.15.6 one was. Should I switch to the development branch to get updates earlier?

Reminder to update your #nginx installations to the 1.14.1 stable or the 1.15.6 mainline versions for critical security patches released this week. #NGINXPlus customers, see instructions for updating based on the patch released 10/30 https://t.co/KitsOWIJkb

— NGINX, Inc. (@nginx) November 8, 2018

Recent Nginx Changes

Here are the recent changes to Nginx: http://nginx.org/en/CHANGES

Changes with nginx 1.15.6                                        06 Nov 2018

    *) Security: when using HTTP/2 a client might cause excessive memory
       consumption (CVE-2018-16843) and CPU usage (CVE-2018-16844).

    *) Security: processing of a specially crafted mp4 file with the
       ngx_http_mp4_module might result in worker process memory disclosure
       (CVE-2018-16845).

    *) Feature: the "proxy_socket_keepalive", "fastcgi_socket_keepalive",
       "grpc_socket_keepalive", "memcached_socket_keepalive",
       "scgi_socket_keepalive", and "uwsgi_socket_keepalive" directives.

    *) Bugfix: if nginx was built with OpenSSL 1.1.0 and used with OpenSSL
       1.1.1, the TLS 1.3 protocol was always enabled.

    *) Bugfix: working with gRPC backends might result in excessive memory
       consumption.


Changes with nginx 1.15.5                                        02 Oct 2018

    *) Bugfix: a segmentation fault might occur in a worker process when
       using OpenSSL 1.1.0h or newer; the bug had appeared in 1.15.4.

    *) Bugfix: of minor potential bugs.


Changes with nginx 1.15.4                                        25 Sep 2018

    *) Feature: now the "ssl_early_data" directive can be used with OpenSSL.

    *) Bugfix: in the ngx_http_uwsgi_module.
       Thanks to Chris Caputo.

    *) Bugfix: connections with some gRPC backends might not be cached when
       using the "keepalive" directive.

    *) Bugfix: a socket leak might occur when using the "error_page"
       directive to redirect early request processing errors, notably errors
       with code 400.

    *) Bugfix: the "return" directive did not change the response code when
       returning errors if the request was redirected by the "error_page"
       directive.

    *) Bugfix: standard error pages and responses of the
       ngx_http_autoindex_module module used the "bgcolor" attribute, and
       might be displayed incorrectly when using custom color settings in
       browsers.
       Thanks to Nova DasSarma.

    *) Change: the logging level of the "no suitable key share" and "no
       suitable signature algorithm" SSL errors has been lowered from "crit"
       to "info".


Changes with nginx 1.15.3                                        28 Aug 2018

    *) Feature: now TLSv1.3 can be used with BoringSSL.

    *) Feature: the "ssl_early_data" directive, currently available with
       BoringSSL.

    *) Feature: the "keepalive_timeout" and "keepalive_requests" directives
       in the "upstream" block.

    *) Bugfix: the ngx_http_dav_module did not truncate destination file
       when copying a file over an existing one with the COPY method.

    *) Bugfix: the ngx_http_dav_module used zero access rights on the
       destination file and did not preserve file modification time when
       moving a file between different file systems with the MOVE method.

    *) Bugfix: the ngx_http_dav_module used default access rights when
       copying a file with the COPY method.

    *) Workaround: some clients might not work when using HTTP/2; the bug
       had appeared in 1.13.5.

    *) Bugfix: nginx could not be built with LibreSSL 2.8.0.


Changes with nginx 1.15.2                                        24 Jul 2018

    *) Feature: the $ssl_preread_protocol variable in the
       ngx_stream_ssl_preread_module.

    *) Feature: now when using the "reset_timedout_connection" directive
       nginx will reset connections being closed with the 444 code.

    *) Change: a logging level of the "http request", "https proxy request",
       "unsupported protocol", and "version too low" SSL errors has been
       lowered from "crit" to "info".

    *) Bugfix: DNS requests were not resent if initial sending of a request
       failed.

    *) Bugfix: the "reuseport" parameter of the "listen" directive was
       ignored if the number of worker processes was specified after the
       "listen" directive.

    *) Bugfix: when using OpenSSL 1.1.0 or newer it was not possible to
       switch off "ssl_prefer_server_ciphers" in a virtual server if it was
       switched on in the default server.

    *) Bugfix: SSL session reuse with upstream servers did not work with the
       TLS 1.3 protocol.


Changes with nginx 1.15.1                                        03 Jul 2018

    *) Feature: the "random" directive inside the "upstream" block.

    *) Feature: improved performance when using the "hash" and "ip_hash"
       directives with the "zone" directive.

    *) Feature: the "reuseport" parameter of the "listen" directive now uses
       SO_REUSEPORT_LB on FreeBSD 12.

    *) Bugfix: HTTP/2 server push did not work if SSL was terminated by a
       proxy server in front of nginx.

    *) Bugfix: the "tcp_nopush" directive was always used on backend
       connections.

    *) Bugfix: sending a disk-buffered request body to a gRPC backend might
       fail.


Changes with nginx 1.15.0                                        05 Jun 2018

    *) Change: the "ssl" directive is deprecated; the "ssl" parameter of the
       "listen" directive should be used instead.

    *) Change: now nginx detects missing SSL certificates during
       configuration testing when using the "ssl" parameter of the "listen"
       directive.

    *) Feature: now the stream module can handle multiple incoming UDP
       datagrams from a client within a single session.

    *) Bugfix: it was possible to specify an incorrect response code in the
       "proxy_cache_valid" directive.

    *) Bugfix: nginx could not be built by gcc 8.1.

    *) Bugfix: logging to syslog stopped on local IP address changes.

    *) Bugfix: nginx could not be built by clang with CUDA SDK installed;
       the bug had appeared in 1.13.8.

    *) Bugfix: "getsockopt(TCP_FASTOPEN) ... failed" messages might appear
       in logs during binary upgrade when using unix domain listen sockets
       on FreeBSD.

    *) Bugfix: nginx could not be built on Fedora 28 Linux.

    *) Bugfix: request processing rate might exceed configured rate when
       using the "limit_req" directive.

    *) Bugfix: in handling of client addresses when using unix domain listen
       sockets to work with datagrams on Linux.

    *) Bugfix: in memory allocation error handling.

Development branch changes are made every few weeks and stable branch changes are made less often.

Updating Nginx

Normally you update Nginx by running an update and upgrade

apt-get update && apt-get upgrade

Restart Nginx for good measure

/etc/init.d/nginx restart

Checking NGINX Version

nginx -v
nginx version: nginx/1.14.1

Changing your repository to the development branch

I changed to the development branch by running

sudo add-apt-repository ppa:nginx/development

Update and upgrade Nginx

apt-get update && apt-get upgrade

Restart Nginx for good measure

/etc/init.d/nginx restart

Checking NGINX Version

nginx -v
nginx version: nginx/1.15.6

Removing the stable Nginx repository

Run this command to remove the stable branch of Nginx

sudo add-apt-repository -r ppa:nginx/stable

Check to see if the development branch is listed

grep -r --include '*.list' '^deb ' /etc/apt/sources.list* |grep nginx
/etc/apt/sources.list.d/nginx-ubuntu-development-bionic.list:deb http://ppa.launchpad.net/nginx/development/ubuntu bionic main

Good luck and I hope this guide helps someone

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.0 Initial post

Filed Under: Linux, Ubuntu Tagged With: and, Branch, development, features, Frequent, get, more, nginx, over, stable, the, to, to the, updates, Updating

Set up Feature-Policy, Referrer-Policy and Content Security Policy headers in Nginx

July 17, 2018 by Simon

This is a quick post that shows how I set up the “Feature-Policy”, “Referrer-Policy” and “Content Security Policy” headers in Nginx to tighten security and privacy.

Aside

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Now on with the post.

Add a Feature Policy Header

Upon visiting https://securityheaders.com/ I found references to a Feature-Policy header (a W3C standard) that allows you to define which browser features your webpage can use, along with other headers.

Google mentions the Feature-Policy header here.

Browser features that we can enable or block with feature-policy headers.

  • geolocation
  • midi
  • notifications
  • push
  • sync-xhr
  • microphone
  • camera
  • magnetometer
  • gyroscope
  • speaker
  • vibrate
  • fullscreen
  • payment

Feature Policy Values

  • * = The feature is allowed in documents in top-level browsing contexts by default, and when allowed, is allowed by default to documents in nested browsing contexts.
  • self = The feature is allowed in documents in top-level browsing contexts by default, and when allowed, is allowed by default to same-origin domain documents in nested browsing contexts, but is disallowed by default in cross-origin documents in nested browsing contexts.
  • none = The feature is disallowed in documents in top-level browsing contexts by default and is also disallowed by default to documents in nested browsing contexts.

My Final Feature Policy Header

I added this header to Nginx

sudo nano /etc/nginx/sites-available/default

This essentially disables all browser features when visitors access my site

add_header Feature-Policy "geolocation 'none'; midi 'none'; notifications 'none'; push 'none'; sync-xhr 'none'; microphone 'none'; camera 'none'; magnetometer 'none'; gyroscope 'none'; speaker 'self'; vibrate 'none'; fullscreen 'self'; payment 'none';";

I reloaded the Nginx config and restarted Nginx

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Feature-Policy Results

I verified my feature-policy header with https://securityheaders.com/

Feature Policy score from https://securityheaders.com/?q=fearby.com&followRedirects=on

Nice, Feature-Policy is now enabled.

Now I need to enable the following headers

  • Content-Security-Policy (read more here)
  • Referrer-Policy (read more here)

Add a Referrer-Policy Header

I added this header configuration in Nginx to prevent referrers being leaked over insecure protocols.

add_header Referrer-Policy "no-referrer-when-downgrade";

Referrer-Policy Results

Again, I verified my referrer policy header with https://securityheaders.com/

Referrer Policy results from https://securityheaders.com/?q=fearby.com&followRedirects=on

Done, now I just need to set up a Content Security Policy.

Add a Content Security Policy header

I read my old guide on Beyond SSL with Content Security Policy, Public Key Pinning etc before setting up a Content Security policy again (I had disabled it a while ago). Setting a fully working CSP is very complex and if you don’t want to review CSP errors and modify the CSP over time this may not be for you.

Read more about Content Security Policy here: https://content-security-policy.com/

I added my old CSP to Nginx

> add_header Content-Security-Policy "default-src 'self'; frame-ancestors 'self'; script-src 'self' 'unsafe-inline' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; style-src 'self' 'unsafe-inline' https://fearby.com:* https://fearby-com.exactdn.com:* https://fonts.googleapis.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:*; img-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://secure.gravatar.com:* https://www.google-analytics.com:*; font-src 'self' data: https://fearby.com:* https://fearby-com.exactdn.com:* https://fonts.googleapis.com:* https://fonts.gstatic.com:* https://cdn.joinhoney.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:*; connect-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; media-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://secure.gravatar.com:* https://www.google-analytics.com:*; child-src 'self' https://player.vimeo.com https://fearby-com.exactdn.com:* https://www.youtube.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; form-action 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://fearby-com.exactdn.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:*; " always;

I then imported the CSP into https://report-uri.com/home/generate and enabled more recent CSP values.

add_header Content-Security-Policy "default-src 'self' ; script-src * 'self' data: 'unsafe-inline' 'unsafe-eval' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://www.google-analytics.com:* https://pagead2.googlesyndication.com:* https://www.youtube.com:* https://adservice.google.com.au:* https://s.ytimg.com:* about; style-src 'self' data: 'unsafe-inline' https://fearby.com:* https://fearby-com.exactdn.com:* https://fonts.googleapis.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:*; img-src 'self' data: https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://secure.gravatar.com:* https://www.google-analytics.com:* https://a.impactradius-go.com:* https://www.paypalobjects.com:* https://namecheap.pxf.io:* https://www.paypalobjects.com:* https://stats.g.doubleclick.net:* https://*.doubleclick.net:* https://stats.g.doubleclick.net:* https://www.ojrq.net:* https://ak1s.abmr.net:* https://*.abmr.net:*; font-src 'self' data: https://fearby.com:* https://fearby-com.exactdn.com:* https://fonts.googleapis.com:* https://fonts.gstatic.com:* https://cdn.joinhoney.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:* https://googleads.g.doubleclick.net:*; connect-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; media-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://secure.gravatar.com:* https://www.google-analytics.com:*; object-src 'self' ; child-src 'self' https://player.vimeo.com https://fearby-com.exactdn.com:* https://www.youtube.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; frame-src 'self' https://www.youtube.com:* https://googleads.g.doubleclick.net:* https://*doubleclick.net; worker-src 'self' ; frame-ancestors 'self' ; form-action 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://fearby-com.exactdn.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:* https://www.google-analytics.com:*; upgrade-insecure-requests; block-all-mixed-content; disown-opener; reflected-xss block; base-uri https://fearby.com:*; manifest-src 'self' 'self' 'self'; referrer no-referrer-when-downgrade; report-uri https://fearby.report-uri.com/r/d/csp/enforce;" always;

I restarted Nginx

nginx -t
nginx -s reload
/etc/init.d/nginx restart

I loaded the Google Developer Console to see any CSP errors when loading my site.

CSP Errors

I enabled reporting of CSP errors to https://fearby.report-uri.com/r/d/csp/enforce

FYI: Content Security Policy OWASP Cheat Sheet.

You can validate CSP with https://cspvalidator.org

Now I won’t have to keep checking my Chrome Developer Console; visitors to my site will report errors automatically, and I can see their CSP errors at https://report-uri.com/

report-uri.com Report

Content Security Policy Results

I reviewed the reported errors and made some more CSP changes. I will continue to lock down my CSP and make more changes before making this CSP policy live.

I verified my header with https://securityheaders.com/

Security Headers report from https://securityheaders.com/?q=https%3A%2F%2Ffearby.com&followRedirects=on

Testing Policies

TIP: Use the header name of “Content-Security-Policy-Report-Only” instead of “Content-Security-Policy” to report errors before making CSP changes live.
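A minimal sketch of a report-only header in Nginx (the policy shown is a placeholder, and the reportOnly endpoint name is an assumption based on report-uri.com’s conventions):

add_header Content-Security-Policy-Report-Only "default-src 'self'; report-uri https://fearby.report-uri.com/r/d/csp/reportOnly;" always;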

I did not want to go live too soon; I had issues with some WordPress plugins not working in the WordPress admin screens.

Reviewing Errors

Do check your reported errors and update your CSP often; I had a post with a load of Twitter-related errors.

Do check report-uri errors.

I hope this guide helps someone.

Please consider using my referral code and get $25 UpCloud VM credit if you need to create a server online.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

V1.3 https://cspvalidator.org

v1.2 OWASP Cheat Sheet.

v1.1 added info on WordPress errors.

v1.0 Initial Post

Filed Under: Audit, Cloud, Content Security Policy, Development, Feature-Policy, HTTPS, NGINX, Referrer-Policy, Security, Ubuntu Tagged With: Content Security Policy, CSP, Feature-Policy, nginx, Referrer-Policy, security

Adding two sub domains (one pointing to a new UpCloud VM and the other pointing to an NGINX subsite) on Ubuntu 18.04

June 27, 2018 by Simon

Here is how I added two subdomains (one pointing to a new UpCloud VM and the other pointing to an NGINX subsite) on Ubuntu 18.04

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

UpCloud performance is great.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Goal(s)

Setup 2x subdomains on https://fearby.com

– Sub Domain #1: https://test.fearby.com (pointing to a dedicated UpCloud VM in Singapore for testing).

– Sub Domain #2: https://audit.fearby.com (pointing to a sub-website on the NGINX/VM that runs https://fearby.com )

Let’s set up the first Sub Domain (dedicated VM) and SSL

Backup

Do back up your server first.

VM

I created a second server ($5/month or about $0.007/hour; 1,024MB Memory, 25GB Disk, 1024GB/month Data Transfer) at UpCloud. If you don’t already have an account at UpCloud use this link to signup and get $25 free credit ( https://www.upcloud.com/register/?promo=D84793 ). Read my blog post on why UpCloud is awesome and how I moved my domain to UpCloud.

Once I spun up a second server I obtained the IPv4 and IPv6 IP addresses of the new “test” VM from the UpCloud dashboard.

IPV4 IP: 94.237.65.54
IPV6 IP: 2a04:3543:1000:2310:24b7:7cff:fe92:468c

DNS

These DNS records were already in place with my DNS provider (Cloudflare).

A fearby.com 209.50.48.88
AAAA fearby.com 2605:7380:1000:1310:24b7:7cff:fe92:0d64

I added these DNS records for the subdomains.

I added a new A NAME record for the new shared NGINX subdomain (for https://audit.fearby.com), this subdomain will be a sub-website that is running off the same server as https://fearby.com

A audit 209.50.48.88
AAAA audit 2605:7380:1000:1310:24b7:7cff:fe92:0d64

I added another set of records for the new dedicated VM  subdomain (for https://test.fearby.com)

A test 94.237.65.54
AAAA test 2a04:3543:1000:2310:24b7:7cff:fe92:468c

I waited for DNS to replicate around the globe by watching https://www.whatsmydns.net/

Setup a Firewall

On the new dedicated https://test.fearby.com VM, I installed the ufw firewall.

sudo apt-get install ufw

I configured the firewall to allow minimum ports (and added whitelisted IP for port 22 and added UpCloud DNS servers). I will lock this down some more later.

TIP: If your ISP does not offer a dedicated IP try a VPN. I use https://cyberghostvpn.com on OSX and Android.

Firewall rules.

sudo ufw status numbered

     To                         Action      From
     --                         ------      ----
[ 1] 22                         ALLOW IN    x.x.x.x
[ 2] 80                         ALLOW IN    Anywhere
[ 3] 443                        ALLOW IN    Anywhere
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 25                         DENY IN     Anywhere
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 53                         ALLOW IN    2a04:3540:53::1
[10] 53                         ALLOW IN    2a04:3544:53::1
[11] 22                         ALLOW IN    x.x.x.x.x.x.x.x.x
[12] 25 (v6)                    DENY IN     Anywhere (v6)

I enabled the firewall.

sudo ufw enable

Install NGINX (on https://test.fearby.com)

On the new dedicated https://test.fearby.com VM I…

Created a new www root

mkdir /www-root

Set permissions

sudo chown -R www-data:www-data /www-root

Installed NGINX

sudo apt-get update
sudo apt-get install nginx

I created a placeholder webpage

sudo nano /www-root/index.html

Configured the root value in /etc/nginx/sites-available/default

Created a symbolic link of the nginx config

sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default

Lets Encrypt SSL

I have previously set up Let’s Encrypt on Ubuntu 16.04 but not 18.04. Certbot had info on setting up Let’s Encrypt for 14.x, 16.x and 17.x but not 18.x

Full credit for the SSL steps goes to @Linuxize ( tips on setting up Lets Encrypt on Ubuntu 18.04 ). Check out https://linuxize.com/

I installed Lets Encrypt certbot

sudo apt update
sudo apt install certbot

I created a new Diffie–Hellman key

mkdir -p /etc/ssl/certs/
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Map requests to http://test.fearby.com/.well-known/acme-challenge to /var/lib/letsencrypt/.well-known ( Read the linuxize post for detailed steps ).

mkdir -p /var/lib/letsencrypt/.well-known
chgrp www-data /var/lib/letsencrypt
chmod g+s /var/lib/letsencrypt

Create a /etc/nginx/snippets/letsencrypt.conf on http://test.fearby.com and enforce the redirect.

location ^~ /.well-known/acme-challenge/ {
  allow all;
  root /var/lib/letsencrypt/;
  default_type "text/plain";
  try_files $uri =404;
}

Create a /etc/nginx/snippets/ssl.conf file on http://test.fearby.com

ssl_dhparam /etc/ssl/certs/dhparam.pem;

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 30s;

add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;

Let’s get a certificate

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d test.fearby.com

Certificates have been created 🙂

ls -al /etc/letsencrypt/live/test.fearby.com/
total 12
drwxr-xr-x 2 user user 4096 Jun 26 11:30 .
drwx------ 3 user user 4096 Jun 26 11:30 ..
-rw-r--r-- 1 user user  543 Jun 26 11:30 README
lrwxrwxrwx 1 user user   39 Jun 26 11:30 cert.pem -> ../../archive/test.fearby.com/cert1.pem
lrwxrwxrwx 1 user user   40 Jun 26 11:30 chain.pem -> ../../archive/test.fearby.com/chain1.pem
lrwxrwxrwx 1 user user   44 Jun 26 11:30 fullchain.pem -> ../../archive/test.fearby.com/fullchain1.pem
lrwxrwxrwx 1 user user   42 Jun 26 11:30 privkey.pem -> ../../archive/test.fearby.com/privkey1.pem

Now lets edit “/etc/nginx/sites-available/default” on https://test.fearby.com VM and add the cert paths.

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        ssl_certificate /etc/letsencrypt/live/test.fearby.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/test.fearby.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/test.fearby.com/chain.pem;

        include snippets/ssl.conf;

        #ssl_stapling on; # Requires nginx >= 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        root /www-root/;

        include snippets/letsencrypt.conf;

        index index.html;

        server_name test.fearby.com;

        location / {
                try_files $uri $uri/ =404;
        }
}

Reload NGINX

sudo systemctl reload nginx

or

sudo nginx -t
sudo nginx -s reload
sudo systemctl reload nginx

Now let’s setup the second subdomain (subsite off https://fearby.com) and SSL

VM

I already have NGINX running on https://fearby.com, so I will set up the second site on that existing server.

DNS

We have already set up a DNS record for https://audit.fearby.com (above)

Firewall

Already configured at https://fearby.com

SSL

Because I had an existing Comodo certificate on https://fearby.com I am going to repeat the steps above to generate a new certificate but save the NGINX config to /etc/nginx/sites-available/audit.fearby.com (this activates the second site)

TIP: Follow the Linuxize guide here (for creating ssl.conf, letsencrypt.conf etc config files), Do a backup and restore if need be.

I created a new Diffie–Hellman key

mkdir -p /etc/ssl/certs/
sudo openssl dhparam -out /etc/ssl/certs/auditdhparam.pem 2048

Let’s get a certificate

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d audit.fearby.com

Configure NGINX

Map requests to http://audit.fearby.com/.well-known/acme-challenge to /var/lib/letsencrypt/.well-known ( Read the linuxize post for detailed steps ).

mkdir -p /var/lib/letsencrypt/.well-known
chgrp www-data /var/lib/letsencrypt
chmod g+s /var/lib/letsencrypt

I created a new NGINX site ( /etc/nginx/sites-available/audit.fearby.com )

#proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;#
server {
        root /www-audit-root;

        # Listen Ports
        listen 80;
        listen [::]:80;
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name audit.fearby.com;

        include snippets/letsencrypt.conf;

        location / {
                try_files $uri $uri/ =404;
        }

        ssl_certificate /etc/letsencrypt/live/audit.fearby.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/audit.fearby.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/audit.fearby.com/chain.pem;

        ssl_dhparam /etc/ssl/certs/auditdhparam.pem;

        ssl_session_timeout 1d;
        #ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';

        ssl_prefer_server_ciphers on;

        ssl_stapling on;
        ssl_stapling_verify on;

        #resolver 8.8.8.8 8.8.4.4 valid=300s;
        #resolver_timeout 30s;

        add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;

        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }
}

I created a symbolic link of the config file

sudo ln -s /etc/nginx/sites-available/audit.fearby.com /etc/nginx/sites-enabled/audit.fearby.com

Reload NGINX

sudo systemctl reload nginx

or

sudo nginx -t
sudo nginx -s reload
sudo systemctl reload nginx

How to test the certificate renewal

sudo certbot renew --dry-run

Automate the renewal in crontab (every 12 hours)

I set this crontab entry up on https://fearby.com and https://test.fearby.com

crontab -e
# note: a per-user crontab has no user column (the "root" field only belongs in /etc/crontab)
0 */12 * * * test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "systemctl reload nginx"

Conclusion

Yes, I have 2 subdomains (1x dedicated VM and the other a sub-website off an existing server) with SSL certificates.

Ping Results

ping -c 4 fearby.com
PING fearby.com (209.50.48.88): 56 data bytes
64 bytes from 209.50.48.88: icmp_seq=0 ttl=44 time=220.000 ms
64 bytes from 209.50.48.88: icmp_seq=1 ttl=44 time=290.602 ms
64 bytes from 209.50.48.88: icmp_seq=2 ttl=44 time=311.938 ms
64 bytes from 209.50.48.88: icmp_seq=3 ttl=44 time=330.841 ms

--- fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 220.000/288.345/330.841/41.948 ms

ping -c 4 test.fearby.com
PING test.fearby.com (94.237.65.54): 56 data bytes
64 bytes from 94.237.65.54: icmp_seq=0 ttl=44 time=333.590 ms
64 bytes from 94.237.65.54: icmp_seq=1 ttl=44 time=252.433 ms
64 bytes from 94.237.65.54: icmp_seq=2 ttl=44 time=271.153 ms
64 bytes from 94.237.65.54: icmp_seq=3 ttl=44 time=292.685 ms

--- test.fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 252.433/287.465/333.590/30.200 ms

ping -c 4 audit.fearby.com
PING audit.fearby.com (209.50.48.88): 56 data bytes
64 bytes from 209.50.48.88: icmp_seq=0 ttl=44 time=281.662 ms
64 bytes from 209.50.48.88: icmp_seq=1 ttl=44 time=307.676 ms
64 bytes from 209.50.48.88: icmp_seq=2 ttl=44 time=227.985 ms
64 bytes from 209.50.48.88: icmp_seq=3 ttl=44 time=215.566 ms

--- audit.fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 215.566/258.222/307.676/37.845 ms

Webpage Results

Screenshot showing the main site and 2 subdomains in a web browser

Troubleshooting

If you are having trouble generating the initial certificate, check that you have not blocked port 80 and don’t have “Strict-Transport-Security” headers enabled.

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d yoursubdomain.domain.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for yoursubdomain.domain.com
Using the webroot path /var/lib/letsencrypt for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. yoursubdomain.domain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficzLlmg_w6Tc: q%!(EXTRA string=<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>)

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: yoursubdomain.domain.com
   Type:   unauthorized
   Detail: Invalid response from
   http://yoursubdomain.domain.com/.well-known/acme-challenge/_QA3jblEydx5mE8I8OdRsd2EdHIj4R-przLlmg_w6Tc:
   q%!(EXTRA string=<html>
   <head><title>404 Not Found</title></head>
   <body bgcolor="white">
   <center><h1>404 Not Found</h1></center>
   <hr><center>)

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A record(s) for that domain
   contain(s) the right IP address.

I re-ran the certbot command but pointed it to the real /www-root (not /var/lib/letsencrypt/).

Create the .well-known challenge directories in the web root, then request the certificate again:

mkdir /www-root/.well-known/
mkdir /www-root/.well-known/acme-challenge/
sudo certbot certonly --agree-tos --email [email protected] --webroot -w /www-root -d yoursubdomain.domain.com

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.1 Troubleshooting

v1.0 Initial Post

Filed Under: Linux, NGINX, ssl, Subdomain, Ubuntu, UpCloud, VM, Website Tagged With: a, Adding, an, and, domains, new, nginx, on, one, other, pointing, sub, subsite, the, to, two, Ubuntu 18.04, UpCloud, vm

Enabling TLS 1.3 SSL on a NGINX Website (Ubuntu 16.04 server) that is using Cloudflare

April 5, 2018 by Simon

This guide will show you how to enable the latest Transport Layer Security (TLS) 1.3 protocol (the successor to Secure Sockets Layer (SSL)) with NGINX and OpenSSL for better website security on an Ubuntu 16.04 server

I have a number of guides on moving hosting away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, along with installing and managing WordPress from the command line. Making sure your server is up to date and running the latest SSL software is important. I have updated OpenSSL before and blogged about it here.  Do back up your server before changing settings, and if you use Cloudflare (set it up now if you don’t) enable Development Mode (and disable caching until the changes are made).

For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

TLS 1.3 is the latest SSL security protocol that can be used between clients and servers to encrypt connections on the web.

Browser support for TLS 1.3 is only around 60% according to https://caniuse.com/#search=TLS%201.3

TLS 1.3

Read why TLS 1.3 is important and news on TLS 1.3 can be found here: https://www.openssl.org/blog/blog/2018/02/08/tlsv1.3/

The Good and Bad

Don’t be like this commercial site with very poor security (tested with SSL Labs and ASafaWeb)

Bad SSL

Here is what the top 1 million sites do

Here it is!! Alexa Top 1 Million Analysis – February 2018 https://t.co/TjBHNX7zTi

— Scott Helme (@Scott_Helme) February 26, 2018

Installing Open SSL on Ubuntu

Connect to your Ubuntu 16.04 server via SSH (I connected to my Vultr server)

Check which version of OpenSSL you have. My OpenSSL is out of date.

# openssl version
OpenSSL 1.1.0g  2 Nov 2017

Tip: What Ciphers does your Open SSL Support?

openssl ciphers -s -v
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(256) Mac=AEAD
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(256) Mac=AEAD
DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(256) Mac=AEAD
ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH     Au=RSA  Enc=CHACHA20/POLY1305(256) Mac=AEAD
DHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=DH       Au=RSA  Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(128) Mac=AEAD
ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(128) Mac=AEAD
DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(128) Mac=AEAD
ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA384
ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA384
DHE-RSA-AES256-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA256
ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA256
ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA256
DHE-RSA-AES128-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA256
ECDHE-ECDSA-AES256-SHA  TLSv1 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA1
ECDHE-RSA-AES256-SHA    TLSv1 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-RSA-AES256-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA1
ECDHE-ECDSA-AES128-SHA  TLSv1 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA1
ECDHE-RSA-AES128-SHA    TLSv1 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA1
DHE-RSA-AES128-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA1
AES256-GCM-SHA384       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(256) Mac=AEAD
AES128-GCM-SHA256       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(128) Mac=AEAD
AES256-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA256
AES128-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA256
AES256-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA1
AES128-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA1

Time to update Open SSL

OpenSSL 1.1.1 beta is available and supports TLS 1.3, but it is in BETA form. OpenSSL code is available here.

I did the following to download and build the latest version of OpenSSL.

mkdir /openssltemp
cd /openssltemp
sudo git clone git://git.openssl.org/openssl.git
cd openssl/
./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl -Wl,-rpath,/usr/local/ssl/lib
make
sudo make install
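Optionally, you can run OpenSSL's own test suite before installing; it takes a while, but it catches broken builds (a standard target in the OpenSSL source tree):

make test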

I tried to check the OpenSSL version but got an error.

openssl version 
openssl: /usr/lib/x86_64-linux-gnu/libssl.so.1.1: version `OPENSSL_1_1_1' not found (required by openssl)
openssl: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1: version `OPENSSL_1_1_1' not found (required by openssl)

A quick GitHub ticket revealed I needed to set a path variable.

export LD_LIBRARY_PATH=/usr/local/lib
echo "export LD_LIBRARY_PATH=/usr/local/bin/openssl" >> ~/.bashrc

OpenSSL now reports its version.

openssl version
OpenSSL 1.1.1-pre3 (beta) 20 Mar 2018
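To confirm the new build actually knows about the TLS 1.3 cipher suites, you can list the ciphers and filter for TLSv1.3 (assuming the 1.1.1 build is the openssl now on your PATH):

openssl ciphers -v | grep TLSv1.3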

What version of NGINX do you have? (NGINX 1.13+ supports TLS 1.3; read more here.)

# nginx -v
nginx version: nginx/1.13.9

Backup your NGINX

Do back up your server files and take a snapshot if need be. I am not responsible for a broken server.

sudo cp -R /etc/nginx/ /nginx-backup-26thMar-2018

Edit NGINX Configuration

Update NGINX configuration: /etc/nginx/sites-available/default

ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ecdh_curve secp384r1;

Tip: Review other NGINX hardening settings here. Also consider removing the older TLSv1 and TLSv1.1 entries from ssl_protocols; TLSv1.0 in particular should go.
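For example, a more modern protocols line would look like this (a suggestion only; use it if you don't need to support legacy clients):

ssl_protocols TLSv1.2 TLSv1.3;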

I tested my NGINX config, reloaded it and restarted NGINX.

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Check the status of NGINX

# /etc/init.d/nginx status

[ ok ] Restarting nginx (via systemctl): nginx.service.
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) 
     Docs: man:nginx(8)
  Process: 15154 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
  Process: 15162 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
  Process: 15159 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
 Main PID: 15166 (nginx)
    Tasks: 4
   Memory: 2.3M
      CPU: 27ms
   CGroup: /system.slice/nginx.service
           ├─15166 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
           ├─15170 nginx: worker process
           ├─15171 nginx: cache manager process
           └─15172 nginx: cache loader process

If you have configured Cloudflare, log in and enable TLS 1.3 support.

Cloudflare TLS Settings

Enable TLS 1.3 in Chrome by visiting chrome://flags/#tls13-variant. This should be automatic in later versions of Chrome and other browsers.

Enable TLS in Chrome

Verify TLS

I used the developer tools in Chrome to confirm the page was verified in TLS 1.3.

Verify TLS
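You can also verify from the command line with the freshly built openssl (yourdomain.com is a placeholder; note that if the site is behind Cloudflare, this tests Cloudflare's edge rather than NGINX directly):

openssl s_client -connect yourdomain.com:443 -tls1_3 < /dev/null 2>/dev/null | grep -E 'New|Protocol|Cipher'

A TLS 1.3 handshake will show TLSv1.3 and one of the TLS_AES_* / TLS_CHACHA20_* cipher suites.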

Updated to 1.1.1-pre6-dev

mkdir /temp
cd /temp
sudo git clone https://github.com/openssl/openssl.git
cd openssl/
./config --prefix=/usr/local --openssldir=/usr/local -Wl,-rpath,/usr/local
make
sudo make install
openssl
OpenSSL> version
OpenSSL 1.1.1-pre6-dev  xx XXX xxxx
OpenSSL> exit

Don’t forget to test your SSL strength with https://dev.ssllabs.com/ssltest/

SSL Test 2018

I hope this guide helps someone.

Ask a question or recommend an article


Revision History

v1.4 fixed typo

v1.3 added bad ssl cert.

v1.2 ssl test

v1.1 updated to 1.1.1-pre6-dev

v1.0 Initial post

Filed Under: ssl Tagged With: Cloudflare, nginx, server, ssl, TLS 1.3, Ubuntu 16.04, website

How to setup a twitter feed API endpoint in NodeJS with NGINX, Ruby and T etc

September 17, 2017 by Simon

This is how I set up a Twitter feed API endpoint in NodeJS with NGINX, Ruby and the t Twitter command-line utility. This can be used to integrate SEO, automate Twitter post analytics or automate posts to Twitter.

This guide assumes you have an Ubuntu server running NodeJS behind NGINX, with Ruby, Rails and the t Twitter command-line utility set up. Also, you will need to have an API set up and working.

Ensure your API is configured in NGINX /etc/nginx/sites-available/default

server {
...
	location /api/appname/v1 {
		#rewrite ^/api(.*) /$1 break;
		proxy_set_header X-Real-IP $remote_addr;	
		# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-For $remote_addr;
		proxy_http_version  1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection $connection_upgrade;
		proxy_set_header Proxy "";
		proxy_set_header Host $http_host;
		root /code/nodejs/appname;
		proxy_pass http://appnameapi;
	}
	...
}
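One gotcha: $connection_upgrade is not a built-in NGINX variable; it normally comes from a map block in the http {} context (a standard NGINX websocket-proxy idiom; add it only if your config does not already define it):

map $http_upgrade $connection_upgrade {
	default upgrade;
	''      close;
}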

I set up a placeholder GET route in my node process that will be used to communicate with the Twitter command line utility.

app.get('/api/appname/v1/feed/twitter/',function(req,res){
	var data = { "Feed":"" };
	data["Feed"] = "Working v0.1";
	res.json(data);
});
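You can hit the placeholder from the shell through NGINX (the domain is a placeholder; adjust for your server and route):

curl -i http://yourdomain.com/api/appname/v1/feed/twitter/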

I started 1 instance of the API on a server with pm2 linked to https://keymetrics.io/.

sudo pm2 start /nodejs/app01/app.js -i 1 --name "appname"
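Optionally, have PM2 bring the app back after a reboot (standard PM2 housekeeping; pm2 startup prints a command you then run once):

sudo pm2 save       # snapshot the current process list
sudo pm2 startup    # generate a boot script that resurrects the saved list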

View API Status with PM2

sudo pm2 status
● Agent Online | Dashboard Access: https://app.keymetrics.io/#/r/######### | Server name:appname-######
┌────────────┬────┬─────────┬──────┬────────┬─────────┬────────┬─────┬───────────┬──────┬──────────┐
│ App name   │ id │ mode    │ pid  │ status │ restart │ uptime │ cpu │ mem       │ user │ watching │
├────────────┼────┼─────────┼──────┼────────┼─────────┼────────┼─────┼───────────┼──────┼──────────┤
│appname │ 0  │ cluster │ 5480 │ online │ 0       │ 87s    │ 0%  │ 35.1 MB   │ apiuser │ disabled │

You can restart the API manually with PM2 by typing

sudo pm2 restart "appname"

I like to use https://paw.cloud/ software to test APIs over, say, Postman. I can easily see the test placeholder API is working.

Node API

The browser works for now, but as soon as I add authentication the browser will fail. Paw FTW.

API Test

A simple way to authenticate the API is for the caller to pass a known token when requesting the page via an HTTP POST request.

app.post('/api/appname/v1/feed/twitter/',function(req,res){
	var wwxapi = req.headers['x-access-token'];
	var data = { "Feed":"" };
	var errData = "";
	if (wwxapi == undefined) {
		// no token supplied at all
		console.log("499 token required");
		errData = {"Status": "499", "Error": "valid token required"};
		res.statusCode = 499;
		res.send(errData);
	} else {
		var xpikeypass = false;
		// Check the API Key against known API Keys
		if (wwxapi == "appname-##############################") { xpikeypass = true;} // Debug API Key
		if (xpikeypass == true) {
			// app request validated

			data["Feed"] = "Twitter Feed API Working v0.2";
			res.json(data);

		} else {
			// a token was supplied but it is not recognised
			console.log("409 invalid token");
			errData = {"Status": "409", "Error": "valid token required"};
			res.statusCode = 409;
			res.send(errData);
		}
	}
});

Don’t forget to restart the API after changes

sudo pm2 restart "appname"

Now we can change the API endpoint to a POST (instead of GET), paste in the code above and test it. If we pass an invalid “x-access-token” value the API will return error 409 as designed.
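From the shell, the same checks look like this with curl (the domain is a placeholder, and the token is the debug key from the code above):

# no token at all -> HTTP 499
curl -i -X POST http://yourdomain.com/api/appname/v1/feed/twitter/

# wrong token -> HTTP 409
curl -i -X POST -H "x-access-token: wrong-token" http://yourdomain.com/api/appname/v1/feed/twitter/

# the debug key -> HTTP 200 and a JSON body
curl -i -X POST -H "x-access-token: appname-##############################" http://yourdomain.com/api/appname/v1/feed/twitter/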

API Auth

Passing the right “x-access-token” will return data (200 OK).

API Working

Now we can link the Twitter command line utility with the NodeJS API.

app.post('/api/appname/v1/feed/twitter/',function(req,res){
	var wwxapi = req.headers['x-access-token'];
	var data = { "Feed":"" };
	var errData = "";
	if (wwxapi == undefined) {
		// no token supplied at all
		console.log("499 token required");
		errData = {"Status": "499", "Error": "valid token required"};
		res.statusCode = 499;
		res.send(errData);
	} else {
		var xpikeypass = false;
		// Check the API Key against known API Keys
		if (wwxapi == "appname-##############################") { xpikeypass = true;} // Debug API Key
		if (xpikeypass == true) {
			// app request validated: shell out to the t utility and return its CSV output

			var result = require('child_process').execSync('/theuserwithrubyinstalled/.rbenv/shims/t search all "lang:en nodejs" --csv').toString();
			data["Feed"] = result;

			res.json(data);

		} else {
			// a token was supplied but it is not recognised
			console.log("409 invalid token");
			errData = {"Status": "409", "Error": "valid token required"};
			res.statusCode = 409;
			res.send(errData);
		}
	}
});

Now we can test the NodeJS API and have it spawn a subprocess and hit Twitter. In this case, we are finding the latest posts on Twitter with NodeJS in them.

Works a treat 🙂

Up next 

  • Schedule getting data from the Twitter command line utility in crontab and save it to Redis (see the sketch below)
  • Process the CSV and save new Twitter changes to Redis
  • Query Redis and send the returned data back via the API.
  • Change the API to read from Redis in NodeJS.
  • Secure it further.
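As promised in the first item above, here is a minimal sketch of the crontab half (the schedule, the Redis key name and the user path are all assumptions):

# crontab -e: fetch matching tweets every 15 minutes and cache the CSV in Redis
*/15 * * * * /theuserwithrubyinstalled/.rbenv/shims/t search all "lang:en nodejs" --csv | redis-cli -x SET twitter:feed:nodejs

redis-cli -x takes the final argument from stdin, so the whole CSV lands in one Redis string key that the API can later read instead of shelling out on every request.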

Donate and make this blog better

Ask a question or recommend an article

Revision History

v1.2 Works

Filed Under: NGINX, NodeJS, Ruby, Twitter Tagged With: nginx, Node, query, ruby, T, twitter
