
HomePi – Raspberry PI powered touch screen showing information from house-wide sensors

March 14, 2022 by Simon

This post is a work in progress (14/3/2022, v0.9.63 – PCBs v0.2 designed and ordered).

Summary

After watching this video from Jeff Geerling (demonstrating how to build an Air Quality Sensor) I decided to make two. But why not build something bigger?

I want to make a Raspberry Pi server with a touch screen to receive data from a dozen or more WeMos sensors that I will build.

The Plan

Below is a rough plan of what I am building

In a nutshell, it will be 20x WeMos sensors recording data around the house.

Picture of 20x WeMos sensors, a weather station and CO2 sensors talking to an API that saves to MySQL, with MySQL then being read by a webpage and touch screen panel

I ordered all the parts from Amazon, BangGood, AliExpress, eBay, Core Electronics and Kogan.

Fresh Bullseye Install (Buster upgrade failed)

On 21/11/2021 I tried to manually update Buster to Bullseye (without taking a backup first, which was a bad idea). I followed this guide to reinstall Raspbian from scratch (this time with Bullseye).

Storage Type

Before I begin I need to decide on what storage media to use on the Raspberry Pi. I hate how unreliable and slow MicroSD cards are. I tried using an old 128GB SATA SSD, a 1TB magnetic hard drive, a SATA M.2 SSD and an NVMe M.2 in a USB caddy.

I decided to use a spare 250GB SATA-based M.2 solid state drive from my son’s PC in a Geekworm X862 SATA M.2 expansion board.

With this board I can bolt the M.2 solid state drive into an expansion board under the Pi and power it from the Raspberry Pi USB port.

Nice and tidy

I zip-tied a fan to the side of the boards to add a little extra airflow over the solid state drive

32Bit, 64Bit, Linux or Windows

Before I begin I set up Raspbian on an empty Micro SD card (just to boot it up and flash the firmware to the latest version). This is very easy and documented elsewhere. I needed the latest firmware to ensure booting from a USB drive (not the Micro SD card) was working.

I ran rpi-update and flashed the latest firmware onto my Raspberry Pi. Good write-up here.

When my Raspberry Pi had the latest firmware I used the Raspberry Pi Imager to install the 32 Bit Raspberry Pi OS.

I do have an 8GB Raspberry Pi 4 B, and 64-bit operating systems do exist, but I stuck with 32-bit for compatibility.

Ubuntu 64bit for Raspberry Pi Links

  • Install Ubuntu on a Raspberry Pi | Ubuntu
    • Server Setup Links
      • How to install Ubuntu Server on your Raspberry Pi | Ubuntu
    • Desktop Setup Links
      • How to install Ubuntu Desktop on Raspberry Pi 4 | Ubuntu

Windows 10 for Raspberry Pi Links
https://docs.microsoft.com/en-us/windows/iot-core/tutorials/quickstarter/prototypeboards
https://docs.microsoft.com/en-us/answers/questions/492917/how-to-install-windows-10-iot-core-on-raspberry-pi.html
https://docs.microsoft.com/en-us/windows/iot/iot-enterprise/getting_started

Windows 11 for Raspberry Pi Links
https://www.youtube.com/user/leepspvideo
https://www.youtube.com/watch?v=WqFr56oohCE
https://www.worproject.ml

Setting up the Raspberry Pi Server

These are the steps I used to set up my Pi.

Dedicated IP

Before I began I ran ifconfig on my Pi to obtain my Raspberry Pi’s wireless card’s MAC address. I logged into my router and set up a dedicated IP (192.168.0.50); this way I have an IP address that remains the same.

Hostname

I set my hostname here

sudo nano /etc/hosts
sudo nano /etc/hostname

I verified my hostname with this command

hostname

I verified my IP address with this command

hostname -I

Samba Share

I set up the Samba service to allow me to copy files to and from the Pi.

sudo apt-get install samba samba-common-bin
sudo apt-get update

I made a folder to share files

 mkdir ~/share

I edited the Samba config file

sudo nano /etc/samba/smb.conf

In the config file I set my workgroup settings


workgroup = Hyrule
wins support = yes

I defined a share at the bottom of the config file (and saved)

[PiShare]
comment=Raspberry Pi Share
path=/home/pi/share
browseable=Yes
writeable=Yes
only guest=no
create mask=0777
directory mask=0777
public=no

I set an SMB password

sudo smbpasswd -a pi
New SMB password: ********
Retype new SMB password: ********

I tested the share from a Windows PC

And the share is accessible on the Raspberry Pi

Great, now I can share files with drag and drop (instead of via SCP)

Mono

I know how to code C# Windows executables (I have 25 years of experience). I do not want to learn Java or Python to code a GUI application for a touch screen if possible.

I set up Mono from Home | Mono (mono-project.com) to be able to run Windows C# EXEs on Raspbian.

sudo apt install apt-transport-https dirmngr gnupg ca-certificates

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb https://download.mono-project.com/repo/debian stable-raspbianbuster main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list

sudo apt update

sudo apt install mono-devel

I copied an EXE I wrote in C# on Windows and ran it with Mono

sudo mono ~/HelloWorld.exe
Exe Test OK

This worked.

Nginx Web Server

I installed NGINX and configured it

sudo apt-get install nginx

I created a /www folder for nginx

sudo mkdir /www

I created a place-holder file in the www root

sudo nano /www/index.html

I set permissions to allow Nginx to access /www

sudo chown -R www-data:www-data /www

I edited the NginX config as required

sudo nano /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/nginx.conf 

I tested and reloaded the nginx config


sudo nginx -t
sudo nginx -s reload
sudo systemctl start nginx

I started NginX

sudo systemctl start nginx

I tested nginx in a web browser

NodeJS/NPM

I installed NodeJS

sudo apt update
sudo apt install nodejs npm -y

I verified Node was installed

nodejs --version
> v12.22.5

PHP

I installed PHP

sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg

echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update

sudo apt install -y php8.0-common php8.0-cli php8.0-xml

I verified PHP with this command

php --version

> PHP 8.0.13 (cli) (built: Nov 19 2021 06:40:53) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.13, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.13, Copyright (c), by Zend Technologies

I installed PHP-FPM

sudo apt-get install php8.0-fpm

I verified the PHP FPM sock was available before adding it to the NGINX Config

sudo ls /var/run/php*/**.sock
> /var/run/php/php8.0-fpm.sock  /var/run/php/php-fpm.sock
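As a rough sketch (not my exact config), the kind of block that points NGINX at that socket looks like the following; the root (/www) matches the folder created earlier, and everything else is an assumption for illustration.

# Sketch only: edit /etc/nginx/sites-enabled/default along these lines
server {
    listen 80 default_server;
    root /www;                  # web root created earlier
    index index.php index.html;

    # Hand .php requests to the PHP-FPM socket verified above
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
    }
}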

I reviewed PHP Settings

sudo nano /etc/php/8.0/cli/php.ini 
sudo nano /etc/php/8.0/fpm/php.ini

I created a /www/ppp.php file with these contents

<?php
  phpinfo(); // all info
  // module info, phpinfo(8) identical
  phpinfo(INFO_MODULES);
?>

PHP is working

PHP Test OK

I changed these php.ini settings (fine for local development).

max_input_vars = 1000
memory_limit = 1024M
max_file_uploads = 20M
post_max_size = 20M
display_errors = on

MySQL Database

I installed MariaDB

sudo apt install mariadb-server

I updated my pi password

passwd

I ran the Secure MariaDB Program

sudo mysql_secure_installation

After working through each setting, I ran mysql as root to test MySQL.

PHPMyAdmin

I installed phpMyAdmin to be able to edit MySQL databases via the web.

I followed this guide to set up phpMyAdmin via lighttpd and then via NGINX.

I then logged into MySQL, set user permissions, created a test database and changed settings as required.

NginX to NodeJS API Proxy

I edited my NGINX config to create a proxy to the Node API.
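Something like the following location block is what I mean (sketch only; the upstream port 3000 is an assumption and should match whatever port the Node app actually listens on).

# Sketch only: reverse-proxy /api/ requests to the local Node app
location /api/ {
    proxy_pass http://127.0.0.1:3000;   # assumed Node API port
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}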

Test Webpage/API

Todo

I installed PM2, the NodeJS process manager

sudo npm install -g pm2 

Node apps can be started as a service from the CLI

pm2 start api_v1.js

PM2 status

pm2 status

You can delete node apps from PM2 (if desired)

pm2 delete api_v1.js
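If you want PM2-managed apps to come back automatically after a reboot, PM2 can register itself as a boot service and remember the running app list (shown here as a usage note, not something I have configured above).

pm2 startup   # prints the command to register PM2 as a boot service
pm2 save      # saves the currently running apps so they are restored on boot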

Sending Email from CLI

I set up sendemail to allow emails to be sent from the CLI with these commands

 sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail  

I logged into my G Suite account and set up an alias and app password to use.

Now I can send emails from the CLI

sudo sendemail -f [email protected] -t [email protected] -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp **************

I added this to a Bash script (“/Scripts/Up.sh”) and added an event to send an email every 6 hours
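If the 6-hourly event is done with cron (an assumption; the post does not say how it is scheduled), the crontab entry would look something like this.

# Sketch only: run the email script at the top of every 6th hour
0 */6 * * * /Scripts/Up.sh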

7 Inch Full View LCD IPS Touch Screen 1024*600 

I purchased a 7″ touch screen from Banggood. I got a heads-up from the following video.

I plugged the touch USB cable into my Pi’s USB3 port. I plugged the HDMI adapter into the screen and the Pi (with the supplied mini plug).

I turned on the Pi and it works and looks amazing.

This is displaying a demo C# app I wrote. It’s running via mono.

I did have to add the following to config.txt to get native resolution. The manual on the supplied CD was helpful (but I did not check it at first).

max_usb_current=1
hdmi_force_hotplug=1
config_hdmi_boost=7
hdmi_group=2
hdmi_mode=1
hdmi_mode=87 
hdmi_drive=1
display_rotate=0
hdmi_cvt 1024 600 60 6 0 0 0
framebuffer_width=1024
framebuffer_height=600

PiJuice UPS HAT

I purchased an external LiPo UPS to keep the Raspberry Pi fed with power (even when the power goes out).

The stock battery was not charged and was quite weak when I first installed it. Do fully charge the battery before testing.

PiJuice

Stock Battery = 3.7V @ 1820mAh

Below are screenshots of the PiJuice setup.

PiJuice HAT Settings

PiJuice General Settings

General Settings

There is an option to set events for every button

Extensive screen to set button events

LED Status color and function

Set LED status and color

IO for the PiJuice Input. I will sort this out later.

PiJuice IO settings

A new firmware was available. I had v1.4

Update firmware screen

I updated the firmware

Firmware update worked

Firmware flash success

Battery settings

Battery Settings

PiJuice Button Config

Button config

Wake Up Alarm time and RTC

Clock Settings

System Settings

System Settings

System Events

system settings page

User Scripts

Define user scripts

I ordered a bigger battery as my screen, M.2, fan and UPS consume close to the maximum the stock battery can supply.

10,000mAh battery

After talking with the seller of the battery, they advised I set up the 10,000mAh battery using the 1,000mAh battery profile in PiJuice but change the capacity and charge current:

  • Capacity = 10,000mAh
  • Charge current = 850mA

And for battery longevity set the

  • Cutoff voltage: 3,250mV

Final Battery Setup

Battery settings based off the 1000mAh battery profile: Capacity 10,000mAh, Charge current 850mA and Cutoff 3,250mV

WeMos Setup

I ordered 20x WeMos D1 Mini Pro (16Mbit) clones to run the sensors. I soldered the legs on in batches of 8.

WeMos installed on breadboards ready to solder pins

Soldering was not perfect but worked

20x soldered wemos

Soldering is not perfect but each joint was triple tested.

Close up of soldered joints

I only had one dead WeMos.

I will set up the final units on ProtoBoards.

Protoboard

20x WeMos ready for service and the external aerial is glued down. The hot glue was a bad idea; I had to rotate a resistor under the hot glue.

20x wemos ready.

Revision

I ended up reordering the WeMos Minis and soldering on female headers so I can add OLED screens.

air mon enclosure

I added female headers to allow an OLED screen

new wemos

I purchased a microscope to be able to see better.

microscope

Each sensor will have a mini OLED screen.

mini oled screen

0.66″ OLED Screens

oled screen

I designed a PCB in Photoshop and had it turned into a PCB via https://www.fiverr.com/syedzamin12. I ordered 30x boards from https://jlcpcb.com/

Custom PCB

The PCBs fit inside the new enclosure perfectly.

I am waiting for smaller screws to arrive.

PCB v0.2

I decided to design a board with 2 switches (and a light sensor to turn the screen off at night)

Breadboard Prototype

Prototype

I spoke to https://www.fiverr.com/syedzamin12 and within 24 hours a PCB was designed.

PCB Layers

This time I will get a purple PCB from JLCPCB and add a dinosaur for my son

Top PCB View

TOP PCB View

Back PCB View

Back PCB View

3D PCB View

3D PCB view

JLCPCB made the board in 3 days

3 days

Now I need to wait a few weeks for the new PCB to arrive

Also, I finished the firmware for the v0.2 PCB.

I ordered some switches

I also ordered some reset buttons

I might add a larger 0.96″ OLED screen

Wifi and Static IP Test

I uploaded a sketch to each WeMos and tested the WiFi and static IP that was given.

Sketch

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>


#define SERVER_IP "192.168.0.50"

#ifndef STASSID
#define STASSID "wifi_ssid_name"
#define STAPSK  "************"
#endif

void setup() {

  Serial.begin(115200);

  Serial.println();
  Serial.println();
  Serial.println();

  WiFi.begin(STASSID, STAPSK);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected! IP address: ");
  Serial.println(WiFi.localIP());

}

void loop() {
  // wait for WiFi connection
  if ((WiFi.status() == WL_CONNECTED)) {

    WiFiClient client;
    HTTPClient http;

    Serial.print("[HTTP] begin...\n");
    // configure target server and url
    http.begin(client, "http://" SERVER_IP "/api/v1/test"); //HTTP
    http.addHeader("Content-Type", "application/json");

    Serial.print("[HTTP] POST...\n");
    // start connection and send HTTP header and body
    int httpCode = http.POST("{\"hello\":\"world\"}");

    // httpCode will be negative on error
    if (httpCode > 0) {
      // HTTP header has been send and Server response header has been handled
      Serial.printf("[HTTP] POST... code: %d\n", httpCode);

      // file found at server
      if (httpCode == HTTP_CODE_OK) {
        const String& payload = http.getString();
        Serial.println("received payload:\n<<");
        Serial.println(payload);
        Serial.println(">>");
      }
    } else {
      Serial.printf("[HTTP] POST... failed, error: %s\n", http.errorToString(httpCode).c_str());
    }

    http.end();
  }

  delay(1000);
}

The WeMos booted, connected to WiFi, set an IP, and tried to POST a request to a URL.

........................................................
Connected! IP address: 192.168.0.51
[HTTP] begin...
[HTTP] POST...
[HTTP] POST... failed, error: connection failed

The POST failed because my Pi API server was off.

Touch Screen Enclosure

I constructed a basic enclosure and screwed the touch screen to it. I need to find a flexible black strip to put around the screen and cover up the gaps.

Wooden box with the screen in it

The touch screen has been screwed in.

Screen screwed in

Over the Air Updating

I followed this guide and now have the WeMos updatable over WiFi.

Basically, I installed the libraries “AsyncHTTPSRequest_Generic”, “AsyncElegantOTA”, “AsyncHTTPRequest_Generic”, “ESPAsyncTCP” and “ESPAsyncWebServer”.

Manage Libraries

A few libraries would not download so I manually downloaded the code from the GitHub repositories and then extracted them to my Documents\Arduino\libraries folder.

I then opened the example project “AsyncElegantOTA\ESP8266_Async_Demo”

I reviewed the code

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>

const char* ssid = "........";
const char* password = "........";

AsyncWebServer server(80);


void setup(void) {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request) {
    request->send(200, "text/plain", "Hi! I am ESP8266.");
  });

  AsyncElegantOTA.begin(&server);    // Start ElegantOTA
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();
}
I added my WiFi SSID and password, saved the project, compiled the code and wrote it to my WeMos D1 Mini.

I added LED Blink Code

void setup(void) {
  ...
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  ...
}
void loop(void) {
 ...
  delay(1000);                      // Wait for a second
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(1000);                      // Wait for another second (the LED is active low)
 ...
}

I compiled and tested the code

Now to get new code changes to the WeMos Mini via a binary, I edited the code (changed the LED blink speed) and clicked “Export Compiled Binary”

Compile Binary

When the binary compiled I opened the Sketch Folder

Show Sketch folder

I could see a bin file.

Bin File

I loaded the http://192.168.0.51/update and selected the bin file.

The new firmware applied.

Flashing

I navigated back to http://192.168.0.51

TIP: Ensure any binary you flash still includes the starter sketch code with your WiFi details in it.

Password Protection

I changed the code to add a basic password on access and on OTA update.

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>


//Saved Wifi Credentials (research encryption later or store in a FRAM module?)
const char* ssid = "your-wifi-ssid";
const char* password = "********";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(80);

void setup(void) {
  Serial.begin(115200);       //Serial Mode (Debug)
    
  WiFi.mode(WIFI_STA);        //Client Mode
  WiFi.begin(ssid, password); //Connect to Wifi
 
  Serial.println("");

  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    request->send(200, "text/plain", "Login Success! ESP8266 #001.");
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);

  
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();

  digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
  delay(8000);                      // Wait for 8 seconds
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(8000);                      // Wait for 8 seconds

}

Password prompt for users accessing the device.

Login screen

Password prompt for admin users accessing the device.

admin password protect

Later I will research encrypting the password and storing it on SPIFFS partition or a FRAM memory module.

Adding the DHT22 Sensors

I received my pack of DHT22 sensors (AM2302).

Specifications

  • Operating Voltage: 3.5V to 5.5V
  • Operating current: 0.3mA (measuring) 60uA (standby)
  • Output: Serial data
  • Temperature Range: 0°C to 50°C
  • Humidity Range: 20% to 90%
  • Resolution: Temperature and Humidity both are 16-bit
  • Accuracy: ±1°C and ±1%

I wired it up based on this Adafruit post.

DHT22 Wired Up on a breadboard.

DHT22 and Basic API Working

I will not bore you with hours of coding and debugging, so here is my code that:

  • Allows the WeMos D1 Mini Pro (ESP8266) to connect to WiFi
  • Web Server (with stats)
  • Admin page for OTA updates
  • Password protects the main web folder and OTA admin page
  • Reading DHT Sensor values
  • Debug to serial Toggle
  • LED activity Toggle
  • Json Serialization
  • POST DHT22 data to an API on the Raspberry PI
  • Placeholder for API return values
  • Automatically posts data to the API every 10 seconds
  • etc

Here is the work-in-progress ESP8266 code

#include <ESP8266WiFi.h>        // https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266WiFi/src/ESP8266WiFi.h
#include <ESPAsyncTCP.h>        // https://github.com/me-no-dev/ESPAsyncTCP
#include <ESPAsyncWebServer.h>  // https://github.com/me-no-dev/ESPAsyncWebServer
#include <AsyncElegantOTA.h>    // https://github.com/ayushsharma82/AsyncElegantOTA
#include <ArduinoJson.h>        // https://github.com/bblanchon/ArduinoJson
#include "DHT.h"                // https://github.com/adafruit/DHT-sensor-library
                                // Written by ladyada, public domain

//Todo: Add Authentication
//Fyi: https://api.gov.au/standards/national_api_standards/index.html

#include <ESP8266HTTPClient.h>  //POST Client

//Firmware Stats
bool bDEBUG = true;        //true = debug to Serial output
                           //false = no serial output
//Port Number for the Web Server
int WEB_PORT_NUMBER = 1337; 

//Post Sensor Data Delay
int POST_DATA_DELAY = 10000; 

bool bLEDS = true;         //true = Flash LED
                           //false =   NO LED's
//Device Variables
String sDeviceName = "ESP-002";
String sFirmwareVersion = "v0.1.0";
String sFirmwareDate = "27/10/2021 23:00";

String POST_SERVER_IP = "192.168.0.50";
String POST_SERVER_PORT = "";
String POST_ENDPOINT = "/api/v1/test";

//Saved Wifi Credentials (research encryption later and store in a FRAM module?)
const char* ssid = "your_wifi_ssid";
const char* password = "***************";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(WEB_PORT_NUMBER);    //Feel free to change the port number

//DHT22 Temp Sensor
#define DHTTYPE DHT22   // DHT 22  (AM2302), AM2321
#define DHTPIN 5
DHT dht(DHTPIN, DHTTYPE);

//Common Variables
String thisBoard = ARDUINO_BOARD;
String sHumidity = "";
String sTempC = "";
String sTempF = "";
String sJSON = "{ }";

//DHT Variables
float h;
float t;
float f;
float hif;
float hic;


void setup(void) {

  //Turn On PIN
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  
  //Serial Mode (Debug)
  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
    delay(100);                       // Wait 100 ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100 ms
  }

  if (bDEBUG) Serial.begin(115200);
  if (bDEBUG) Serial.println("Serial Begin");

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
    delay(100);                       // Wait 100 ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100 ms
  }
  if (bDEBUG) Serial.println("Wifi Setup");
  if (bDEBUG) Serial.println(" - Client Mode");
  
  WiFi.mode(WIFI_STA);        //Client Mode
  
  if (bDEBUG) Serial.print(" - Connecting to Wifi: " + String(ssid));
  WiFi.begin(ssid, password); //Connect to Wifi
 
  if (bDEBUG) Serial.println("");
  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    if (bDEBUG) Serial.print(".");
  }
  if (bDEBUG) Serial.println("");
  if (bDEBUG) Serial.print("- Connected to ");
  if (bDEBUG) Serial.println(ssid);
  
  if (bDEBUG) Serial.print("IP address: ");
  if (bDEBUG) Serial.println(WiFi.localIP());

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
    delay(100);                       // Wait 100 ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100 ms
  }

  
  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    
        String sendHtml = "";
        sendHtml = sendHtml + "<html>\n";
        sendHtml = sendHtml + " <head>\n";
        sendHtml = sendHtml + " <title>ESP# 002</title>\n";
        sendHtml = sendHtml + " <meta http-equiv=\"refresh\" content=\"5\";>\n";
        sendHtml = sendHtml + " </head>\n";
        sendHtml = sendHtml + " <body>\n";
        sendHtml = sendHtml + " <h1>ESP# 002</h1>\n";
        sendHtml = sendHtml + " <u2>Debug</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Device Name: " + sDeviceName + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Version: " + sFirmwareVersion + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Date: " + sFirmwareDate + " </li>\n";
        sendHtml = sendHtml + " <li>Board: " + thisBoard + " </li>\n";
        sendHtml = sendHtml + " <li>Auto Refresh Root: On </li>\n";
        sendHtml = sendHtml + " <li>Web Port Number: " + String(WEB_PORT_NUMBER) +" </li>\n";
        sendHtml = sendHtml + " <li>Serial Debug: " + String(bDEBUG) +" </li>\n";
        sendHtml = sendHtml + " <li>Flash LED's Debug: " + String(bLEDS) +" </li>\n";
        sendHtml = sendHtml + " <li>SSID: " + String(ssid) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT TYPE: " + String(DHTTYPE) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT PIN: " + String(DHTPIN) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_DATA_DELAY: " + String(POST_DATA_DELAY) +" </li>\n";

        sendHtml = sendHtml + " <li>POST_SERVER_IP: " + String(POST_SERVER_IP) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_ENDPOINT: " + String(POST_ENDPOINT) +" </li>\n";
        
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <u2>Sensor</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Humidity: " + sHumidity + "% </li>\n";
        sendHtml = sendHtml + " <li>Temp: " + sTempC + "c, " + sTempF + "f. </li>\n";
        sendHtml = sendHtml + " <li>Heat Index: " + String(hic) + "c, " + String(hif) + "f.</li>\n";
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <u2>JSON</h2>";
        
        // Allocate the JSON document Object/Memory
        // Use https://arduinojson.org/v6/assistant to compute the capacity.
        StaticJsonDocument<250> doc;
        //JSON Values     
        doc["Name"] = sDeviceName;
        doc["humidity"] = sHumidity;
        doc["tempc"] = sTempC;
        doc["tempf"] = sTempF;
        doc["heatc"] = String(hic);
        doc["heatf"] = String(hif);
        
        sJSON = "";
        serializeJson(doc, sJSON);
        
        sendHtml = sendHtml + " <ul>" + sJSON + "</ul>\n";
        
        sendHtml = sendHtml + " <u2>Seed</h2>";
        long randNumber = random(100000, 1000000);
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <p>" + String(randNumber) + "</p>\n";
        sendHtml = sendHtml + " </ul>\n";
       
        sendHtml = sendHtml + " </body>\n";
        sendHtml = sendHtml + "</html>\n";
        //Send the HTML   
        request->send(200, "text/html", sendHtml);
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);
  
  server.begin();
  if (bDEBUG) Serial.println("HTTP server started");
 
  if (bDEBUG) Serial.println("Board: " + thisBoard);

  //Setup the DHT22 Object
  dht.begin();
  
}

void loop(void) {

  AsyncElegantOTA.loop();

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
    delay(100);                       // Wait 100 ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100 ms
  }


  //Display Temp and Humidity Data

  h = dht.readHumidity();
  t = dht.readTemperature();
  f = dht.readTemperature(true);

  // Check if any reads failed and exit early (to try again).
  if (isnan(h) || isnan(t) || isnan(f)) {
    if (bDEBUG) Serial.println(F("Failed to read from DHT sensor!"));
    return;
  }
  
  hif = dht.computeHeatIndex(f, h);         // Compute heat index in Fahrenheit (the default)
  hic = dht.computeHeatIndex(t, h, false);  // Compute heat index in Celsius (isFahreheit = false)

  if (bDEBUG) Serial.print(F("Humidity: "));
  if (bDEBUG) Serial.print(h);
  if (bDEBUG) Serial.print(F("%  Temperature: "));
  if (bDEBUG) Serial.print(t);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(f);
  if (bDEBUG) Serial.print(F("°F  Heat index: "));
  if (bDEBUG) Serial.print(hic);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(hif);
  if (bDEBUG) Serial.println(F("°F"));

  //Save for Page Load
  sHumidity = String(h,2);
  sTempC = String(t,2);
  sTempF = String(f,2);

  //Post to Pi API
    // Allocate the JSON document Object/Memory
    // Use https://arduinojson.org/v6/assistant to compute the capacity.
    StaticJsonDocument<250> doc;
    //JSON Values     
    doc["Name"] = sDeviceName;
    doc["humidity"] = sHumidity;
    doc["tempc"] = sTempC;
    doc["tempf"] = sTempF;
    doc["heatc"] = String(hic);
    doc["heatf"] = String(hif);
    
    sJSON = "";
    serializeJson(doc, sJSON);

    //Post to API
    if (bDEBUG) Serial.println(" -> POST TO API: " + sJSON);

   //Test POST
  
    if ((WiFi.status() == WL_CONNECTED)) {
  
      WiFiClient client;
      HTTPClient http;
  
    
      if (bDEBUG) Serial.println(" -> API Endpoint: http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT);
      http.begin(client, "http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT); //HTTP


      if (bDEBUG) Serial.println(" -> addHeader: \"Content-Type\", \"application/json\"");
      http.addHeader("Content-Type", "application/json");
  
      // start connection and send HTTP header and body
      int httpCode = http.POST(sJSON);
      if (bDEBUG) Serial.print("  -> Posted JSON: " + sJSON);
  
      // httpCode will be negative on error
      if (httpCode > 0) {
        // HTTP header has been send and Server response header has been handled

  
        //See https://api.gov.au/standards/national_api_standards/api-response.html 
        // Response from Server
        if (bDEBUG) Serial.println("  <- Return Code: " + httpCode);
                
        //Get the Payload
        const String& payload = http.getString();
          if (bDEBUG) Serial.println("   <- Received Payload:");
          if (bDEBUG) Serial.println(payload);
          if (bDEBUG) Serial.println("   <- Payload (httpcode: 201):");
          

         //Handle the HTTP Code
        if (httpCode == 200) {
          if (bDEBUG) Serial.println("  <- 200: Invalid API Call/Response Code");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 201) {
          if (bDEBUG) Serial.println("  <- 201: The resource was created. The Response Location HTTP header SHOULD be returned to indicate where the newly created resource is accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 202) {
          if (bDEBUG) Serial.println("  <- 202: Is used for asynchronous processing to indicate that the server has accepted the request but the result is not available yet. The Response Location HTTP header may be returned to indicate where the created resource will be accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 400) {
          if (bDEBUG) Serial.println("  <- 400: The server cannot process the request (such as malformed request syntax, size too large, invalid request message framing, or deceptive request routing, invalid values in the request) For example, the API requires a numerical identifier and the client sent a text value instead, the server will return this status code.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 401) {
          if (bDEBUG) Serial.println("  <- 401: The request could not be authenticated.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 403) {
          if (bDEBUG) Serial.println("  <- 403: The request was authenticated but is not authorised to access the resource.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 404) {
          if (bDEBUG) Serial.println("  <- 404: The resource was not found.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 415) {
          if (bDEBUG) Serial.println("  <- 415: This status code indicates that the server refuses to accept the request because the content type specified in the request is not supported by the server");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 422) {
          if (bDEBUG) Serial.println("  <- 422: This status code indicates that the server received the request but it did not fulfil the requirements of the back end. An example is a mandatory field was not provided in the payload.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 500) {
          if (bDEBUG) Serial.println("  <- 500: An internal server error. The response body may contain error messages.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }

        
      } else {
        if (bDEBUG) Serial.println("   <- Unknown Return Code (ERROR): " + httpCode);
        //if (bDEBUG) Serial.printf("    " + http.errorToString(httpCode).c_str());
        
      }

    }

    if (bDEBUG) Serial.print("\n\n");

    delay(POST_DATA_DELAY);
  }

Here is a screenshot of the Arduino IDE Serial Monitor debugging the code

Serial Monitor

Here is a screenshot of the NodeJS API on the Raspberry Pi accepting the POSTed data from the ESP8266

API receiving data

Here is a sneak peek of the code accepting the posted data

API Code

The final code will be open sourced.
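Until then, here is a rough sketch (not the final code) of an Express endpoint that would accept the JSON the ESP8266 posts above; the use of Express and the port number (3000) are assumptions.

// Sketch only - assumes Express is installed (npm install express)
const express = require('express');
const app = express();

app.use(express.json()); // parse application/json request bodies

// Matches the endpoint the ESP8266 posts to (/api/v1/test)
app.post('/api/v1/test', (req, res) => {
  const reading = req.body; // e.g. { Name: 'ESP-002', humidity: '45.20', tempc: '21.10', ... }
  console.log('Received reading from', reading.Name, reading);
  // TODO: validate and save the reading (see the MySQL section below)
  res.status(201).json({ status: 'created', received: reading });
});

app.listen(3000, () => console.log('API listening on port 3000'));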

API with 2x sensors (18x more soon)

I built 2 sensors (on Breadboards) to start hitting the API

2 sensors on a breadboard

18 more sensors are ready for action (after I get temporary USB power sorted).

18x Sensors

PiJuice and Battery save the Day

I accidentally used my Pi for a few hours (developing the API) without realising the power to the PiJuice was not connected.

The PiJuice worked a treat and supplied the Pi from battery

Battery power was disconnected

I plugged the power back in after 25% of the battery was drained.

Power Restored

Research and Setup TRIM/Defrag on the M.2 SSD

Todo: Research

Add a Buzzer to the Raspberry Pi and Connect to the PiJuice No Power Event

Todo

Wire Up a Speaker to the PiJuice

Todo: Figure out custom scripts and add a piezo speaker to the PiJuice to alert me of issues in the future.

Add buttons to the enclosure

Todo

Add email alerts from the system

I logged into Google G Suite (my domain’s email provider) and set up an email alias for my domain (“[email protected]”). I added this alias to Gmail (logged in with my G Suite account).

I created an app-specific password in G Suite to allow my Pi to use a dedicated password to access my email.

I installed these packages on the Raspberry Pi

sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail    

I can run this command to send an email to my primary email

sudo sendemail -f [email protected] -t [email protected]_domain.com -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected]_domain.com -xp ********************

The email arrives from the Raspberry Pi

Test Email Screenshot

PiJuice Alerts (email)

I created some Python scripts and configured PiJuice to email me.

user scripts

I assigned the scripts to Events

Added functions

Python Script (Cron Job) to email the battery level every 6 hours

Todo

Building the Co2/PM2.5 Sensors

Todo: (Waiting for parts)

The AirGradient PCBs have arrived.

Air Gradient PCB's

NodeJS API writing to MySQL/Influx etc

Todo: Save Data to a Database
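A rough sketch of how the Node API could write each reading to MariaDB using a connection pool might look like this; the package choice ("mysql"), table, columns and credentials are all placeholders/assumptions for illustration.

// Sketch only - assumes the "mysql" npm package (npm install mysql)
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 10,
  host: '127.0.0.1',
  user: 'pi_api',        // hypothetical MySQL user
  password: '********',
  database: 'homepi'     // hypothetical database
});

// Insert one sensor reading (shape matches the JSON the ESP8266 posts)
function saveReading(reading, callback) {
  const sql = 'INSERT INTO sensor_readings (device, humidity, temp_c, temp_f, heat_c, heat_f) VALUES (?, ?, ?, ?, ?, ?)';
  const values = [reading.Name, reading.humidity, reading.tempc, reading.tempf, reading.heatc, reading.heatf];
  pool.query(sql, values, callback);
}

module.exports = { saveReading };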

Set up 20x WeMos External Antennas (DONE, I ordered new factory-rotated resistors)

I assumed the WeMos D1 Mini Pros with external antennas were actually using the external antenna. Wrong.

I have to move 20x resistors (1 per WeMos) to switch over to the external antenna.

This will be fun as I added hot glue over the area where the resistor is (to hold down the antenna).

Reading configuration files via SPIFFS

Todo

Power over Ethernet (PoE) (SKIP, will use plain old USB wall plugs)

Todo: Passive PoE

Building a C# GUI for the Touch Panel

Todo (Started coding this)

Todo (Passive POE, 5v, 3.3v)?

Building the enclosures for the sensors

Designed and ordered the PCB, firmware next.

Custom PCB?

Yes, See above

Backing up the Raspberry Pi M.2 Drive

This is quite easy as the M.2 drive is connected via a USB plug. I shut down the Pi and plugged the M.2 board into my PC.

I then backed up the entire disk to my PC with Acronis software (review here).

I now have a complete backup of my Pi on a remote network share (and my primary pc).

Version History

v0.9.63 – PCB v0.2 Designed and ordered.

v0.9.62 – 3/2/2022 Update

v0.9.61 – New Nginx, PHP, MySQL etc

v0.9.60 – Fresh Bullseye install (Buster upgrade failed)

v0.951 Email Code in PiJuice

v0.95 Added Email Code

v0.94 Added Todo Areas.

v0.93 2x Sensors hitting the API, 18x sensors ready, Air Gradient

v0.92 DHT22 and Basic API

v0.91 Password Protection

v0.9 Final Battery Setup

v0.8 OTA Updates

v0.7 Screen Enclosure

v0.6 Added Wifi Test Info

v0.5 Initial Post

Filed Under: Analytics, API, Arduino, Cloud, Code, GUI, IoT, Linux, MySQL, NGINX, NodeJS, OS Tagged With: api, ESP8266, MySQL, nginx, raspberry pi, WeMos

I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance.

December 22, 2020 by Simon

I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance. Here is what I did to set up a complete Ubuntu 18.04 system (NGINX, PHP, MySQL, WordPress etc). This is not a paid review (just me documenting my steps over 2 days).

Background (CPanel hosts)

In 1999 I hosted my first domain (www.fearby.com) on a host in Seattle (for $10 USD a month); the host used CPanel and all was good. After a decade I was using the domain more for online development and the website was now too slow (I think I was on dial-up or ADSL 1 at the time). I moved my domain to an Australian host (for $25 a month).

After 8 years the domain host was sold and performance remained mediocre. After another year the new host was sold again and performance was terrible.

I started receiving Resource Limit Is Reached warnings (basically this was a plot by the new CPanel host to say “Pay us more and this message will go away”).

Page load times were near 30 seconds.

cpenal_usage_exceeded

The straw that broke the camel’s back was their demand of $150/year for a dodgy SSL certificate.

I needed to move to a self-managed server where I was in control.

Buying a Domain Name

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Self Managed Server

I found a good web IDE ( http://www.c9.io/ ) that allowed me to connect to a cloud VM.  C9 allowed me to open many files and terminal windows and reconnect to them later. Don’t get excited, though, as AWS has purchased C9 and it’s not the same.

C9 IDE

C9 IDE

I spun up a Digital Ocean Server at the closest data centre in Singapore. Here was my setup guide creating a Digital Ocean VM, connecting to it with C9 and configuring it. I moved my email to G Suite and moved my WordPress to Digital Ocean (other guides here and here).

I was happy since I could now send emails via CLI/code, set up free SSL certs, add second domain email to G Suite and Secure G Suite. No more usage limit errors either.

Self-managing a server requires more work but it is more rewarding (flexible, faster and cheaper). Page load times were now near 20 seconds (a 10-second improvement).

Latency Issue

Over 6 months, performance on Digital Ocean (in Singapore) from Australia started to drop (mentioned here).  I tried upgrading the memory but that did not help (latency was king).

Moved the website to Australia

I moved my domain to Vultr in Australia (guide here and here). All was good for a year until traffic growth started to increase.

Blog Growth

I tried upgrading the memory on Vultr, set up PHP child workers and set up Cloudflare.

GT Metrix scores were about a “B” and Google Page Speed Scores were in the lower 40’s. Page loads were about 14 seconds (5-second improvement).

Tweaking WordPress

I set up an image compression plugin in WordPress then set up a cloud image compression and CDN Plugin from the same vendor.  Page Speed info here.

GT Metrix scores were now occasionally an “A” and Page Speed scores were in the lower 20’s. Page loads were about 3-5 seconds (10-second improvement).

A mixed bag from Vultr (more optimisation and performance improvements were needed).

This screenshot shows poor www.gtmetrix.com scores, poor Google PageSpeed Index scores and the upgrade from 1GB to 2GB of memory on my server.

Google Chrome Developer Console audit results on the Vultr-hosted website were not very good (I stopped checking as nothing helped).

This is a screenshot showing poor site performance (screenshot taken in Google Dev tools audit feature)

The problem was that the Vultr server (400km away in Sydney) went offline (my issue), and everything above (adding more memory, adding 2x CDNs (EWWW and Cloudflare), adding PHP child workers etc.) did not seem to help.

Enter UpCloud…

Recently, a friend sent a link to a blog article about a host called “UpCloud” who promised “Faster than SSD” performance.  This can’t be right: “Faster than SSD”? I was intrigued. I wanted to check it out as I thought nothing was faster than SSD (well, maybe RAM).

I signed up for a trial and ran a disk IO test (read the review here) and I was shocked. It’s fast. Very fast.

Summary: UpCloud was twice as fast (Disk IO and CPU) as Vultr (+ an optional $4/m firewall and $3/m for 1x backup).

This is a screenshot showing Vultr.com servers getting half the read and write disk io performance compared to upcloud.com.

FYI: labels above are KBytes per second. iozone loops through all file sizes from 4 KB to 16,348 KB and measures the reads per second. To be honest, the meaning of the numbers doesn’t interest me; I just want to compare apples to apples.

This is an image showing the iozone results breakdown chart (KBytes per sec on the vertical axis, file size on the horizontal axis and transfer size on the third axis).

(image snip from http://www.iozone.org/ which explains the numbers)

I might have to copy my website on UpCloud and see how fast it is.

Where to Deploy and Pricing

UpCloud Pricing: https://www.upcloud.com/pricing/

UpCloud Pricing

UpCloud does not have a data centre in Australia yet so why choose UpCloud?

Most of my site’s visitors are based in the US and UpCloud have disk IO twice as fast as Vultr (win-win?).  I could deploy to Chicago?

This image shows most of my visitors are in the US

My site’s traffic is growing and I need to ensure the site is fast enough in the future.

This image shows that most of my site’s visitors are hitting my site on weekdays.

Creating an UpCloud VM

I used a friend’s referral code and signed up to create my first VM.

FYI: use my referral code and get $25 free credit. Signing up only takes 2 minutes.

https://www.upcloud.com/register/?promo=D84793

When you click the link above you will receive $25 to try out servers for 3 days. You can exit the trial by depositing $10 into UpCloud.

Trial Limitations

The trial mode restrictions are as follows:

* Cloud servers can only be accessed using SSH, RDP, HTTP or HTTPS protocols
* Cloud servers are not allowed to send outgoing e-mails or to create outbound SSH/RDP connections
* The internet connection is restricted to 100 Mbps (compared to 500 Mbps for non-trial accounts)
* After your 72 hours free trial, your services will be deleted unless you make a one-time deposit of $10

UpCloud Links

The UpCloud support page is located here: https://www.upcloud.com/support/

  • Quick start: Introduction to UpCloud
  • How to deploy a Cloud Server
  • Deploy a cloud server with UpCloud’s API

More UpCloud links to read:

  • Two-Factor Authentication on UpCloud
  • Floating IPs on UpCloud
  • How to manage your firewall
  • Finalizing deployment

Signing up to UpCloud

Navigate to https://upcloud.com/signup and add your username, password and email address and click signup.

New UpCloud Signup Page

Add your address and payment details and click proceed (you don’t need to pay anything; $1 may be charged and instantly refunded to verify the card).

Add address and payment details

That’s it, check your email.

Signup Done

Look for the UpCloud email and click https://my.upcloud.com/

Check Email

Now login

Login to UpCloud

Now I can see a dashboard 🙂

UpCloud Dashboard

I was happy to see 24/7 support is available.

This image shows the www.upcloud.com live chat

I opted in for the new dashboard

UpCloud's new dashboard

Deploy My First UpCloud Server

This is how I deployed a server.

Note: If you are going to deploy a server consider using my referral code and get $25 credit for free.

Under the “deploy a server” widget I named the server and chose a location (I think I was supposed to use an FQDN, e.g. “fearby.com”). The deployment worked though. I clicked continue, then more options were made available:

  1. Enter a short server description.
  2. Choose a location (Frankfurt, Helsinki, Amsterdam, Singapore, London and Chicago)
  3. Choose the number of CPU’s and amount of memory
  4. Specify disk number/names and type (MaxIOPS or HDD).
  5. Choose an Operating System
  6. Select a Timezone
  7. Define SSH Keys for access
  8. Allowed login methods
  9. Choose hardware adapter types
  10. Where to send the login password

Deploy Server

FYI: How to generate a new SSH Key (on OSX or Ubuntu)

ssh-keygen -t rsa

Output

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /temp/example_rsa
Enter passphrase (empty for no passphrase): *********************************
Enter same passphrase again:*********************************
Your identification has been saved in /temp/example_rsa.
Your public key has been saved in /temp/example_rsa.pub.
The key fingerprint is:
SHA256:########################### [email protected]
Outputted public and private key

Did the key export? (yes)

> /temp# ls /temp/ -al
> drwxr-xr-x 2 root root 4096 Jun 9 15:33 .
> drwxr-xr-x 27 root root 4096 Jun 8 14:25 ..
> -rw------- 1 user user 1766 Jun 9 15:33 example_rsa
> -rw-r--r-- 1 user user 396 Jun 9 15:33 example_rsa.pub

“example_rsa” is the private key and “example_rsa.pub” is the public key.

  • The public key needs to be added to the server to allow access.
  • The private key needs to be added to any local ssh program used for remote access.

Initialisation script (after deployment)

I was pleased to see an initialization script section that calls actions after the server is deployed. I configured the initialisation script to pull down a few GB of backups from my Vultr website in Sydney (files now removed).

This was my Initialisation script:

#!/bin/bash
echo "Downloading the Vultr websites backups"
mkdir /backup
cd /backup
wget -O www-mysql-backup.sql https://fearby.com/.../www-mysql-backup.sql
wget -O www-blog-backup.zip https://fearby.com/.../www-blog-backup.zip

Confirm and Deploy

I clicked “Confirm and deploy” but I had an alert that said trial mode can only deploy servers up to 1024MB of memory.

This image shows I can’t deploy servers with 2GB in trial mode

Exiting UpCloud Trial Mode

I opened the dashboard and clicked My Account, then Billing; I could see the $25 referral credit but I guess I can’t use that in trial mode.

I exited trial mode by depositing $10 (USD).

View Billing Details

Make a manual 1-time deposit of $10 to exit trial mode.

Deposit $10 to exit the trial

FYI: Server prices are listed below (or view prices here).

UpCloud Pricing

Now I can go back and deploy the server with the same settings above (1x CPU, 2GB Memory, Ubuntu 18.04, MaxIOPS Storage etc)

Deployment takes a few minutes and, depending on what you specified, a password may be emailed to you.

UpCloud Server Deployed

The server is now deployed; now I can connect to it with my SSH program (vSSH).  Simply add the server’s IP, username, password and the SSH private key (generated above) to your ssh program of choice.

fyi: The public key contents start with “ssh-rsa”.

This image shows me connecting to my server via SSH

I noticed that the initialisation script downloaded my 2+GB of files already. Nice.

UpCloud Billing Breakdown

I can now see on the UpCloud billing page in my dashboard that credit is deducted daily (68c); at this rate, I have about 49 days of credit left.

Billing Breakdown

I can manually deposit funds or set up automatic payments at any time 🙂

UpCloud Backup Options

You do not need to set up backups, but in case you want to roll back (if things stuff up), it is a good idea. Backups are an additional charge.

I have set up automatic daily backups with an auto deletion after 2 days

To view the backup schedule, click on your deployed server then click Backup.

List of UpCloud Backups

Note: Backups are charged at $0.056 for every GB stored – so $5.60 for every 100GB per month (half that for 50GB etc)

You can take manual backups at any time (and only be charged for the hour)

UpCloud Firewall Options

I set up a firewall at UpCloud to only allow the minimum number of ports (UpCloud DNS, HTTP, HTTPS and My IP to port 22).  The firewall feature is charged at $0.0056 an hour ($4.03 a month)

I love the ability to set firewall rules on incoming, destination and outgoing ports.

To view your firewall click on your deployed server then click firewall

UpCloud firewall

Update: I modified my firewall to allow inbound ICMP (IPv4/IPv6) and UDP (IPv4/IPv6) packets.

(Note: Old firewall screenshot)

Firewall Rules Allow port 80, 443 and DNS

Because my internet provider has a dynamic IP, I set up a VPN with a static IP and whitelisted it for backdoor access.

Local Ubuntu ufw Firewall

I duplicated the rules in my local ufw (2nd level) firewall (and blocked mail)

sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 80                         ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] 25                         DENY OUT    Anywhere                   (out)
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 22                         ALLOW IN    REMOVED (MY WHITELISTED IP))
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 25 (v6)                    DENY OUT    Anywhere (v6)              (out)
[10] 53                         ALLOW IN    2a04:3540:53::1
[11] 53                         ALLOW IN    2a04:3544:53::1

UpCloud Download Speeds

I pulled down a 1.8GB Ubuntu 18.04 Desktop ISO 3 times from gigenet.com and the file downloaded in 32 seconds (57MB/sec). Nice.

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:04-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso'

ubuntu-18.04-desktop-amd64.iso 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:02:37 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:46-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.1'

ubuntu-18.04-desktop-amd64.iso.1 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:19 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.1' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:03:23-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.2'

ubuntu-18.04-desktop-amd64.iso.2 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:56 (56.8 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.2' saved [1921843200/1921843200]

Install Common Ubuntu Packages

I installed common Ubuntu packages.

apt-get install zip htop ifstat iftop bmon tcptrack ethstatus speedometer iozone3 bonnie++ sysbench siege tree unzip jq ncdu pydf ntp rcconf ufw iperf nmap

Timezone

I checked the server’s time (I thought the timezone was set automatically when I deployed, but it was still on UTC).

$hwclock --show
2018-06-06 23:52:53.639378+0000

I reset the time to Australia/Sydney.

dpkg-reconfigure tzdata
Current default time zone: 'Australia/Sydney'
Local time is now: Thu Jun 7 06:53:20 AEST 2018.
Universal Time is now: Wed Jun 6 20:53:20 UTC 2018.

Now the timezone is set 🙂

Shell History

I increased the shell history.

HISTSIZE=10000
HISTCONTROL=ignoredups
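
These settings persist if they are added to ~/.bashrc (a minimal sketch, assuming the bash shell for the login user):

echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
source ~/.bashrc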

SSH Login

I created a ~/.ssh/authorized_keys file and added my SSH public key to allow password-less logins.

mkdir ~/.ssh
sudo nano ~/.ssh/authorized_keys

I added my public ssh key, then exited the ssh session and logged back in. I can now log in without a password.
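
fyi: if the key is rejected, it is usually a permissions issue; sshd expects the folder and file to be locked down (a quick sketch):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys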

Install NGINX

apt-get install nginx

nginx/1.14.0 is now installed.

A quick GT Metrix test.

This image shows awesome static NGINX performance ratings of 99%

Install MySQL

Run these commands to install and secure MySQL.

apt install mysql-server
mysql_secure_installation

Securing the MySQL server deployment.
> Would you like to setup VALIDATE PASSWORD plugin?: n
> New password: **********************************************
> Re-enter new password: **********************************************
> Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
> Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
> Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
> Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
> Success.

I disabled the validate password plugin because I hate it.

MySQL Ver 14.14 Distrib 5.7.22 is now installed.

Set MySQL root login password type

Set MySQL root user to authenticate via “mysql_native_password”. Run the “mysql” command.

mysql
SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             |                                           | auth_socket           | localhost |
| mysql.session    | hidden                                    | mysql_native_password | localhost |
| mysql.sys        | hidden                                    | mysql_native_password | localhost |
| debian-sys-maint | hidden                                    | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now let’s set the root password authentication method to “mysql_native_password”

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '*****************************************';
Query OK, 0 rows affected (0.00 sec)

Check authentication method.

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             | ######################################### | mysql_native_password | localhost |
| mysql.session    | hidden                                    | mysql_native_password | localhost |
| mysql.sys        | hidden                                    | mysql_native_password | localhost |
| debian-sys-maint | hidden                                    | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now we need to flush permissions.

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Done.

Install PHP

Install PHP 7.2

apt-get install software-properties-common
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install -y php7.2
php -v

PHP 7.2.5, Zend Engine v3.2.0 with Zend OPcache v7.2.5-1 is now installed. Do update PHP frequently.

I made the following changes in /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Install PHP Modules

sudo apt-get install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml

Install PHP FPM

apt-get install php7.2-fpm

Configure PHP FPM config.

Edit /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Restart the PHP FPM service and check its status.

sudo service php7.2-fpm restart
sudo service php7.2-fpm status

Configuring NGINX

If you are not comfortable editing NGINX config files read here, here and here.

I made a new “www root” folder, set permissions and created a default html file.

mkdir /www-root
chown -R www-data:www-data /www-root
echo "Hello World" >> /www-root/index.html

I edited the “root” key in the “/etc/nginx/sites-enabled/default” file and pointed the root at the new location (e.g., “/www-root”).

I added these performance tweaks to /etc/nginx/nginx.conf

> worker_cpu_affinity auto;
> worker_rlimit_nofile 100000;

I added the following lines to the “http {” section in /etc/nginx/nginx.conf

client_max_body_size 10M;

gzip on;
gzip_disable "msie6";
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_types
application/atom+xml
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
font/opentype
image/bmp
image/x-icon
text/cache-manifest
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
#text/html is always compressed by gzip module

gzip_proxied any;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

Check NGINX Status

service nginx status
* nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 21:16:28 AEST; 30min ago
Docs: man:nginx(8)
Main PID: # (nginx)
Tasks: 2 (limit: 2322)
CGroup: /system.slice/nginx.service
|- # nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
`- # nginx: worker process

Install Open SSL that supports TLS 1.3

This is a work in progress. The steps worked fine for me on Ubuntu 16.04 but not on Ubuntu 18.04.

Installing Adminer MySQL GUI

I will use the PHP-based Adminer MySQL GUI to export and import my blog from one server to another. All I needed to do was install it on both servers (a simple one-file download).

cd /utils
wget -O adminer.php https://github.com/vrana/adminer/releases/download/v4.6.2/adminer-4.6.2-mysql-en.php

Use Adminer to Export My Blog (on Vultr)

On the original server open Adminer (http) and..

  1. Login with the MySQL root account
  2. Open your database
  3. Choose “Save” as the output
  4. Click on Export

This image shows the export of the wordpress adminer page

Save the “.sql” file.

I used Adminer on the UpCloud server to Import My Blog

FYI: Depending on the size of your database backup you may need to temporarily increase your upload and post sizes limits in PHP and NGINX before you can import your database.

Edit /etc/php/7.2/fpm/php.ini
> upload_max_filesize = 100M
> post_max_size = 100M

And Edit: /etc/nginx/nginx.conf
> client_max_body_size 100M;

Don’t forget to reload NGINX config and restart NGINX and PHP. Take note of the maximum allowed file size in the screenshot below. I temporarily increased my upload limits to 100MB in order to restore my 87MB blog.
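
The reload/restart itself is just the same commands used elsewhere in this guide:

sudo nginx -t && sudo nginx -s reload
sudo service php7.2-fpm restart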

Now I could open Adminer on my UpCloud server.

  1. Create a new database
  2. Click on the database and click Import
  3. Choose the SQL file
  4. Click Execute to import it

Import MySQL backup with Adminer

Don’t forget to create a user and assign permissions (as required – check your wp-config.php file).
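
If you need to recreate the database user, here is a minimal sketch run from the “mysql” prompt (the database name “wordpressdb”, the user “wpuser” and the password are placeholders – use the values from your wp-config.php):

mysql
CREATE DATABASE wordpressdb;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'use-a-strong-password';
GRANT ALL PRIVILEGES ON wordpressdb.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;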

Import MySQL Database

Tip: Don’t forget to lower the maximum upload file size and max post size after you import your database.

Cloudflare DNS

I use Cloudflare to manage DNS, so I need to tell it about my new server.

You can get your server’s IP details from the UpCloud dashboard.

Find IP

At Cloudflare update your DNS details to point to the server’s new IPv4 (“A Record”) and IPv6 (“AAAA Record”).

Cloudflare DNS

Domain Error

I waited an hour and my website was suddenly unavailable.  At first, I thought this was Cloudflare forcing the redirection of my domain to HTTPS (which was not yet set up).

DNS Not Replicated Yet

I chatted with UpCloud support on their webpage and they kindly helped me diagnose all the common issues (DNS values, DNS replication, Cloudflare settings), and the error was pinpointed to my NGINX installation.  All NGINX config settings looked OK from what we could see. I uninstalled NGINX and reinstalled it (and that fixed it). Thanks UpCloud Support 🙂

Reinstalled NGINX

sudo apt-get purge nginx nginx-common

I reinstalled NGINX and reconfigured /etc/nginx/nginx.conf (I downloaded my SSL cert from my old server just in case).
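
The reinstall itself is just the same package install used earlier in this guide:

sudo apt-get update
sudo apt-get install nginx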

Here is my /etc/nginx/nginx.conf file.

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log /var/log/nginx/www-nginxcriterror.log crit;

events {
        worker_connections 768;
        multi_accept on;
}

http {

        client_max_body_size 10M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        server_tokens off;

        server_names_hash_bucket_size 64;
        server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/www-access.log;
        error_log /var/log/nginx/www-error.log;

        gzip on;

        gzip_vary on;
        gzip_disable "msie6";
        gzip_min_length 256;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

Here is my /etc/nginx/sites-available/default file (fyi, I have not fully set up TLS 1.3 again yet, so I commented out those settings).

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
server {
        root /www-root;

        # Listen Ports
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name www.fearby.com fearby.com localhost;

        # HTTPS Cert
        ssl_certificate /etc/nginx/ssl-cert-path/fearby.crt;
        ssl_certificate_key /etc/nginx/ssl-cert-path/fearby.key;
        ssl_dhparam /etc/nginx/ssl-cert-path/dhparams4096.pem;

        # HTTPS Ciphers
        
        # TLS 1.2
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

        # TLS 1.3			#todo
        # ssl_ciphers 
        # ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DES-CBC3-SHA;
        # ssl_ecdh_curve secp384r1;

        # Force HTTPS
        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        # HTTPS Settings
        server_tokens off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 30m;
        ssl_session_tickets off;
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
	#ssl_stapling on; 						# Requires nginx >= 1.3.7

        # Cloudflare DNS
        resolver 1.1.1.1 1.0.0.1 valid=60s;
        resolver_timeout 1m;

        # PHP Memory 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

	# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            try_files $uri =404;
            # include snippets/fastcgi-php.conf;

            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php7.2-fpm.sock;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # fastcgi_pass 127.0.0.1:9000;
	    }

        location / {
            # try_files $uri $uri/ =404;
            try_files $uri $uri/ /index.php?q=$uri&$args;
            index index.php index.html index.htm;
            proxy_set_header Proxy "";
        }

        # Deny Rules
        location ~ /\.ht {
                deny all;
        }
        location ~ ^/\.user\.ini {
            deny all;
        }
        location ~ (\.ini) {
            return 403;
        }

        # Headers
        location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }

}

SSL Labs SSL Certificate Check

All good thanks to the config above.

SSL Labs

Install WP-CLI

I don’t like setting up FTP to auto-update WordPress plugins. I use the WP-CLI tool to manage WordPress installations by the command line. Read my blog here on using WP-CLI.

Download WP-CLI

mkdir /utils
cd /utils
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Move WP-CLI to the bin folder as “wp”

chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

Test wp

wp --info
OS: Linux 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:15:17 UTC 2018 x86_64
Shell: /bin/bash
PHP binary: /usr/bin/php7.2
PHP version: 7.2.5-1+ubuntu18.04.1+deb.sury.org+1
php.ini used: /etc/php/7.2/cli/php.ini
WP-CLI root dir: phar://wp-cli.phar
WP-CLI vendor dir: phar://wp-cli.phar/vendor
WP_CLI phar path: /www-root
WP-CLI packages dir:
WP-CLI global config:
WP-CLI project config:
WP-CLI version: 1.5.1

Update WordPress Plugins

Now I can run “wp plugin update” to update all WordPress plugins

wp plugin update
Enabling Maintenance mode...
Downloading update from https://downloads.wordpress.org/plugin/wordfence.7.1.7.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wp-meta-seo.3.7.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wordpress-seo.7.6.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Disabling Maintenance mode...
Success: Updated 3 of 3 plugins.
+---------------+-------------+-------------+---------+
| name          | old_version | new_version | status  |
+---------------+-------------+-------------+---------+
| wordfence     | 7.1.6       | 7.1.7       | Updated |
| wp-meta-seo   | 3.7.0       | 3.7.1       | Updated |
| wordpress-seo | 7.5.3       | 7.6.1       | Updated |
+---------------+-------------+-------------+---------+

Update WordPress Core

WordPress core file can be updated with “wp core update“

wp core update
Success: WordPress is up to date.

Troubleshooting: Use the “--allow-root” flag if wp needs higher access (an unsafe practice though).

Install PHP Child Workers

I edited the following file to set up PHP child workers: /etc/php/7.2/fpm/pool.d/www.conf

Changes

> pm = dynamic
> pm.max_children = 40
> pm.start_servers = 15
> pm.min_spare_servers = 5
> pm.max_spare_servers = 15
> pm.process_idle_timeout = 30s;
> pm.max_requests = 500;
> php_admin_value[error_log] = /var/log/www-fpm-php.www.log
> php_admin_value[memory_limit] = 512M

Restart PHP

sudo service php7.2-fpm restart

Test NGINX config, reload NGINX config and restart NGINX

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Output (14 workers are ready)

Check PHP Child Worker Status

sudo service php7.2-fpm status
* php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 19:32:47 AEST; 20s ago
Docs: man:php-fpm7.2(8)
Main PID: # (php-fpm7.2)
Status: "Processes active: 0, idle: 15, Requests: 2, slow: 0, Traffic: 0.1req/sec"
Tasks: 16 (limit: 2322)
CGroup: /system.slice/php7.2-fpm.service
|- # php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
- # php-fpm: pool www

Memory Tweak (set at your own risk)

sudo nano /etc/sysctl.conf

vm.swappiness = 1

Setting swappiness to 1 all but disables the swap file and tells the operating system to use RAM aggressively; a value of 10 is safer. Only set this if you have enough memory available (and free).
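
To apply the new value without waiting for a reboot (a quick sketch):

sudo sysctl vm.swappiness=1
# or re-read everything in /etc/sysctl.conf
sudo sysctl -p
# confirm the value
cat /proc/sys/vm/swappiness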

Possible swappiness settings:

> vm.swappiness = 0 Swap is disabled. In earlier versions, this meant that the kernel would swap only to avoid an out of memory condition when free memory will be below vm.min_free_kbytes limit, but in later versions, this is achieved by setting to 1. [2]
> vm.swappiness = 1 Kernel version 3.5 and over, as well as Red Hat kernel version 2.6.32-303 and over: Minimum amount of swapping without disabling it entirely.
> vm.swappiness = 10 This value is sometimes recommended to improve performance when sufficient memory exists in a system.[3]
> vm.swappiness = 60 The default value.
> vm.swappiness = 100 The kernel will swap aggressively.

The “htop” tool is a handy memory-monitoring alternative to “top”.

Also, you can use good old “watch” command to show near-live memory usage (auto-refreshes every 2 seconds)

watch -n 2 free -m

Script to auto-clear the memory/cache

As a habit, I set up a cronjob that checks whether free memory has fallen below 100MB and, if so, automatically clears the cache (freeing memory).

Script Contents: clearcache.sh

#!/bin/bash

# Script help inspired by https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
ram_use=$(free -m)

# Grab the "Mem:" line from free -m and split it into fields
IFS=$'\n' read -rd '' -a ram_use_arr <<< "$ram_use"
ram_use="${ram_use_arr[1]}"
ram_use=$(echo "$ram_use" | tr -s " ")
IFS=' ' read -ra ram_use_arr <<< "$ram_use"
ram_total="${ram_use_arr[1]}"
ram_used="${ram_use_arr[2]}"
ram_free="${ram_use_arr[3]}"
d=`date '+%Y-%m-%d %H:%M:%S'`

if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then
    echo "Sorry ram_free is not an integer"
else
    if [ "$ram_free" -lt "100" ]; then
        echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..."
        sync; echo 1 > /proc/sys/vm/drop_caches
        sync; echo 2 > /proc/sys/vm/drop_caches
        #sync; echo 3 > /proc/sys/vm/drop_caches    #Not advised in production
        # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
        exit 1
    else
        if [ "$ram_free" -lt "256" ]; then
            echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
            exit 1
        else
            if [ "$ram_free" -lt "512" ]; then
                echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
                exit 1
            else
                echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
                exit 1
            fi
        fi
    fi
fi

I set the cronjob to run every 15 minutes by adding this to my crontab.

SHELL=/bin/bash
*/15  *  *  *  *  root /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log
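
fyi: the script lives in the /scripts folder referenced by the cronjob above; a minimal setup sketch:

sudo mkdir -p /scripts
sudo nano /scripts/clearcache.sh
# paste the script contents from above, then make it executable
sudo chmod +x /scripts/clearcache.sh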

Sample log output

2018-06-10 01:13:22 RAM OK (Total: 1993 MB, Used: 981 MB, Free: 387 MB)
2018-06-10 01:15:01 RAM OK (Total: 1993 MB, Used: 974 MB, Free: 394 MB)
2018-06-10 01:20:01 RAM OK (Total: 1993 MB, Used: 955 MB, Free: 412 MB)
2018-06-10 01:25:01 RAM OK (Total: 1993 MB, Used: 1002 MB, Free: 363 MB)
2018-06-10 01:30:01 RAM OK (Total: 1993 MB, Used: 970 MB, Free: 394 MB)
2018-06-10 01:35:01 RAM OK (Total: 1993 MB, Used: 963 MB, Free: 400 MB)
2018-06-10 01:40:01 RAM OK (Total: 1993 MB, Used: 976 MB, Free: 387 MB)
2018-06-10 01:45:01 RAM OK (Total: 1993 MB, Used: 985 MB, Free: 377 MB)
2018-06-10 01:50:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 379 MB)
2018-06-10 01:55:01 RAM OK (Total: 1993 MB, Used: 979 MB, Free: 382 MB)
2018-06-10 02:00:01 RAM OK (Total: 1993 MB, Used: 980 MB, Free: 380 MB)
2018-06-10 02:05:01 RAM OK (Total: 1993 MB, Used: 971 MB, Free: 389 MB)
2018-06-10 02:10:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 376 MB)
2018-06-10 02:15:01 RAM OK (Total: 1993 MB, Used: 967 MB, Free: 392 MB)

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no automatic cache clears have been triggered 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: Here is a handy guide on viewing swap file usage. I’m not using swap files, so this is only an aside.

After the system rebooted I checked if the swappiness setting was active.

sudo cat /proc/sys/vm/swappiness
1

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system

sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

Take note of your disk name (e.g vda1)

I used tune2fs to enable writing data to the disk before writing to the journal. tune2fs is a great tool for setting file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may lose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.

tune2fs -o journal_data_writeback /dev/vda1

I edited my fstab to append the “writeback,noatime,nodiratime” flags to my volume so they persist after a reboot.

Edit FS Tab:

sudo nano /etc/fstab

I added “writeback,noatime,nodiratime” flags to my disk options.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options> <dump>  <pass>
# / was on /dev/vda1 during installation
#                <device>                 <dir>           <fs>    <options>                                             <dump>  <fsck>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /               ext4    errors=remount-ro,data=writeback,noatime,nodiratime   0       1
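
After the next reboot, the active mount options can be confirmed (vda1 is the disk name from the lsblk output above):

cat /proc/mounts | grep vda1
# look for data=writeback,noatime,nodiratime in the options column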

Updating Ubuntu Packages

Show updatable packages.

apt-get -s dist-upgrade | grep "^Inst"

Update Packages.

sudo apt-get update && sudo apt-get upgrade

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades

sudo apt-get install unattended-upgrades

Enable Unattended Upgrades.

sudo dpkg-reconfigure --priority=low unattended-upgrades

Now I configure what packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated, you may want to manually update these (and monitor updates).

I prefer not to auto-update critical system apps (I will do this myself).

Unattended-Upgrade::Package-Blacklist {
"nginx";
"nginx-common";
"nginx-core";
"php7.2";
"php7.2-fpm";
"mysql-server";
"mysql-server-5.7";
"mysql-server-core-5.7";
"libssl1.0.0";
"libssl1.1";
};

FYI: You can find installed packages by running this command:

apt list --installed

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end (the number is how many days to wait before updating) of each line.

> APT::Periodic::Update-Package-Lists “1”;
> APT::Periodic::Download-Upgradeable-Packages “1”;
> APT::Periodic::AutocleanInterval “7”;
> APT::Periodic::Unattended-Upgrade “1”;

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades
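
To review what unattended-upgrades actually did (assuming the default log file name in that folder):

tail -n 50 /var/log/unattended-upgrades/unattended-upgrades.log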

Update packages now.

unattended-upgrade -d

Almost done.

I Rebooted

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and consistently getting sub-2-second load times (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; this was always bad before.

Update: I found out the longer you set cache delays in Cloudflare the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google is pushing fast servers.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected to get this high a score.  I know Google likes (and is pushing) sub-1-second load times.

My site is loading so well that it is time I restored some old features that were too slow on other servers:

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat).

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows creating democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the shortcode.
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use the EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use the Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress emojis, removes Google Fonts, allows you to define pre-configured domains, async JavaScript files, etc.
  • I use BJ Lazy Load to prevent all images in a post from loading on load (and only as the user scrolls down the page).
  • The GTmetrix for WordPress and Cloudflare plugins are for information only.
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago

Looks good to me; read IO is a little lower than UpCloud’s Singapore data centre, but it’s still faster than Vultr.  I can’t wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Questions to Support (and Their Answers)

Q1) What’s the difference between backups and snapshots (a Twitter user said Snapshots were a thing)

A1) Backups and snapshots are the same things with our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06 / GB of the disk being captured. But capture the whole disk, not just what was used. So for a 50GB drive, we charge $0.06 * 50 = $3/month. Even if 1GB were only used.

  • Support confirmed that each backup is charged (so 5 manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backup charges.
  • I guess a 25GB server will be $1.50 a month

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.

Q4) What happens if my balance hits $0?

A4) You will get notification of low account balance 2 weeks in advance based on your current daily spend. When your balance reaches zero, your servers will be shut down. But they will still be charged for. You can automatically top-up if you want to assign a payment type from your Control Panel. You deposit into your balance when you want. We use a prepaid model of payment, so you need to top up before using, not billing you after usage. We give you lots of chances to top-up.

Support Tips

  • One thing to note, when deleting servers (CPU, RAM) instances, you get the option to delete the storages separately via a pop-up window. Choose to delete permanently to delete the disk, to save credit. Any disk storage lying around even unattached to servers will be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

Check out my new guide on Nixstats for awesome monitoring

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user-defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

Disable dashicons.min.css (for unauthenticated WordPress users).

Find functions.php in the www root

sudo find . -print |grep  functions.php

Edit functions.php

sudo nano ./wp-includes/functions.php

Add the following

// Remove dashicons in frontend for unauthenticated users
add_action( 'wp_enqueue_scripts', 'bs_dequeue_dashicons' );
function bs_dequeue_dashicons() {
    if ( ! is_user_logged_in() ) {
        wp_deregister_style( 'dashicons' );
    }
}

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my listening servers

server {
        root /www;

        ...
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;
        ...

I tested a http2 push page by defining this in /etc/nginx/sites-available/default 

location = /http2/push_demo.html {
        http2_push /http2/pushed.css;
        http2_push /http2/pushedimage1.jpg;
        http2_push /http2/pushedimage2.jpg;
        http2_push /http2/pushedimage3.jpg;
}

Once I tested that push (demo here) was working, I then defined two files to push that were already being sent from my server

location / {
        ...
        http2_push /wp-includes/js/jquery/jquery.js;
        http2_push /wp-content/themes/news-pro/images/favicon.ico;
        ...
}
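
To confirm HTTP/2 is negotiated end to end, a quick check with curl (assuming a curl build with HTTP/2 support):

curl -I --http2 https://fearby.com/
# the response status line should start with HTTP/2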

I used the WordPress Plugin Autoptimize to remove Google font usage (this removed a number of files being loaded when my page loads).

I used the WP-Optimize WordPress plugin to remove comments and disable pingbacks and trackbacks.

WordPress wp-config.php tweaks

# Memory
define('WP_MEMORY_LIMIT','1024M');
define('WP_MAX_MEMORY_LIMIT','1024M');
set_time_limit (60);

# Security
define( 'FORCE_SSL_ADMIN', true);

# Disable Updates
define( 'WP_AUTO_UPDATE_CORE', false );
define( 'AUTOMATIC_UPDATER_DISABLED', true );

# ewww.io
define( 'WP_AUTO_UPDATE_CORE', false );

Add 2FA Authentication to server logins.

I recently checked out YubiCo YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress too.

Tweaks Todo

  • Compress placeholder BJ Lazy Load Image (plugin is broken)
  • Solve 2x Google Analytics tracker redirects (done, switched to Matomo)

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that automatically handles image optimisations and pre Cloudflare/first hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

Results

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.

Revision History

v2.2 Converting to Blocks

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed free typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

Filed Under: CDN, Cloud, Cloudflare, Cost, CPanel, Digital Ocean, DNS, Domain, ExactDN, Firewall, Hosting, HTTPS, MySQL, MySQLGUI, NGINX, Performance, PHP, php72, Scalability, TLS, Ubuntu, UpCloud, Vultr, Wordpress Tagged With: draft, GTetrix, host, IOPS, Load Time, maxIOPS, MySQL, nginx, Page Speed Insights, Performance, php, SSD, ubuntu, UpCloud, vm

How to backup and restore a MySQL database on Windows and Linux

April 21, 2019 by Simon

Why backup and restore

This is a quick guide demonstrating how you can backup and restore a MySQL database on Windows and Linux using Adminer.

You may need to know how to back up and restore a database for a number of reasons, e.g.

  • Send the database to someone to debug or give feedback while learning.
  • Move the database from a local machine to the cloud
  • Move the database from cloud vendor A to cloud vendor B
  • etc.

Having a backup of the VM is good, but having a backup of the database too is better. I use UpCloud to host my VMs and setting up backups is easy, but I cannot download those backups.

UpCloud Backup Screen

Murphy’s Law

“If anything can go wrong, it will”

The most important reason for taking a backup and knowing how to restore it is for disaster recovery reasons.

Backup (the easiest way) with Adminer

Adminer is a free PHP-based IDE for MySQL and other databases. Simply download Adminer and save the file to your local computer or a remote web server directory.

FYI: The Adminer author Jakub Vrana has a patron page; I am a patron of this awesome software.

Snip from Adminer’s website: “Adminer (formerly phpMinAdmin) is a full-featured database management tool written in PHP. Conversely to phpMyAdmin, it consist of a single file ready to deploy to the target server. Adminer is available for MySQL, MariaDB, PostgreSQL, SQLite, MS SQL, Oracle, Firebird, SimpleDB, Elasticsearch and MongoDB.”

adminer.php file icon screenshot

TIP: The file will be publicly accessible to anyone, so don’t save it to a common area; obfuscate the file name, protect it, or delete the file when you are done using it.

Once Adminer is installed, load it in a web browser and log in with your MySQL credentials. Once you log in you will see all databases and an Import and Export menu.

Adminer main screen, all databases and import and export menu.

tbtest is a simple database with one table and 4 fields (ID, Key, Value and Modified)

Click Export to open the export screen.

Export screen showing a list of databases and export options

Click Export, a SQL file will be generated (this is the export of the database).

Here is a save of the file:
https://fearby.com/wp-content/uploads/export.txt

Exported view of https://dev.mysql.com/doc/workbench/en/wb-admin-export-import-management.html

It’s that simple.

Let’s see how the export looks if I add a binary blob field to the table and upload a PNG file.

Screenshot of the new table with a blob field in the Adminer UI

Let’s export the database again in Adminer and check out the output. I used the Sublime Text editor to view the export file.

New Export shows the binary file in the Backup SQL file

Restore (the easiest way) with Adminer

OK, let’s delete the tbtest database and then restore it with Adminer. I used Adminer to delete (DROP) the database.

Database dropped with Adminer

Database “dbtest” deleted.

Now let’s create a blank database to restore to (same name).

Create database screen.

Database created.

dbtest created.

Now let’s import the database backup using Adminer.

Click Import, select the backup file and un-tick Stop on errors.

Import screenshot: dbtest selected, restore file selected, stop on errors disabled

TIP: The 2MB limit next to the Choose File button is defined by your web server and PHP configuration. If you are trying to import a larger database (e.g 80MB), first increase the limits in your web server and in PHP (via php.ini).
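To see what your PHP is currently set to, you can query the two relevant directives from the command line (note: the CLI can read a different php.ini to your web server’s PHP, so treat this as a quick check only):

php -i | grep -E "upload_max_filesize|post_max_size"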

The import (restore) should only take seconds.

Import Success

The database was imported from a backup, all tables and records imported just fine.

The database was imported from a backup

Bonus methods.

On Ubuntu use this guide to backup from the command line. If you use the Oracle MySQL Workbench read this.
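For reference, the command-line equivalent looks roughly like this (a sketch; 'youruser', 'yourdatabase' and the file name are placeholders):

mysqldump -u youruser -p yourdatabase > backup.sql
mysql -u youruser -p yourdatabase < backup.sql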

I hope this helps someone.

Filed Under: Adminer, Backup, Database, MySQL, Restore Tagged With: and, Backup, How, Linux, MySQL, on, restore, to, windows

Setting up the free MySQL database server on Windows 10

April 20, 2019 by Simon

This guide assumes you are a beginner using Windows 10 and maybe have a default website configured using the built-in Internet Information Services (IIS) along with PHP (e.g PHP 7.3.4 or greater).

If you have never used Internet Information Services (IIS) then XAMPP is a great way (single install) to set up a web server, database server and PHP with little fuss.

In this case I will manually install MySQL Community Server to add alongside Internet Information Server (IIS) on Windows 10.

Downloading
MySQL Server (Community Edition)

Go to here and click MySQL Installer for Windows.

Screenshot of Download installer link

Click MySQL Installer for Windows.

fyi: I sent myself on a wild goose chase as I could only see a 32-bit installer (I spent days trying to find a 64-bit installer or manual install binaries that were 64-bit). I should have read the bit that said “MySQL Installer is 32 bit, but will install both 32 bit and 64 bit binaries.“

MySQL Installer is 32 bit, but will install both 32 bit and 64 bit binaries.

You can read the installer documentation here if you wish.

I downloaded the larger of the two available installers (one 16MB the other 300+ MB, same version etc.). I had to login with an Oracle ID to start the download.

Download MySQL Installer screenshot

Install file downloaded

Installing MySQL Server (Community Edition)

I started the installer (accepted the licence agreement)

I accepted the licence agreement

I selected “Full Install“

I could have selected server only or custom.

I selected Full Install

I downloaded and installed Python 3.7. Thanks, MySQL Installer, for adding a link to the Python download.

I Installed Python 3.7

After Python was installed I clicked refresh in MySQL and now MySQL can see Python 3.7.

Now MySQL can see Python 3.7.

I had a Visual Studio plugin install error (because Visual Studio was not installed).

Visual Studio plugin install error (Visual Studio is not installed)

Full Install (all components selected) reported the items are ready for install.

Full Install (all components selected)

Installation status complete.

Installation status list (Full Install)

List of items to configure (post install)

List of items to configure.

I setup a standard MySQL Server (not a Cluster)

I setup a standard MySQL Server (not a Cluster)

I setup MySQL as a standard Development computer on port 3306 over TCP/IP (no Named Pipe, Shared Memory etc).

I setup MySQL as a standard Development computer on port 3306 over TCP/IP.

I enforced strong passwords.

I enforced strong passwords.

I set a root password and added a few app usernames (with passwords).

I set a root password and a few app usernames (with passwords)

I named the MySQL instance and set it to auto-start when Windows starts, running as a standard system account.

I named the MySQL Instance and set it to auto start when windows starts as a standard account.

Post installation in progress.

Installation in progress.

I accepted the defaults for the next screen (configuring the MySQL Router) and tested the connection to MySQL Server.

Connect to MySQL server test screen.

Installation complete.

Installation complete screen.

MySQL WorkBench

I opened the MySQL Workbench software and viewed the server instance information. Nice.

MySQL Workbench Instance information

MySQL Workbench server performance graphs are nice.

MySQL Workbench performance graphs.

I am used to Adminer for managing MySQL on Linux so I will install that now.

Install Adminer (formerly phpMinAdmin)

I prefer the PHP-based Adminer database management tool from here.

https://adminer.net website  screenshot.

I downloaded a single PHP file and placed it in my IIS website root folder (as /adminer.php).

I tried to connect to my MySQL instance but received this error “The server requested authentication method unknown to the client”.

Unable to connect to MySQL with Adminer: Error "The server requested authentication method unknown to the client"

I googled and checked that I had the MySQL extension enabled in php.ini (I did).

I opened MySQL Workbench and opened Manage Server Connections and located my my.ini file location (“C:\ProgramData\MySQL\MySQL Server 8.0\my.ini“). I opened my my.ini in a text editor and commented out the following line

#default_authentication_plugin=caching_sha2_password

and added

default_authentication_plugin=mysql_native_password

I saved the my.ini file, stopped and started the MySQL Service.

MySQL Workbench restarting database service UI screenshot.

I opened the MySQL Workbench and ran a query (File then “New Query Tab“) to set/reset the password.

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password
BY 'your_password_goes_here';  

Screenshot

Reset root password SQL statement "ALTER USER

I then tested logging in to MySQL with Adminer.

Adminer view showing default databases installed by MySQL.

Success, I can now create databases and tables.

Adminer create database table screenshot showing a table with 4 fields.
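If you prefer the command line over Adminer, the mysql client that ships with MySQL Server can do the same thing (a rough sketch; it assumes the MySQL bin folder is on your PATH, and the database, table and column names are just placeholders):

mysql -u root -p -e "CREATE DATABASE dbtest; CREATE TABLE dbtest.tbtest (ID INT AUTO_INCREMENT PRIMARY KEY, ItemKey VARCHAR(64), ItemValue VARCHAR(255), Modified DATETIME);"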

I hope this helps someone.

Filed Under: Database, MySQL, PHP Tagged With: MySQL, php, Setup

Setup a dedicated Debian subdomain (VM), Install MySQL 14 and connect to it from a WordPress on a different VM

July 21, 2018 by Simon

This is how I set up a dedicated Debian subdomain (VM), Installed MySQL 14 and connected to it from a WordPress installation on a different VM

Aside

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving fearby.com from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Now on with the post.

Fearby.com

I will be honest, fearby.com is my play server where I can code, learn about InfoSec and share (It’s also my stroke rehab blog).

There is no faster way to learn than actually doing. The problem is my “doing” usually breaks the live site from time to time (sorry).

I really need to set up a testing environment (DEV-TEST-LIVE or GREEN-BLUE) server(s). GREEN-BLUE has advantages as I can always have a hot spare ready. All I need to do is toggle DNS and I can set the GREEN or BLUE server as the live server.

But first I need to separate my database from my current fearby.com server and set up a new web server. Having a Green and a Blue server that use one database server will help with near real-time production website switches.

Dedicated Database Server

I read the following ( Should MySQL and Web Server share the same server? ) at Percona Database Performance Blog. Having a separate database server should not negatively impact performance (It may even help improve speeds).

Deploy a Debian VM (not Ubuntu)

I decided to set up a Debian server instead of Ubuntu (mostly because of Debian’s strong focus on stability and security).

I logged into the UpCloud dashboard on my mobile phone and deployed a Debian server in 5 mins.  I will be using my existing how to setup Ubuntu on UpCloud guide (even though this is Debian).

TIP: Sign up to UpCloud using this link to get $25 free UpCloud VM credit.

Deploy Debian Server

Deploy a Debian server setup steps:

  1. Login to UpCloud and go to Create server.
  2. Name your Server (use a fully qualified domain name)
  3. Add a description.
  4. Choose your data centre (Chicago for me)
  5. Choose the server specs (1x CPU, 50GB Disk, 2GB Memory, 2TB Traffic for me)
  6. Name the Primary disk
  7. Choose an operating system (Debian for me)
  8. Select an SSH Key
  9. Choose misc settings
  10. Click Deploy server

After 5 mins your server should be deployed.

After Deploy

Setup DNS

Login to your DNS provider and create DNS records pointing to the new IPs (IPv4 and IPv6) provided by UpCloud. It took about 12 hours for DNS to propagate to me in Australia.

Add DNS records with your domain registrar: A record = IPv4 and AAAA record = IPv6

Setup a Firewall (at UpCloud)

I would recommend you set up a firewall at UpCloud as soon as possible (don’t forget to add the recommended UpCloud DNS IPs and any whitelisted IPs to your firewall).

Block everything and only allow

  • Port 22: Allow known IP(s) of your ISP or VPN.
  • Port 53: Allow known UpCloud DNS servers
  • Port 80 (ALL)
  • Port 443 (ALL)
  • Port 3306 Allow your WordPress site and known IP(s) of your ISP or VPN.

Read my post on setting up a whitelisted IP on an UpCloud VM… as it is a good idea.

UpCloud thankfully has a copy firewall feature that is very handy.

Copy Firewall rules option at UpCloud

After I set up the firewall I SSH’ed into my server (I use vSSH on OSX but you could use PuTTY).

I updated the Debian system with the following  command

sudo apt update

Get the MySQL Package

Visit http://repo.mysql.com/ and get the URL of the latest apt-config repo deb file (e.g “mysql-apt-config_0.8.9-1_all.deb”). Make a temp folder.

mkdir /temp
cd /temp

Download the MySQL deb Package

wget http://repo.mysql.com/mysql-apt-config_0.8.9-1_all.deb

Install the package

sudo dpkg -i mysql-apt-config_0.8.9-1_all.deb

Update the system again

sudo apt update

Install MySQL on Debian

sudo apt install mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libaio1 libatomic1 libmecab2 mysql-client mysql-common mysql-community-client mysql-community-server psmisc
The following NEW packages will be installed:
libaio1 libatomic1 libmecab2 mysql-client mysql-common mysql-community-client mysql-community-server mysql-server psmisc
0 upgraded, 9 newly installed, 0 to remove and 1 not upgraded.
Need to get 37.1 MB of archives.
After this operation, 256 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-community-client amd64 5.7.22-1debian9 [8886 kB]
Get:2 http://deb.debian.org/debian stretch/main amd64 mysql-common all 5.8+1.0.2 [5608 B]
Get:3 http://deb.debian.org/debian stretch/main amd64 libaio1 amd64 0.3.110-3 [9412 B]
Get:4 http://deb.debian.org/debian stretch/main amd64 libatomic1 amd64 6.3.0-18+deb9u1 [8966 B]
Get:5 http://deb.debian.org/debian stretch/main amd64 psmisc amd64 22.21-2.1+b2 [123 kB]
Get:6 http://deb.debian.org/debian stretch/main amd64 libmecab2 amd64 0.996-3.1 [256 kB]
Get:7 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-client amd64 5.7.22-1debian9 [12.4 kB]
Get:8 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-community-server amd64 5.7.22-1debian9 [27.8 MB]
Get:9 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-server amd64 5.7.22-1debian9 [12.4 kB]
Fetched 37.1 MB in 12s (3023 kB/s)
Preconfiguring packages ...
Selecting previously unselected package mysql-common.
(Reading database ... 34750 files and directories currently installed.)
Preparing to unpack .../0-mysql-common_5.8+1.0.2_all.deb ...
Unpacking mysql-common (5.8+1.0.2) ...
Selecting previously unselected package libaio1:amd64.
Preparing to unpack .../1-libaio1_0.3.110-3_amd64.deb ...
Unpacking libaio1:amd64 (0.3.110-3) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../2-libatomic1_6.3.0-18+deb9u1_amd64.deb ...
Unpacking libatomic1:amd64 (6.3.0-18+deb9u1) ...
Selecting previously unselected package mysql-community-client.
Preparing to unpack .../3-mysql-community-client_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-community-client (5.7.22-1debian9) ...
Selecting previously unselected package mysql-client.
Preparing to unpack .../4-mysql-client_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-client (5.7.22-1debian9) ...
Selecting previously unselected package psmisc.
Preparing to unpack .../5-psmisc_22.21-2.1+b2_amd64.deb ...
Unpacking psmisc (22.21-2.1+b2) ...
Selecting previously unselected package libmecab2:amd64.
Preparing to unpack .../6-libmecab2_0.996-3.1_amd64.deb ...
Unpacking libmecab2:amd64 (0.996-3.1) ...
Selecting previously unselected package mysql-community-server.
Preparing to unpack .../7-mysql-community-server_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-community-server (5.7.22-1debian9) ...
Selecting previously unselected package mysql-server.
Preparing to unpack .../8-mysql-server_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-server (5.7.22-1debian9) ...
Setting up libatomic1:amd64 (6.3.0-18+deb9u1) ...
Setting up psmisc (22.21-2.1+b2) ...
Setting up mysql-common (5.8+1.0.2) ...
update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up libmecab2:amd64 (0.996-3.1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up libaio1:amd64 (0.3.110-3) ...
Processing triggers for systemd (232-25+deb9u4) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up mysql-community-client (5.7.22-1debian9) ...
Setting up mysql-client (5.7.22-1debian9) ...
Setting up mysql-community-server (5.7.22-1debian9) ...
update-alternatives: using /etc/mysql/mysql.cnf to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Created symlink /etc/systemd/system/multi-user.target.wants/mysql.service -> /lib/systemd/system/mysql.service.
Setting up mysql-server (5.7.22-1debian9) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for systemd (232-25+deb9u4) ...

Secure MySQL

You can secure the MySQL server deployment (set options as needed)

sudo mysql_secure_installation

Enter password for user root:
********************************************
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No: No
Using existing password for root.
Change the password for root ? ((Press y|Y for Yes, any other key for No) : No

... skipping.
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : Yes
Success.

Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : No

... skipping.
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : Yes
- Dropping test database...
Success.

- Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : Yes
Success.

All done!

Install NGINX

I installed NGINX so that the Adminer MySQL GUI could be used.

I ran these commands to install NGINX.

sudo apt update
sudo apt upgrade
sudo apt-get install nginx

I edited my NGINX configuration as required.

  • Set a web server root
  • Set desired headers
  • Optimized NGINX (see past guides here, here and here)

I reloaded NGINX

sudo nginx -t
sudo nginx -s reload
sudo systemctl restart nginx

Install PHP

I followed this guide to install PHP on Debian.

sudo apt update
sudo apt upgrade

sudo apt install ca-certificates apt-transport-https
wget -q https://packages.sury.org/php/apt.gpg -O- | sudo apt-key add -
echo "deb https://packages.sury.org/php/ stretch main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update
sudo apt install php7.2
sudo apt install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml php7.2-cli php7.2-common

Install PHP FPM

apt-get install php7.2-fpm

Increase Upload Limits

You may need to temporarily increase upload limits in NGINX and PHP before you can restore a WordPress database (my fearby.com blog export is about 87MB). A quick way to apply and verify the change is sketched after the list below.

Add “client_max_body_size 100M;” to “/etc/nginx/nginx.conf”

Add the following to “/etc/php/7.2/fpm/php.ini”

  • post_max_size = 100M
  • upload_max_filesize = 100M
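After editing both files, here is a quick way to apply and sanity-check the change (using the same paths as above):

sudo nginx -t && sudo systemctl reload nginx
sudo systemctl restart php7.2-fpm
grep client_max_body_size /etc/nginx/nginx.conf
grep -E "^(post_max_size|upload_max_filesize)" /etc/php/7.2/fpm/php.ini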

Restore a backup of my MySQL database in MySQL

You can now use Adminer to restore your blog to MySQL. Read my post on Adminer here. I used Adminer to move my WordPress site from CPanel to a self-managed server a year ago.

First login to your source server and export your desired database then login to the target server and import the database.

Firewall Check

Don’t forget to allow your WordPress site’s 2x Public IP’s and 1x Private IP to access port 3306 in your UpCloud Firewall.

How to check open ports on your current server

sudo netstat -plunt

Set MySQL Permissions

Open MySQL

mysql --host=localhost --user=root --password=***************************************************************************

I ran these statements to grant the user logging in from the nominated IPs access to MySQL.

mysql>
GRANT ALL ON databasename.* TO [email protected] IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO [email protected] IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO [email protected] IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO [email protected] IDENTIFIED BY '***********sql*user*password*************';

Reload permissions in MySQL

FLUSH PRIVILEGES;

Allow access to the Debian machine from known IP’s

Edit “/etc/hosts.allow”

Additions (known safe IP’s that need access to this MySQL remotely).

mysqld : IPv4Server1PublicAddress : allow
mysqld : IPv4Server1PrivateAddress : allow
mysqld : IPv4Server2PublicAddress : allow
mysqld : IPv4Server2PrivateAddress : allow

mysqld : ALL : deny

Tell MySQL which IP to listen on

Edit “/etc/mysql/my.cnf”

Added..

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
language = /usr/share/mysql/English
bind-address = DebianServersIntenalIPv4Address

I guess you could also change the port to something non-standard.

Restart MySQL

sudo service mysql restart
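After the restart you can confirm MySQL is bound to the private IP rather than all interfaces (the same netstat check used earlier):

sudo netstat -plunt | grep 3306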

Install a second local firewall on Debian

Install ufw

sudo apt-get install ufw

Do add the IP of your desired server or VPN to access SSH

sudo ufw allow from 123.123.123.123 to any port 22

Do add the IP of your desired server or VPN to access WWW

sudo ufw allow from 123.123.123.123 to any port 80

Now add the known IP’s (e.g any web servers public (IPv4/IPv6) or Private IP’s) that you wish to grant access to MySQL (e.g the source website that used to have MySQL)

sudo ufw allow from 123.123.123.123 to any port 3306

Do add UpCloud DNS Servers to your firewall

sudo ufw allow from 94.237.127.9 to any port 53
sudo ufw allow from 94.237.40.9 to any port 53
sudo ufw allow from 2a04:3544:53::1 to any port 53
sudo ufw allow from 2a04:3540:53::1 to any port 53

Add all other rules as needed (if you stuff up and lock yourself out you can login to the server with the Console on UpCloud).

Restart the ufw firewall

sudo ufw disable
sudo ufw enable

Prevent MySQL starting on the source server

Now we can shut down MySQL on the source server (leave it there just in case).

Edit “/etc/init/mysql.conf”

Comment out the line that contains “start on ” and save the file

and run

sudo systemctl disable mysql

Reboot

shutdown -r now

Stop and Disable NGINX on the new DB server

We don’t need NGINX running now that the database has been imported with Adminer.

Stop and prevent NGINX from starting up on startup.

/etc/init.d/nginx stop
sudo update-rc.d -f nginx disable
sudo systemctl disable nginx

Check to see if MySQL is disabled (on the source server)

service mysql status
* mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; disabled; vendor preset: enabled)
Active: inactive (dead)

Yep

Test access to the database server in PHP code

Add to dbtest.php

<?php
// dbtest.php - test the remote MySQL connection by listing WordPress post GUIDs

//External IP (charged after quota hit)
//$servername = 'db.yourserver.com';

//Private IP (free)
$servername = '10.x.x.x';

$username = 'username';
$password = '********your*password*********';
$dbname = 'database';

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

$sql = 'SELECT guid FROM wp_posts';
$result = $conn->query($sql);

if ($result->num_rows > 0) {
    // output data of each row
    while ($row = $result->fetch_assoc()) {
        echo $row["guid"] . "<br>";
    }
} else {
    echo "0 results";
}
$conn->close();
?>
Done
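If the PHP CLI is installed you can also run the test script from the shell (adjust the path to wherever you saved dbtest.php; the output will include the HTML tags):

php -f /www-root/dbtest.php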

Check for open ports.

You can install nmap on another server and scan for open ports

Install nmap

sudo apt-get install nmap

Scan a server for open ports with nmap

You should see this on a server that has access to see port 3306 (port 3306 should not be visible to non-whitelisted IPs or to everyone).

sudo nmap -PN db.yourserver.com

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-20 14:15 UTC
Nmap scan report for db.yourserver.com (IPv4IP)
Host is up (0.0000070s latency).
Other addresses for db.yourserver.com (not scanned): IPv6IP
Not shown: 997 closed ports
PORT     STATE SERVICE
3306/tcp open  mysql

You should see something like this on a server that has access to see port 80/443 (a web server)

sudo nmap -PN yourserver.com

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-20 14:18 UTC
Nmap scan report for db.yourserver.com (IPv4IP)
Host is up (0.0000070s latency).
Other addresses for db.yourserver.com (not scanned): IPv6IP
Not shown: 997 closed ports
PORT     STATE SERVICE
80/tcp   open  http
443/tcp   open  https

I’d recommend you use a service like https://pentest-tools.com/network-vulnerability-scanning/tcp-port-scanner-online-nmap# to check for open ports.  https://hackertarget.com/tcp-port-scan/ is a great tool too.

https://www.infobyip.com/tcpportchecker.php is also a free port checker that you can use to verify individual closed ports.

Screenshot of https://www.infobyip.com/tcpportchecker.php

Hardening MySQL and Debian

Read: https://www.debian.org/doc/manuals/securing-debian-howto/ap-checklist.en.html

Configuring WordPress to use the dedicated Debian VM

On the source server that used to have MySQL edit your wp-config.php file for WordPress.

Remove

define('DB_HOST', 'localhost');

add (read the update below, I changed the DNS IP to the Private IP to have free traffic)

//Original localhost
//define('DB_HOST', 'localhost');

//New external host via DNS (Charged after quota hit)
//define('DB_HOST', 'db.fearby.com');

//New external host via Private IP (Free)
define('DB_HOST','10.x.x.x');

Restart NGINX

sudo nginx -t
sudo nginx -s reload
sudo systemctl restart nginx

Restart PHP-FPM

service php7.2-fpm restart

Conclusion

Nice, I seem to have shaved off 0.3 seconds in load times (25% improvement)

1 second GTmetrix load time

Update: Using a Private IP or Public IP between WordPress and MySQL servers

After I released this blog post (version 1.0 with no help from UpCloud) UpCloud contacted me and said the following.

Hello Simon,

I notice there's no mention of using the private network IPs. Did you know that we automagically assign you one when you deploy with our templates. The private network works out of the box without additional configuration, you can use that communicate between your own cloud servers and even across datacentres.

There's no bandwidth charge when communicating over private network, they do not go through public internet as well. With this, you can easily build high redundant setups.

Let me know if you have any other questions.

--
Kelvin from UpCloud

I have updated the references in this post to replace the public IP address (that is linked to the DNS record for db.fearby.com) with the private IP address (e.g 10.x.x.x); your server’s private IP address is listed alongside the public IPv4 and IPv6 addresses.

I checked that the local ufw firewall did indeed allow the private IP access to MySQL.

sudo ufw status numbered |grep 10.x.x.x
[27] 3306                       ALLOW IN    10.x.x.x

On my new Debian MySQL server, I edited the file /etc/mysql/my.cnf and changed the IP to the private IP and not the public IP.

Now it looked like

[mysqld]
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
language        = /usr/share/mysql/English
bind-address    = 10.x.x.x

(10.x.x.x is my Debian server’s private IP)

On my WordPress instance, I edited the file  /www-root/wp-config.php

I added the new private host

//Original localhost
//define('DB_HOST', 'localhost');

//New external host via DNS (Charged after quota hit)
//define('DB_HOST', 'db.fearby.com');

//New external host via Private IP (Free)
define('DB_HOST','10.x.x.x');

(10.x.x.x is my Debian server’s private IP)

Also, on the Debian MySQL server ensure you have granted access to the private IP of the WordPress server.

Edit /etc/hosts.allow

Add

mysqld : 10.x.x.x : allow

Restart MySQL

sudo systemctl restart mysql

TIP: Enable UpCloud Backups

Do set up automatic backups (and/or take manual backups). Backups are an extra charge but are essential IMHO.

UpCloud backups

Troubleshooting

If you can’t access MySQL log back into MySQL

mysql --host=localhost --user=root --password=***************************************************************************

and run

GRANT ALL PRIVILEGES ON *.* TO [email protected]'%' IDENTIFIED BY '***********sql*user*password*************'; FLUSH PRIVILEGES;

Reboot

Lower Upload Limits

Don’t forget to lower file upload sizes in NGINX and PHP (e.g 2M) now that the database has been restored.

I hope this guide helps someone.

TIP: Sign up to UpCloud using this link to get $25 free UpCloud VM credit.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.6 Changed public IP use to private IP use to ensure we are not charged when the server’s usage goes over the quota

v1.5 Fixed 03 typo (should have been 0.3)

v1.4 added disable nginx info

v1.3 added https://www.infobyip.com/tcpportchecker.php

v1.1 added https://hackertarget.com/tcp-port-scan/

v1.0 Initial Post

Filed Under: Debian, MySQL, VM, Wordpress Tagged With: 14, a, and, Connect, debian, dedicated, different, from, install, MySQL, Setup, Subdomain, to, vm, wordpress

Using OWASP ZAP GUI to scan your Applications for security issues

March 17, 2018 by Simon

OWASP is a non-profit that lists the Top Ten Most Critical Web Application Security Risks; they also have a Java GUI tool called OWASP ZAP that you can use to check your apps for security issues.

I have a number of guides on moving hosting away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, along with installing and managing WordPress from the command line. It is important that you always update your site and software and test them for vulnerabilities. ZAP is free and completely open source.

Disclaimer, I am not an expert (this Zap post and my past Kali Linux guide will be updated as I learn more).

OWASP Top 10

OWASP has a top 10 list of things to review.

OWASP Top 10

Download the OWASP Top 10 Application Security Risks PDF from here.

Using the free OWASP Zap Tool

Snip from https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project

“The OWASP Zed Attack Proxy (ZAP) is one of the world’s most popular free security tools and is actively maintained by hundreds of international volunteers*. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your applications. It’s also a great tool for experienced pentesters to use for manual security testing.”

Zap Overview

Here is a quick demo of Zap in action.

Do check out the official Zap videos on youtube: https://www.youtube.com/user/OWASPGLOBAL/videos if you want to learn more.

Installing Zap

Download Zap from here.

Download Zap

Download Options

Download

Download contents

Run Install

Copy the app to the OSX Applications folder

Installing

App Installed

App Installed

Open OSX’s Privacy and Security screen and click Open Anyway

Open Anyway

OWASP Zap is now Installed

Installed

Ready for a Scan

Blind Scan

But before we do let’s check out the Options

Options

OWASP ZAP allows you to label reports as being to and from anyone you want.

Report Label Options

Now let’s update the program and plugins, Click Manage Add-ons

Manage Add-ons

Click Update All to Update addons

Updates

I clicked Update All

Plugins

Installed some plugins

Marketplace

Zap is Ready

Zap

Add a site and right click on the site and you can perform an active scan or port scan.

Right click Zap

First Scan (https failed)

https failed

I enabled unsafe SSL/TLS Renegotiation.

Allow Unsafe HTTPS

This did not work and this guide said I needed to install the “Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files” from here.

Cryptography Files OSX

Then extract the files to /Library/Java/JavaVirtualMachines/%your_jdk%/Contents/Home/jre/lib/security

Extract
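For reference, the copy step looks roughly like this (a sketch; the zip and folder names are from the Oracle JCE 8 download at the time and may differ, and %your_jdk% is the placeholder used above):

cd ~/Downloads
unzip jce_policy-8.zip
sudo cp UnlimitedJCEPolicyJDK8/*.jar /Library/Java/JavaVirtualMachines/%your_jdk%/Contents/Home/jre/lib/security/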

I restarted OWASP ZAP and tried to scan my site but it appears Cloudflare (which I recently set up) was blocking my scans and returning error 403. I decided to scan another site of mine that was not on Cloudflare but had the same Let’s Encrypt style SSL cert.

fyi: I own and set up the site I queried below.

Zap Results

OWASP Zap scan performed over 800 requests and tried traversal exploits and many other checks. Do repair any major failures you find.

Zap Scan

Generating a Report

To generate a report click Report then the appropriate generation menu of choice.

Generate Report

FYI: The High Priority Alert is a false positive with an HTML item being mistaken for a CC number.

I hope this guide helps someone. Happy software/server hardening and good luck.

More Reading

Check out my Kali Linux guide.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.3 Fixed hosting typo.

v1.2 False Positive

v1.1 updated main features

v1.0 Initial post

Filed Under: Cloud, Cloudflare, Code, DNS, Exploit, Firewall, LetsEncrypt, MySQL, owasp, Secure, Security, ssl, Ubuntu Tagged With: Applications, for, gui, issues, OWASP, scan, security, to, Using, your, ZAP

Infographic: So you have an idea for an app

October 31, 2017 by Simon

I created this graphic as I was asked by multiple people how to develop an app. It does not include tips on coding, but it should help people with the non-technical prerequisites to building an app.

I hope this graphic helps someone (It’s my first infographic/decision flow image, feedback welcome).

So You Have an Idea For An App: Graphic

Click for a larger version.

Infographic-So-you-have-an-idea-for-an-app-v1-3

Standalone Image URL’s

v1.3 (22nd November 2017)
  https://fearby.com/wp-content/uploads/2017/10/Infographic-So-you-have-an-idea-for-an-app-v1-3.jpg
v1.2 (4th Nov 2017, Added requirements and MoSCoW): 
  https://fearby.com/wp-content/uploads/2017/10/Infographic-So-you-have-an-idea-for-an-app-v1-2.jpg
v1.1 (1st Nov 2017, Fixed Typos): 
  https://fearby.com/wp-content/uploads/2017/10/Infographic-So-you-have-an-idea-for-an-app-v1-1.jpg

todo: Things to add / issues to fix in v1.4:
 - Add user personas and Epic, Story and Task stages.
 - How to capture good stories (and validated ideas: landing pages/interviews/problems/value/pain points)

Define the problem(s) (pain points)

Before you start coding, do list your app requirements (problems to solve (pain points)).

Atlassian JIRA or Trello can help with this. I personally use (and like) Atlaz.io (now Hygger); I reviewed the BETA here.

Trello lists are also a simple way to capture tasks/ideas.

Read more on these here and also read my Atlaz.io BETA preview here.

Nothing beats pen and paper too.

Notepad

MoSCoW Prioritization

Must-Have, Should-Have, Could-Have and Won’t-Have are buckets you should sort ideas into. If you have trouble moving items away from Must to Should, Could or Won’t, then assign a fictitious monetary value to spend on each item; that will help you decide what is more important.

Read this MoSCoW Method article at Wikipedia: https://en.wikipedia.org/wiki/MoSCoW_method

Managing MoSCoW tasks on paper is OK if you do not want to use planning software.

More

Read my guide on how to prototype apps with Adobe XD here. You can also prototype a web app with Platforma (review here).

Read my post on how to develop software and stay on track.

Research

Do research your idea for market fit/need, competition, complexity and legal issues, and validate ideas early. It’s best to find out early that Google will quote $60,000 + tax a year to allow you to use Google Maps in your app; then you can use https://www.mapbox.com for $499 a year instead.

Do you have competition?

Some people say “don’t develop an app that already exists”. Why would you develop a new Uber app? Henry Ford did create a new transportation mode when people were happy with horses, and other car manufacturers like Tesla are still moving in on the space, so don’t be discouraged.

Landing Page

A landing page with a signup (Newsletter and Register Interest) form is a good way to validate ideas and get feedback early (I would suggest a free Mailchimp signup form on a website generated with Platforma on a $5/m server for quick results). There is no point coding and launching to crickets.

Do you have an app Prototype or Mock-Up?

This is a very important and easy step. Programs like Adobe XD CC (read my guide here) and Balsamiq can help you prototype an app; Platforma can help you prototype web apps.

Wire up a prototype

Drag and Drop

Have you validated your idea (app) with end-users?

If you don’t do this you are mad.  Watch this video to see lessons learned from Trades Cloud.

Is this app idea a hobby (passion)?

This can help you limit costs and expectations. Cheap servers exist (read here and here).

Do you have time to develop/manage this?

Developing and managing an app and planning (and paying for) development cycles can be time-consuming and mentally draining.

Can you code?

Do you need to hire developers or learn to code?  Blog post coming soon on how to hire coders.

Do you have funds?

Having funds on hand to set up and build an app is very important.

Do you want to hire developers (or get Venture Capital)?

This can help you get moving but you will have to give away a slice of the profits and/or IP, and managing mentors and VCs can be tiresome.

Have you set failure criteria (post-mortem)?

Read this page on lessons learned from over 200 startup failures, save your favourites.  Having realistic goals and limits is a wise idea, do stop when you reach preset limits.

Do you have a business case?

There are plenty of business case generator templates; you will want to document some of the following.

  • What is your apps Purpose – App X will be..
  • What is your Mission Statement – App X will..
  • Who are your Target Customers – Retail..
  • Who are the Early Adopters – Retail..
  • What Problems does your app solve – App X will..
  • What Milestones will your app go through – iOS, Android, Apple TV, Web etc..
  • What Existing solutions exist – App: A, B and C..
  • How does your app Solve your customer’s problems (pain points) – App X will..
  • How will your app Find customers – Word of Mouth, Referrals, Advertisements?
  • What is your Revenue model – Sales, Ad’s, Subscriptions?
  • What is your apps Goal statement – App X will hit X users in X?
  • What are your apps Failure points – If app X does not reach X or monthly costs reach Y….
  • What is your Marketing message – App X will..
  • What is your apps Metrics – iOS, Android, Apple TV apps..
  • What is your Unfair Advantage – Why will you succeed over others?

Are you using a project management methodology?

A proven methodology can help you develop software and stay on track; software like Atlaz, JIRA or Trello are highly recommended tools. Capturing ideas and processing feedback in tools is very important.

Before you code (or hire coders) use source code versioning software like GitHub and Bitbucket (guides here and here).  You want to retain the code and insist on owning it.

Product Goal

Simon Sinek has a good video on companies (or Products) being in a finite or infinite game.

Are you in full control of your development stack?

If you are not a developer you may not care if you are in control, but you will if there are issues with hired developers or service providers. I moved from CPanel to self-managed servers, and moved from IBM Cloudant to Digital Ocean to AWS and then Vultr servers where I can have full control over scalability, features, security and costs.

Can you forecast the costs?

Lowering cost and boosting performance is important and having spare money is a good thing.

I read recently that Tesla is burning through $6,000 a minute and is forecast to need something like 2 billion dollars in the next 2 years. Software as a Service platforms will drain your budget quickly (they do take on some risk and maintenance tasks); is this worth it?

Mark Fedin (CEO and Co-founder at Atlaz) has a great post on the topic of viability: Stop Dabbling At Startups.

Are you using the right tech?

Don’t be afraid of changing tech along the way; you may start with MySQL and move to MongoDB, Redis, Oracle or MSSQL database servers etc.

Do you have systems to capture customer feedback?

Self-explanatory, you are solving customer problems, right? You will pivot in the first year (trust me).

What is your revenue/sales model?

If you don’t know how to make money then don’t make an app (apps are expensive to code and maintain).

Are you prioritizing tasks?

I have blogged about this before, do use the tools to stay on track.

Funny Bit

Project Management Lol

Donate and make this blog better


Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.5 Fixed typos and fixed CDN link issue.

v1.4 Updated the graphic to version v1.3.

Short (Article): https://fearby.com/go2/so/

Short (Image): https://fearby.com/go2/so-img/

Filed Under: Advice, Android, App, Atlassian, AWS, Cost, Development, Digital Ocean, Feedback, Git, GitHub, JIRA, Marketing, MongoDB, MySQL, Project Management, Redis, Scalable, Software, Tech Advice, Trello, VM, Vultr Tagged With: an, app, for, have, idea, Infographic, So, you

Wordfence Security Plugin for WordPress

October 10, 2017 by Simon

WordFence is a great security plugin for WordPress that allows you to secure your WordPress installation and prevent brute force attacks, rate-limit visitors (or Bots), block banned IP’s that are accessing your site and more.

Fyi

20th Dec 2017: Wordfence report Backdoor in Captcha Plugin Affects 300K WordPress Sites

Backup

Before I started I performed a quick mysql backup from the command line to ensure my WordPress is backed up. Read my guide on installing WordPress from the command line (here) and securing Ubuntu (here).

/usr/bin/mysqldump --all-databases > /mysql-database-dump-prewordfence.sql -u 'user' -p'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

Don’t forget to backup your WordPress files.

sudo cp -R /www-root/* /backup/www-backup/

Download and extract WordFence

I downloaded and installed the Wordfence plugin via command line. I visited https://wordpress.org/plugins/wordfence/ and got the plugin URL for the latest version (e.g https://downloads.wordpress.org/plugin/wordfence.6.3.19.zip).

I downloaded the plugin zip file from the command line to my WordPress plugins folder. Read my guide (here) on managing WordPress from the command line.

cd /www-root/wp-content/plugins/
sudo wget https://downloads.wordpress.org/plugin/wordfence.6.3.19.zip
sudo unzip /www-root/wp-content/plugins/wordfence.6.3.19.zip
rm -R /www-root/wp-content/plugins/*.zip

Now the Wordfence plugin can be activated in WordPress.

Activate Wordfence

Enter your email to receive Wordfence alerts.

WordFence Email Alerts

Your Wordfence Dashboard will show local and global issues and statistics.

WordFence Dashboard

I set these default Wordfence options.

ON

More Wordfence Options

More Options

Set Permissions (for Firewall)

You may need to create a log folder (e.g /www/wp-content/wflogs/)  and set permissions to allow Wordfence to work.

cd /www-root/wp-content/
mkdir wflogs
sudo chmod -R 777 /www-root/wp-content/wflogs/

Now I can enable the Wordfence firewall via the WordFence plugin at /wp-admin/admin.php?page=WordfenceWAF

Wordfence Firewall

Don’t forget to configure the Wordfence firewall.

WordFence Firewall

Firewall Install Options

I do not have FTP setup so I’ll do a manual install based on these instructions.

WordPress Install Options

I manually added this to my ~/nginx/sites-available/default nginx config.

location ~ ^/\.user\.ini {
    deny all;
}

This did not work as specified in the official Wordfence docs (https://docs.wordfence.com/en/Web_Application_Firewall_FAQ#NGINX) so I added the following.

location ~ (\.ini) {
    return 403;
}

Accessing a test /test.user.ini file in a web browser returns a 403  (always test access)

403 Forbidden

nginx

I added this to my active php.ini configuration file.

auto_prepend_file = '/www-root/wordfence-waf.php'

I restarted PHP.

sudo systemctl restart php7.0-fpm

I added my IP to the Wordfence whitelist textbox to ensure I am not blocked: /wp-admin/admin.php?page=WordfenceSecOpt 

Tip: Grab your IPV4 address from https://ipv4.icanhazip.com/
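From a terminal, curl will return the same thing (a quick one-liner):

curl https://ipv4.icanhazip.com/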

Recent Wordfence Scan Summary (1st Scan)

Wordfence Dashboard allows you to see local and global stats.

Recent Scan

Wordfence (In Progress) Scan summary.

Scan Summary

My Issues

Wordfence alerted me that I needed to update WordPress and some plugins  (see my guide on installing and managing your WordPress via the Command Line here).

I updated my WordPress core files (via the command line).

sudo wp core update
> Success: WordPress is up to date.

I updated my  WordPress plugins (via the command line).

sudo wp plugin update --all

Output:

>Enabling Maintenance mode...
>Downloading update from https://downloads.wordpress.org/plugin/>add-to-any.1.7.19.zip...
>Unpacking the update...
>Installing the latest version...
>Removing the old version of the plugin...
>Plugin updated successfully.
>Downloading update from https://downloads.wordpress.org/plugin/>display-posts-shortcode.2.9.0.zip...
>Unpacking the update...
>Installing the latest version...
>Removing the old version of the plugin...
>Plugin updated successfully.
>Downloading update from https://downloads.wordpress.org/plugin/>wordpress-seo.5.5.1.zip...
>Unpacking the update...
>Installing the latest version...
>Removing the old version of the plugin...
>Plugin updated successfully.
>Disabling Maintenance mode...
>+-------------------------+-------------+-------------+---------+
>| name                    | old_version | new_version | status  |
>+-------------------------+-------------+-------------+---------+
>| add-to-any              | 1.7.17      | 1.7.19      | Updated |
>| display-posts-shortcode | 2.8.0       | 2.9.0       | Updated |
>| wordpress-seo           | 5.4.2       | 5.5.1       | Updated |
>+-------------------------+-------------+-------------+---------+
>Success: Updated 3 of 3 plugins.

I manually updated my WordPress theme (from my.studiopress.com website) and uploaded via SSH

 scp ~/Downloads/genesis.2.5.3.zip [email protected]:/www-root/wp-content/themes/genesis.2.5.3.zip

I could then SSH into my server and extract the theme.

cd /www-root/wp-content/themes/
unzip genesis.2.5.3.zip
rm -R genesis.2.5.3.zip

Wordfence Dashboard

Wordfence allows you to see worldwide Blocked IP’s by the Wordfence network.

IPs

You can also see local successful or failed login attempts. The Ukraine IP 91.200.12.49 tried to log in to my WordPress installation but was banned globally as it was seen unsuccessfully logging into 900 other global servers, good work Wordfence.

Failed Logins

Attacks blocked locally.

Stats

View global WordPress attacks by countries

Global Attack Stats

Wordfence Features I like

  • Finding abandoned plugins
  • See Globally banned IP’s
  • See local failed login attempts
  • Brute force protection.
  • Stats on local blocked events.
  • Identification of old files.
  • Simple reports.

Wordfence Features I don’t like

  • Your mouse must be active in the window for scans to complete/be seen.
  • Setup firewall almost requires FTP.

Wordfence: 7.0.2 update (changes listed here)

Revised Dashboard looks nice

Wordfence 702

More to come, I will update this guide over time.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.2 added info on Wordfence 7.0.2

v1.1 added info on Captcha plugin backdoor detected by Wordfence

v1.0 Initial Post


Filed Under: Cloud, DB, Firewall, Malware, MySQL, Security, VM, Vultr, Wordpress Tagged With: plugin, security, Wordfence, wordpress

How to backup an Ubuntu VM in the cloud via crontab entries that trigger Bash Scripts, SSH, rsync and email backup alerts

August 20, 2017 by Simon

Here is how I backup a number of Ubuntu servers with crontab entries, bash scripts and rsync, and send a backup email.

Read more on useful terminal commands here. Read about setting up a server for as low as $2.5 a month here, setting up a Digital Ocean Ubuntu server for as low as $5 a month here ($10 free credit), and setting up an AWS Ubuntu server here.

I have 6 numbered scripts in my scripts folder that handle backups; I call these scripts at set times via the crontab list.

fyi: Paths below have been changed for the purpose of this post (security).

1 1 * * * /bin/bash /scripts-folder/0.backupfiles.sh >> /backup-folder/0.backupfiles.log
3 1 * * * /bin/bash /scripts-folder/1.backupdbs.sh >> /backup-folder/1.backupdbs.log
5 1 * * * /bin/bash /scripts-folder/2.shrinkmysql.sh >> /backup-folder/2.shrinkmysql.log
10 1 * * * /bin/bash /scripts-folder/3.addtobackuplog.sh >> /backup-folder/3.addtobackuplog.log
11 1 * * * /bin/bash /scripts-folder/4.syncfiles.sh >> /backup-folder/4.syncfiles.log
15 1 * * * /bin/bash /scripts-folder/5.sendbackupemail.sh > /dev/null 2>&1

https://crontab.guru/ is great for specifying times to run jobs on each server (I backup one server at 1 AM, another at 2 AM etc, never at the same time).
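For reference, the five leading crontab fields are minute, hour, day of month, month and day of week. For example, this (hypothetical) entry would run the first backup script at 2:00 AM every day:

0 2 * * * /bin/bash /scripts-folder/0.backupfiles.sh >> /backup-folder/0.backupfiles.log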

Bring up your crontab list

crontab -e

Check out the Crontab schedule generator here.

Below is the contents of my /scripts-folder/0.backupfiles.sh (sensitive information removed).

I use this script to backup folders and configuration data

cat /scripts-folder/0.backupfiles.sh
#!/bin/bash

echo "Deleting old NGINX config..";
rm /backup-folder/config-nginx.zip

echo "Backing Up NGNIX..";
zip -r -9 /backup-folder/config-nginx.zip /etc/nginx/ -x "*.tmp" -x "*.temp" -x "./backup-folder/*.bak" -x "./backup-folder/*.zip"

echo "Deleting old www backup(s) ..";
#rm /backup-folder/www.zip
echo "Removing old www backup folder";
rm -R /backup-folder/www
echo "Making new backup folder at /backup-folder/www/";
mkdir /backup-folder/www

echo "Copying /www/ to /backup-folder/www/";
cp -rTv /www/ /backup-folder/www/
echo "Done copying /www/ to /backup-folder/www/";

Below is the contents of my /scripts-folder/1.backupdbs.sh (sensitive information removed).

I use this script to dump my MySQL database.

cat /scripts-folder/1.backupdbs.sh
#!/bin/bash

echo "$(date) 1.backupdbs.sh ...." >> /backup-folder/backup.log

echo "Removing old SQL backup..":
rm /backup-folder/mysql/database-dump.sql

echo "Backing up SQL";
/usr/bin/mysqldump --all-databases > /backup-folder/mysql/database-dump.sql -u 'mysqluser' -p'[email protected]$word'

echo "Done backing up the database";

Below is the contents of my /scripts-folder/2.shrinkmysql.sh (sensitive information removed).

I use this script to tar my SQL dumps as these files can be quite big

cat /scripts-folder/2.shrinkmysql.sh
#!/bin/bash

echo "$(date) 2.shrinkmysql.sh ...." >> /backup-folder/backup.log

echo "Backing up MySQL dump..";
tar -zcf /backup-folder/mysql.tgz /backup-folder/mysql/

echo "Removing old MySQL dump..";
rm /backup-folder/mysql/*.sql

Below is the contents of my /scripts-folder/3.addtobackuplog.sh (sensitive information removed).

This script is handy for dumping extra information.

cat /scripts-folder/3.addtobackuplog.sh
#!/bin/bash

echo "$(date) 3.addtobackuplog.sh ...." >> /backup-folder/backup.log

echo "Server Name.." >> /backup-folder/backup.log
grep "server_name" /etc/nginx/sites-available/default >> /backup-folder/backup.log

echo "$(date) Time" >> /backup-folder/backup.log
sudo hwclock --show  >> /backup-folder/backup.log

echo "$(date) Uptime, Load etc" >> /backup-folder/backup.log
w -i >> /backup-folder/backup.log

echo "$(date) Memory" >> /backup-folder/backup.log
free  >> /backup-folder/backup.log

echo "$(date) Disk Space" >> /backup-folder/backup.log
pydf >> /backup-folder/backup.log

echo "Firewall" >> /backup-folder/backup.log
ufw status >> /backup-folder/backup.log

echo "Adding to Backup Log file..";
echo "$(date) Nightly MySQL Backup Successful....." >> /backup-folder/backup.log

Below is the contents of my /scripts-folder/4.syncfiles.sh (sensitive information removed).

This script is the workhorse routine that rsyncs files from the source to the backup server (a dedicated Vultr server with an A record attaching the server to my domain).

I installed sshpass to pass in the SSH user password once SSH is connected (authorized_keys set); I tried to set up an rsync daemon but had no luck. I ensured appropriate ports were opened on the source (OUT 22, 873) and backup server (IN 22, 873).

cat /scripts-folder/4.syncfiles.sh
#!/bin/bash

echo "$(date) 4.syncfiles.sh ...." >> /backup-folder/backup.log
echo "Syncing Files.";

sudo sshpass -p 'Y0urW0rkingSSHR00tPa$0ord' rsync -a -e  'ssh -p 22 ' --progress -P /backup-folder backup-server.yourdomain.com:/backup-folder/1.www.server01.com/

ufw firewall has great rules for allowing certain IP’s to talk on ports.

Set Outbound firewall rules (to certain IP's)

sudo ufw allow out from 123.123.123.123 to any port 22

Change 123.123.123.123 to your backup server.

Set Inbound firewall rules (from certain IP's)

sudo ufw allow from 123.123.123.123 to any port 22

Change 123.123.123.123 to your sending server.

You can and should setup rate limits on IP’s hitting certain ports.

sudo ufw limit 22 comment 'Rate limit for this port has been reached'

Install Fail2Ban to automatically ban repeat offenders. Fail2Ban reads log files that contain password failure reports and bans the corresponding IP addresses using firewall rules. Read more on securing Ubuntu in the cloud here.
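A minimal install sketch (the Debian/Ubuntu package typically ships with an SSH jail enabled by default; tuning the jails is out of scope here):

sudo apt-get install fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
sudo fail2ban-client status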

Below is the contents of my /scripts-folder/5.sendbackupemail.sh (sensitive information removed).

This script sends an email and attaches a zip file of all log files generated through the backup process.

cat /scripts/5.sendbackupemail.sh
#!/bin/bash

echo "$(date) 5.sendbackupemail.sh ...." >> /backup-folder/backup.log

echo "Zipping up log Files.";

zip -r -9 /backup-folder/backup-log.zip /backup-folder/*.log

echo "Sending Email";
sendemail -f [email protected] -t [email protected] -u "Backup Alert" -m "server01 has been backed up" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp Y0urGSu1tePasswordG0e$Here123 -a /backup-folder/backup-log.zip

Read my guide on setting up sendmail here.

Security Considerations

You should never store passwords in scripts that open SSH connections, create MySQL dumps or talk to email servers; I will update this guide when I solve all of these cases (one approach for MySQL is sketched below). Also, create the least access required for user accounts where possible.
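One common approach for the MySQL password (a sketch, not part of the scripts above; it assumes the cron jobs run as root and uses placeholder credentials) is a MySQL option file, so the password never appears in the script or in the process list:

# store the credentials once, readable only by root
cat > /root/.my.cnf <<'EOF'
[client]
user=mysqluser
password=YourPasswordHere
EOF
chmod 600 /root/.my.cnf

# mysqldump (and mysql) will now read the credentials from /root/.my.cnf
/usr/bin/mysqldump --all-databases > /backup-folder/mysql/database-dump.sql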

Target Server Configuration

Also, you can see in /scripts-folder/4.syncfiles.sh that I am saving to the ‘/backup-folder/1.www.server01.com/’ folder; you can make as many folders as you want to make the most of the backup server. I would advise you not to use the server for anything else like web servers and apps as this server is holding important stuff.

backup-server.yourdomain.com:/backup-folder/1.www.server01.com/

I have a handy script to delete and recreate all backup folders (useful during testing).

#!/bin/bash

echo "Deleting Backup Folders..........................................";

echo " Deleting /backup-folder/1.www.server01.com";
rm -R /backup-folder/1.www.server01.com

echo " Deleting /backup-folder/2.www.server02.com";
rm -R /backup-folder/2.www.server02.com

echo " Deleting /backup-folder/3.www.server03.com";
rm -R /backup-folder/3.www.server03.com

echo " Deleting /backup-folder/4.www.server04.com";
rm -R /backup-folder/4.www.server04.com

echo " Deleting /backup-folder/5.www.server05.com";
rm -R /backup-folder/5.www.server05.com

echo " Deleting /backup-folder/6.www.server06.com";
rm -R /backup-folder/6.www.server06.com

echo " Deleting /backup-folder/7.www.server07.com";
rm -R /backup-folder/7.www.server07.com

echo " Deleting /backup-folder/8.www.server08.com";
rm -R /backup-folder/8.www.server08.com

echo "
";

echo "Creating Backup Folders.........................................";

echo " Making folder /backup-folder/1.www.server01.com";
mkdir /backup-folder/1.www.server01.com

echo " Making folder /backup-folder/2.www.server02.com";
mkdir /backup-folder/2.www.server02.com

echo " Making folder /backup-folder/3.www.server03.com";
mkdir /backup-folder/3.www.server03.com

echo " Making folder /backup-folder/4.www.server04.com";
mkdir /backup-folder/4.www.server04.com

echo " Making folder /backup-folder/5.www.server05.com";
mkdir /backup-folder/5.www.server05.com

echo " Making folder /backup-folder/6.www.server06.com";
mkdir /backup-folder/6.www.server06.com

echo " Making folder /backup-folder/7.www.server07.com";
mkdir /backup-folder/7.www.server07.com

echo " Making folder /backup-folder/8.www.server08.com";
mkdir /backup-folder/8.www.server08.com

echo "
";

echo "Backup Folder Contents.........................................";
ls /backup-folder -al
echo "
";

echo "Folder Structure...............................................";
cd /backup-folder
pwd
tree -a -f -p -h  -l -R

echo "
";

echo "How big is the backup folder...................................";
du -hs /backup-folder

echo "
";

echo "Done...........................................................";

Ensure your backup server is just for backups and only allows traffic from known IP’s

ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       123.123.123.123
22                         ALLOW       123.123.123.124
22                         ALLOW       123.123.123.125
22                         ALLOW       123.123.123.126
22                         ALLOW       123.123.123.127
22                         ALLOW       123.123.123.128
22                         ALLOW       123.123.123.129
22                         ALLOW       123.123.123.130
53                         ALLOW       Anywhere

22                         ALLOW OUT   123.123.123.123
22                         ALLOW OUT   123.123.123.124
22                         ALLOW OUT   123.123.123.125
22                         ALLOW OUT   123.123.123.126
22                         ALLOW OUT   123.123.123.127
22                         ALLOW OUT   123.123.123.128
22                         ALLOW OUT   123.123.123.129
22                         ALLOW OUT   123.123.123.130

Change the 123.x.x.x addresses to your servers’ IPs.
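
Rules like the ones above can be added with commands along these lines (repeat per server IP; 123.123.123.123 is a placeholder):

# Allow SSH in and out only for a known server IP
sudo ufw allow from 123.123.123.123 to any port 22
sudo ufw allow out to 123.123.123.123 port 22
# Allow DNS
sudo ufw allow 53
sudo ufw enable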

Tip: Keep an eye on the backups with tools like ncdu

sudo ncdu /backup-folder
ncdu 1.11 ~ Use the arrow keys to navigate, press ? for help
--- /backup ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    1.0 GiB [##########] /6.www.server01.com
  462.1 MiB [####      ] /1.www.server02.com
  450.1 MiB [####      ] /5.www.server03.com
   60.1 MiB [          ] /2.www.server04.com
  276.0 KiB [          ] /3.www.server05.com
  276.0 KiB [          ] /4.www.server06.com
e   4.0 KiB [          ] /8.www.server07.com
e   4.0 KiB [          ] /7.www.server08.com
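
ncdu is in the Ubuntu repositories if you don’t have it yet:

sudo apt-get install ncdu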

Installing sshpass on OSX

If you want to back up to this server from OSX you will need to install sshpass.

curl -O -L http://downloads.sourceforge.net/project/sshpass/sshpass/1.06/sshpass-1.06.tar.gz && tar xvzf sshpass-1.06.tar.gz
cd sshpass-1.06
./configure
sudo make install

sshpass should be installed

sshpass -V
sshpass 1.06
(C) 2006-2011 Lingnu Open Source Consulting Ltd.
(C) 2015-2016 Shachar Shemesh
This program is free software, and can be distributed under the terms of the GPL
See the COPYING file for more information.

Using "assword" as the default password prompt indicator.

I have not got sshpass working yet (error: “Host key verification failed.”). I had to remove the bad known host entry from ~/.ssh/known_hosts on OSX.
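
A stale entry can also be removed with ssh-keygen, and once the host key issue is sorted, sshpass can wrap the rsync command. A sketch (hostname, password and paths are placeholders; keeping a password in a command is less secure than SSH keys):

# Remove the old host key entry for the backup server from ~/.ssh/known_hosts
ssh-keygen -R backup-server.yourdomain.com

# Feed the password to rsync over SSH with sshpass
sshpass -p 'YourPasswordHere' rsync -a -e 'ssh -p 22' --progress -P ~/Desktop user@backup-server.yourdomain.com:/backup-folder/8.Mac/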

But this worked on OSX

rsync -a -e 'ssh -p 22 ' --progress -P ~/Desktop [email protected]:/backup/8.Mac/

Note: Enter the server’s username (the user@ part) before the hostname, or rsync will use the logged-in OSX username.

Don’t forget to check the backup server’s disk usage often.

disk usage screenshot
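
Besides ncdu, a quick terminal check of free space and per-folder usage works too:

df -h /backup-folder
du -hs /backup-folder/*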

Output from backing up an incremental update (1x new folder)

localhost:~ local-account$ rsync -a -e  'ssh -p 22 ' --progress -P /Users/local-account/folder-to-backup [email protected]:/backup/the-computer/
[email protected]'s password: 
building file list ... 
51354 files to consider
folder-to-backup/
folder-to-backup/TestProject/
folder-to-backup/TestProject/.git/
folder-to-backup/TestProject/.git/COMMIT_EDITMSG
          15 100%    0.00kB/s    0:00:00 (xfer#1, to-check=16600/51354)
folder-to-backup/TestProject/.git/HEAD
          23 100%   22.46kB/s    0:00:00 (xfer#2, to-check=16599/51354)
folder-to-backup/TestProject/.git/config
         137 100%  133.79kB/s    0:00:00 (xfer#3, to-check=16598/51354)
folder-to-backup/TestProject/.git/description
          73 100%   10.18kB/s    0:00:00 (xfer#4, to-check=16597/51354)
folder-to-backup/TestProject/.git/index
        1581 100%  220.56kB/s    0:00:00 (xfer#5, to-check=16596/51354)
folder-to-backup/TestProject/.git/hooks/
folder-to-backup/TestProject/.git/hooks/README.sample
         177 100%   21.61kB/s    0:00:00 (xfer#6, to-check=16594/51354)
folder-to-backup/TestProject/.git/info/
folder-to-backup/TestProject/.git/info/exclude
          40 100%    4.88kB/s    0:00:00 (xfer#7, to-check=16592/51354)
folder-to-backup/TestProject/.git/logs/
folder-to-backup/TestProject/.git/logs/HEAD
         164 100%   20.02kB/s    0:00:00 (xfer#8, to-check=16590/51354)
folder-to-backup/TestProject/.git/logs/refs/
folder-to-backup/TestProject/.git/logs/refs/heads/
folder-to-backup/TestProject/.git/logs/refs/heads/master
         164 100%   20.02kB/s    0:00:00 (xfer#9, to-check=16587/51354)
folder-to-backup/TestProject/.git/objects/
folder-to-backup/TestProject/.git/objects/05/
folder-to-backup/TestProject/.git/objects/05/0853a802dd40cad0e15afa19516e9ad94f5801
        2714 100%  294.49kB/s    0:00:00 (xfer#10, to-check=16584/51354)
folder-to-backup/TestProject/.git/objects/11/
folder-to-backup/TestProject/.git/objects/11/729e81fc116908809fc17d60c8604aa43ec095
         105 100%   11.39kB/s    0:00:00 (xfer#11, to-check=16582/51354)
folder-to-backup/TestProject/.git/objects/23/
folder-to-backup/TestProject/.git/objects/23/768a20baaf8aa0c31b0e485612a5e245bb570d
         131 100%   12.79kB/s    0:00:00 (xfer#12, to-check=16580/51354)
folder-to-backup/TestProject/.git/objects/27/
folder-to-backup/TestProject/.git/objects/27/3375fc70381bd2608e05c03e00ee09c42bdc58
         783 100%   76.46kB/s    0:00:00 (xfer#13, to-check=16578/51354)
folder-to-backup/TestProject/.git/objects/2a/
folder-to-backup/TestProject/.git/objects/2a/507ef5ea3b1d68c2d92bb4aece950ef601543e
         303 100%   26.90kB/s    0:00:00 (xfer#14, to-check=16576/51354)
folder-to-backup/TestProject/.git/objects/2b/
folder-to-backup/TestProject/.git/objects/2b/f8bd93d56787a7548c7f8960a94f05c269b486
         136 100%   12.07kB/s    0:00:00 (xfer#15, to-check=16574/51354)
folder-to-backup/TestProject/.git/objects/2f/
folder-to-backup/TestProject/.git/objects/2f/900764e9d12d8da7e5e01ba34d2b7b2d95ffd4
         209 100%   17.01kB/s    0:00:00 (xfer#16, to-check=16572/51354)
folder-to-backup/TestProject/.git/objects/36/
folder-to-backup/TestProject/.git/objects/36/d2c80d8893178d7e1f2964085b273959bfdc28
         201 100%   16.36kB/s    0:00:00 (xfer#17, to-check=16570/51354)
folder-to-backup/TestProject/.git/objects/3d/
folder-to-backup/TestProject/.git/objects/3d/e5a02083dbe9c23731a38901dca9e913c04dd0
         130 100%   10.58kB/s    0:00:00 (xfer#18, to-check=16568/51354)
folder-to-backup/TestProject/.git/objects/40/
folder-to-backup/TestProject/.git/objects/40/40592d8d4d886a5c81e1369ddcde71dd3b66b5
         841 100%   63.18kB/s    0:00:00 (xfer#19, to-check=16566/51354)
folder-to-backup/TestProject/.git/objects/87/
folder-to-backup/TestProject/.git/objects/87/60f48ddbc9ed0863e3fdcfce5e4536d08f9b8d
          86 100%    6.46kB/s    0:00:00 (xfer#20, to-check=16564/51354)
folder-to-backup/TestProject/.git/objects/a9/
folder-to-backup/TestProject/.git/objects/a9/e6a23fa34a5de4cd36250dc0d797439d85f2ea
         306 100%   22.99kB/s    0:00:00 (xfer#21, to-check=16562/51354)
folder-to-backup/TestProject/.git/objects/b0/
folder-to-backup/TestProject/.git/objects/b0/4364089fdc64fe3b81bcd41462dd55edb7a001
          57 100%    4.28kB/s    0:00:00 (xfer#22, to-check=16560/51354)
folder-to-backup/TestProject/.git/objects/be/
folder-to-backup/TestProject/.git/objects/be/3b93d6d8896d69670f1a8e26d1f51f9743d07e
          60 100%    4.19kB/s    0:00:00 (xfer#23, to-check=16558/51354)
folder-to-backup/TestProject/.git/objects/d0/
folder-to-backup/TestProject/.git/objects/d0/524738680109d9f0ca001dad7c9bbf563e898e
         523 100%   36.48kB/s    0:00:00 (xfer#24, to-check=16556/51354)
folder-to-backup/TestProject/.git/objects/d5/
folder-to-backup/TestProject/.git/objects/d5/4e024fe16b73e5602934ef83e0b32a16243a5e
          69 100%    4.49kB/s    0:00:00 (xfer#25, to-check=16554/51354)
folder-to-backup/TestProject/.git/objects/db/
folder-to-backup/TestProject/.git/objects/db/3f0ce163c8033a175d27de6a4e96aadc115625
          59 100%    3.84kB/s    0:00:00 (xfer#26, to-check=16552/51354)
folder-to-backup/TestProject/.git/objects/df/
folder-to-backup/TestProject/.git/objects/df/cad4828b338206f0a7f18732c086c4ef959a7b
          51 100%    3.32kB/s    0:00:00 (xfer#27, to-check=16550/51354)
folder-to-backup/TestProject/.git/objects/ef/
folder-to-backup/TestProject/.git/objects/ef/e6d036f817624654f77c4a91ae6f20b5ecbe9d
          94 100%    5.74kB/s    0:00:00 (xfer#28, to-check=16548/51354)
folder-to-backup/TestProject/.git/objects/f2/
folder-to-backup/TestProject/.git/objects/f2/b43571ec42bad7ac43f19cf851045b04b6eb29
         936 100%   57.13kB/s    0:00:00 (xfer#29, to-check=16546/51354)
folder-to-backup/TestProject/.git/objects/fd/
folder-to-backup/TestProject/.git/objects/fd/f3f97d1b6e9d8d29bb69a88c4d89ca752bd937
         807 100%   49.26kB/s    0:00:00 (xfer#30, to-check=16544/51354)
folder-to-backup/TestProject/.git/objects/info/
folder-to-backup/TestProject/.git/objects/pack/
folder-to-backup/TestProject/.git/refs/
folder-to-backup/TestProject/.git/refs/heads/
folder-to-backup/TestProject/.git/refs/heads/master
          41 100%    2.50kB/s    0:00:00 (xfer#31, to-check=16539/51354)
folder-to-backup/TestProject/.git/refs/tags/
folder-to-backup/TestProject/TestProject.xcodeproj/
folder-to-backup/TestProject/TestProject.xcodeproj/project.pbxproj
       11476 100%  659.24kB/s    0:00:00 (xfer#32, to-check=16536/51354)
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/contents.xcworkspacedata
         156 100%    8.96kB/s    0:00:00 (xfer#33, to-check=16534/51354)
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/xcuserdata/
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/xcuserdata/simon.xcuserdatad/
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/xcuserdata/simon.xcuserdatad/UserInterfaceState.xcuserstate
        8190 100%  470.47kB/s    0:00:00 (xfer#34, to-check=16531/51354)
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/simon.xcuserdatad/
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/simon.xcuserdatad/xcschemes/
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/simon.xcuserdatad/xcschemes/TestProject.xcscheme
        3351 100%  192.50kB/s    0:00:00 (xfer#35, to-check=16527/51354)
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/simon.xcuserdatad/xcschemes/xcschememanagement.plist
         483 100%   27.75kB/s    0:00:00 (xfer#36, to-check=16526/51354)
folder-to-backup/TestProject/TestProject/
folder-to-backup/TestProject/TestProject/AppDelegate.swift
        2172 100%  117.84kB/s    0:00:00 (xfer#37, to-check=16524/51354)
folder-to-backup/TestProject/TestProject/Info.plist
        1442 100%   78.23kB/s    0:00:00 (xfer#38, to-check=16523/51354)
folder-to-backup/TestProject/TestProject/ViewController.swift
         505 100%   27.40kB/s    0:00:00 (xfer#39, to-check=16522/51354)
folder-to-backup/TestProject/TestProject/Assets.xcassets/
folder-to-backup/TestProject/TestProject/Assets.xcassets/AppIcon.appiconset/
folder-to-backup/TestProject/TestProject/Assets.xcassets/AppIcon.appiconset/Contents.json
        1077 100%   58.43kB/s    0:00:00 (xfer#40, to-check=16519/51354)
folder-to-backup/TestProject/TestProject/Base.lproj/
folder-to-backup/TestProject/TestProject/Base.lproj/LaunchScreen.storyboard
        1740 100%   94.40kB/s    0:00:00 (xfer#41, to-check=16517/51354)
folder-to-backup/TestProject/TestProject/Base.lproj/Main.storyboard
        1695 100%   91.96kB/s    0:00:00 (xfer#42, to-check=16516/51354)

sent 1243970 bytes  received 1220 bytes  75466.06 bytes/sec
total size is 10693902652  speedup is 8588.17

Update with no files to upload

localhost:~ local-account$ rsync -a -e  'ssh -p 22 ' --progress -P /Users/local-account/folder-to-backup [email protected]:/backup/the-computer/
[email protected]'s password: 
building file list ... 
51354 files to consider

sent 1198459 bytes  received 20 bytes  82653.72 bytes/sec
total size is 10693902652  speedup is 8922.90

Backing up is easy:

rsync -a -e  'ssh -p 22 ' --progress -P /Users/local-account/folder-to-backup [email protected]:/backup/the-computer/

If you want incremental and full backups try Duplicity.
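
As a rough idea only (I have not covered Duplicity here), a full backup followed by incrementals to the same backup server over SFTP looks along these lines, with placeholder paths and hostname (Duplicity will ask for a GPG passphrase unless you pass --no-encryption):

# Sketch: initial full backup, then incremental runs, to the backup server over SFTP
duplicity full /Users/local-account/folder-to-backup sftp://user@backup-server.yourdomain.com//backup-folder/8.Mac
duplicity incremental /Users/local-account/folder-to-backup sftp://user@backup-server.yourdomain.com//backup-folder/8.Mac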

Hope this helps.


v1.7 Duplicity

Filed Under: Advice, AWS, Backup, Cloud, Development, Digital Ocean, Domain, Firewall, MySQL, Networking, Security, Share, Transfer, Ubuntu, VM, Vultr Tagged With: Backup, bash script, rsync, send email, server

Moving WordPress to a new self managed server away from CPanel

August 12, 2017 by Simon

I recently moved my domain from a CPanel-hosted domain (and email to Google G Suite, my guide here) to a self-managed Digital Ocean server (my LetsEncrypt guide here, my Digital Ocean guide here, my AWS setup guide here, my Vultr setup guide here) and needed to transfer my WordPress site.

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

I had issues with a CPanel domain on the host (Uber/NetRegistry) as they were migrating domains, and the NetRegistry chat rep said I needed to phone Uber for support. No thanks, I’m going self-managed and saving a dollar.

I backed up my old WordPress site files with FTP (along with my database as an SQL export), zipped everything up and uploaded it to a public location, as I did not want to set up FTP on my new site. More info on this here.
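
For the archive itself, something along these lines on the old host does the job (paths are placeholders for wherever CPanel kept the site files and the SQL export):

# Sketch: zip up the WordPress files and the database export before uploading
zip -r wordpress-archive.zip public_html/blog/ wordpress-backup.sql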

cd /www/blog/
curl -o tmp.zip http://www.serverwhereiuploadedthefile.com/wordpress-archive.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  914M  100  914M    0     0  6176k      0  0:02:31  0:02:31 --:--:-- 8053k

This downloaded my ~1GB backup zip.

I installed unzip on Ubuntu.

sudo apt-get install unzip

I unzipped the WordPress instance with Unzip (my blog lives under a blog folder).

unzip tmp.zip -d /www/blog

I created an /index.html file to redirect people to the blog folder.

<html>
<head>
<meta http-equiv="Refresh" content="1; url=http://simon.fearby.com/blog/">
</head>
<body>Loading <a href="http://thesubdomain.thedomain.com/blog/">http://thesubdomain.thedomain.com/blog/</a></body></html>

I updated my NGINX configuration to ensure WordPress worked.

location /blog/ {
        try_files $uri $uri/ /index.php?q=$uri&$args;
        index index.php index.html index.htm;
        proxy_set_header Proxy "";
}

I checked my databases with mysql (command line) to ensure MySQL was up.

mysql -u root -p
Enter password: ************************************
mysql> SHOW DATABASES;

I uploaded Adminer (https://www.adminer.org/) to my site, opened my wp-config.php to get the database name and username, then created the user and assigned it to the database in Adminer.
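
The equivalent at the mysql command line (the database name, user and password are placeholders; use the real values from wp-config.php) is roughly:

mysql> CREATE DATABASE wordpressdb;
mysql> CREATE USER 'wordpressuser'@'localhost' IDENTIFIED BY 'password-from-wp-config';
mysql> GRANT ALL PRIVILEGES ON wordpressdb.* TO 'wordpressuser'@'localhost';
mysql> FLUSH PRIVILEGES;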

I had a SQL backup of my site, but from experience I knew I had to modify the SQL file before importing it (replace MyISAM with InnoDB).
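
A one-liner can make that swap before importing (the dump filename is a placeholder; sed keeps a .bak copy of the original):

# Sketch: change the storage engine in the SQL dump before importing it
sed -i.bak 's/ENGINE=MyISAM/ENGINE=InnoDB/g' wordpress-backup.sql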

You can read more about me previously moving a CPanel domain with email to a self managed VPS and Gmail.

I tried to import my 7MB SQL file into the database with Adminer.

But it kept timing out (I suspect PHP upload limits were set to 2MB).

I found my php.ini files (four of them). Running the which php command did not point me to the php.ini in use, so I searched for them all:

find / -name php.ini
/etc/php/7.0/apache2/php.ini
/etc/php/7.0/fpm/php.ini
/etc/php/7.0/cli/php.ini
/etc/php/7.0/phpdbg/php.ini

I had to make the following php.ini changes to allow larger uploads when restoring with the Adminer utility (the default is 2MB).

post_max_size = 20M
upload_max_filesize = 20M
 
# do change back to 2MB after you restore the files to prevent DOS attacks

I had to make the following changes to nginx.conf (to prevent 404 errors on the database upload)

client_max_body_size 20M;
# client_max_body_size 2M; Reset when done

I restarted nginx and php

nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart

I was now able to import the database

I navigated to the WordPress site and all was good (it even had SSL, as I set that up on the new server).

Conclusion

I now have a faster WordPress site, with SSL, on a server I manage.


v1.1 NetRegistry info

Filed Under: Backup, MySQL, Transfer, Wordpress Tagged With: move, wordpress
