
HomePi – Raspberry PI powered touch screen showing information from house-wide sensors

March 14, 2022 by Simon

This post is a work in progress (14/3/2022, v0.9.63 – PCBs v0.2 designed and ordered).

Summary

After watching this video from Jeff Geerling (demonstrating how to build an Air Quality Sensor) I decided to make two. But why not build something bigger?

I want to make a Raspberry Pi server with a touch screen to receive data from a dozen other WeMos sensors that I will build.

The Plan

Below is a rough plan of what I am building.

In a nutshell, it will be 20x WeMos sensors recording data.

Picture of 20x WeMos sensors, a weather station and CO2 sensors talking to an API that saves to MySQL, with MySQL then being read by a webpage and a touch screen panel

I ordered all the parts from Amazon, BangGood, AliExpress, eBay, Core Electronics and Kogan.

Fresh Bullseye Install (Buster upgrade failed)

On 21/11/2021 I tried to manually update Buster to Bullseye (without taking a backup first, bad idea). I followed this guide to reinstall Raspbian from scratch (this time with Bullseye).

Storage Type

Before I begin I need to decide on what storage media to use on the Raspberry Pi. I hate how unreliable and slow MicroSD cards are. I tried using an old 128GB SATA SSD, a 1TB magnetic hard drive, a SATA M.2 SSD and an NVMe M.2 drive in a USB caddy.

I decided to use a spare 250GB SATA-based M.2 solid state drive from my son's PC in a Geekworm X862 SATA M.2 expansion board.

With this board I can bolt the M.2 solid state drive onto the expansion board under the Pi and power it from the Raspberry Pi USB port.

Nice and tidy

I zip-tied a fan to the side of the boards to add a little extra airflow over the solid state drive

32Bit, 64Bit, Linux or Windows

Before I begin I set up Raspbian on an empty Micro SD card (just to boot it up and flash the firmware to the latest version). This is very easy and documented elsewhere. I needed the latest firmware to ensure booting from a USB drive (not the Micro SD card) was working.

I ran rpi-update and flashed the latest firmware onto my Raspberry Pi. Good write-up here.
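For reference, the firmware update boils down to a single command (run at your own risk; rpi-update installs the latest, potentially pre-release, firmware):

sudo rpi-update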

When my Raspberry Pi had the latest firmware I used the Raspberry Pi Imager to install the 32 Bit Raspberry Pi OS.

I do have an 8GB Raspberry Pi 4 B, and 64-bit operating systems do exist, but I stuck with 32-bit for compatibility.

Ubuntu 64bit for Raspberry Pi Links

  • Install Ubuntu on a Raspberry Pi | Ubuntu
    • Server Setup Links
      • How to install Ubuntu Server on your Raspberry Pi | Ubuntu
    • Desktop Setup Links
      • How to install Ubuntu Desktop on Raspberry Pi 4 | Ubuntu

Windows 10 for Raspberry Pi Links
https://docs.microsoft.com/en-us/windows/iot-core/tutorials/quickstarter/prototypeboards
https://docs.microsoft.com/en-us/answers/questions/492917/how-to-install-windows-10-iot-core-on-raspberry-pi.html
https://docs.microsoft.com/en-us/windows/iot/iot-enterprise/getting_started

Windows 11 for Raspberry Pi Links
https://www.youtube.com/user/leepspvideo
https://www.youtube.com/watch?v=WqFr56oohCE
https://www.worproject.ml

Setting up the Raspberry Pi Server

These are the steps I used to set up my Pi

Dedicated IP

Before I began I ran ifconfig on my Pi to obtain my Raspberry Pi's wireless card's MAC address. I logged into my router and set up a dedicated IP (192.168.0.50); this way the Pi's IP address remains the same.
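For reference, the MAC address can be read straight from the terminal (wlan0 is an assumption; your wireless interface name may differ):

ifconfig wlan0
ip link show wlan0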

Hostname

I set my hostname by editing these files

sudo nano /etc/hosts
sudo nano /etc/hostname

I verified my hostname with this command

hostname

I verified my IP address with this command

hostname -I

Samba Share

I set up the Samba service to allow me to copy files to and from the Pi

sudo apt-get install samba samba-common-bin
sudo apt-get update

I made a folder to share files

 mkdir ~/share

I edited the Samba config file

sudo nano /etc/samba/smb.conf

In the config file I set my workgroup settings


workgroup = Hyrule
wins support = yes

I defined a share at the bottom of the config file (and saved)

[PiShare]
comment=Raspberry Pi Share
path=/home/pi/share
browseable=Yes
writeable=Yes
only guest=no
create mask=0777
directory mask=0777
public=no

I set an SMB password

sudo smbpasswd -a pi
New SMB password: ********
Retype new SMB password: ********
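After saving the config and setting the password I restarted the Samba service so the new share was picked up (smbd is the usual service name on Raspberry Pi OS):

sudo systemctl restart smbd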

I tested the share from a Windows PC

And the share is accessible on the Raspberry Pi

Great, now I can share files with drag and drop (instead of via SCP)

Mono

I know how to code C# Windows executables; I have 25 years of experience. I do not want to learn Java or Python to code a GUI application for a touch screen if I can avoid it.

I set up Mono from Home | Mono (mono-project.com) to be able to run Windows C# EXEs on Raspbian

sudo apt install apt-transport-https dirmngr gnupg ca-certificates

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb https://download.mono-project.com/repo/debian stable-raspbianbuster main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list

sudo apt update

sudo apt install mono-devel

I copied an EXE I wrote in C# on Windows and ran it with Mono

sudo mono ~/HelloWorld.exe
Exe Test OK

This worked.

Nginx Web Server

I Installed NginX and configured it

sudo apt-get install nginx

I created a /www folder for nginx

sudo mkdir /www

I created a place-holder file in the www root

sudo nano /www/index.html

I set permissions to allow Nginx to access /www

sudo chown -R www-data:www-data /www

I edited the NginX config as required

sudo nano /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/nginx.conf 

I tested and reloaded the nginx config


sudo nginx -t
sudo nginx -s reload

I started NginX

sudo systemctl start nginx

I tested nginx in a web browser

NodeJS/NPM

I installed NodeJS

sudo apt update
sudo apt install nodejs npm -y

I verified Node was installed

nodejs --version
> v12.22.5

PHP

I installed PHP

sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg

echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update

sudo apt install -y php8.0-common php8.0-cli php8.0-xml

I verified PHP with this command

php --version

> PHP 8.0.13 (cli) (built: Nov 19 2021 06:40:53) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.13, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.13, Copyright (c), by Zend Technologies

I installed PHP-FPM

sudo apt-get install php8.0-fpm

I verified the PHP FPM sock was available before adding it to the NGINX Config

sudo ls /var/run/php*/**.sock
> /var/run/php/php8.0-fpm.sock  /var/run/php/php-fpm.sock
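For reference, this is roughly the PHP block I point NGINX at (inside the server block); a minimal sketch assuming the php8.0-fpm.sock path shown above and the standard Debian snippets file:

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
}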

I reviewed PHP Settings

sudo nano /etc/php/8.0/cli/php.ini 
sudo nano /etc/php/8.0/fpm/php.ini

I created a /www/ppp.php file with this content

<?php
  phpinfo(); // all info
  // module info, phpinfo(8) identical
  phpinfo(INFO_MODULES);
?>

PHP is working

PHP Test OK

I changed these php.ini settings (fine for local development).

max_input_vars = 1000
memory_limit = 1024M
upload_max_filesize = 20M
post_max_size = 20M
display_errors = on

MySQL Database

I installed MariaDB

sudo apt install mariadb-server

I updated my pi password

passwd

I ran the Secure MariaDB Program

sudo mysql_secure_installation

After working through each prompt I wanted to run mysql as root to test MySQL
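A quick test looks like this (MariaDB on Raspberry Pi OS defaults to unix_socket authentication for root, so sudo is enough):

sudo mysql -u root
MariaDB [(none)]> SHOW DATABASES;
MariaDB [(none)]> EXIT;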

PHPMyAdmin

I installed phpMyAdmin to be able to edit MySQL databases via the web

I followed this guide to set up phpMyAdmin via lighttpd and then via nginx

I then logged into MySQL, set user permissions, created a test database and changed settings as required.

NginX to NodeJS API Proxy

I edited my NGINX config to create a proxy to the NodeJS API
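As a minimal sketch, the proxy block looks something like this; it assumes the NodeJS API listens on port 3000 on the Pi (the port is my assumption, the sensors only ever see port 80 on 192.168.0.50):

location /api/ {
    proxy_pass http://127.0.0.1:3000;   # assumed NodeJS port
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}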

Test Webpage/API

Todo

I installed PM2, the NodeJS process manager

sudo npm install -g pm2 

Node apps can be started as a service from the CLI

pm2 start api_v1.js

PM2 status

pm2 status

You can delete node apps from PM2 (if desired)

pm2 delete api_v1.js
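To have PM2 bring the app back after a reboot, it can register itself as a service (standard PM2 commands; follow the instructions the first command prints):

pm2 startup
pm2 save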

Sending Email from CLI

I set up sendemail to allow emails to be sent from the CLI with these commands

 sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail  

I logged into my GSuite account and set up an alias and an app password to use.

Now I can send emails from the CLI

sudo sendemail -f [email protected] -t [email protected] -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp **************

I added this to a Bash script ("/Scripts/Up.sh") and scheduled an event to send an email every 6 hours, as sketched below.
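The script is essentially the sendemail command above wrapped in Bash, with a cron entry for the schedule (a sketch; the addresses and password are placeholders, just like the original command):

#!/bin/bash
# /Scripts/Up.sh - send a heartbeat email from PiHome
sendemail -f [email protected] -t [email protected] -u "PiHome is up" -m "PiHome heartbeat: $(date)" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp **************

# root crontab entry (crontab -e) to run it every 6 hours
0 */6 * * * /Scripts/Up.sh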

7 Inch Full View LCD IPS Touch Screen 1024*600 

I purchased a 7″ touch screen from Banggood. I got a heads-up from the following video.

I plugged the touch USB cable into my Pi's USB 3 port. I plugged the HDMI adapter into the screen and the Pi (with the supplied mini plug).

I turned on the Pi and it works and looks amazing.

This is displaying a demo C# app I wrote. It's running via Mono.

I did have to add the following to config.txt to get native resolution. The manual on the supplied CD was helpful (but I did not check it at first).

max_usb_current=1
hdmi_force_hotplug=1
config_hdmi_boost=7
hdmi_group=2
hdmi_mode=1
hdmi_mode=87 
hdmi_drive=1
display_rotate=0
hdmi_cvt 1024 600 60 6 0 0 0
framebuffer_width=1024
framebuffer_height=600

PiJuice UPS HAT

I purchased an external LiPo UPS (the PiJuice HAT) to keep the Raspberry Pi fed with power (even when the power goes out)

The stock battery was not charged and was quite weak when I first installed it. Do fully charge the battery before testing.

PiJuice

Stock Battery = 3.7V @ 1820mAh

Below are screenshots of the PiJuice setup.

PiJuice HAT Settings

PiJuice General Settings

General Settings

There is an option to set events for every button

Extensive screen to set button events

LED Status color and function

Set LED status and color

IO for the PiJuice Input. I will sort this out later.

PiJuice IO settings

A new firmware was available. I had v1.4

Update firmware screen

I updated the firmware

Firmware update worked

Firmware flash success

Battery settings

Battery Settings

PiJuice Button Config

Button config

Wake Up Alarm time and RTC

Clock Settings

System Settings

System Settings

System Events

system settings page

User Scripts

Define user scripts

I ordered a bigger battery as my Screen, M.2, Fan and UPS consume near the maximum of the stock battery.

10,000mAh battery

After talking with the seller of the battery, they advised I set up the 10,000mAh battery using the 1,000mAh battery profile in PiJuice but change the Capacity and Charge Current:

  • Capacity = 10,000mAh
  • Charge current = 850mA

And for battery longevity set the

  • Cutoff voltage: 3,250mV

Final Battery Setup

Battery settings based off the 1,000mAh battery profile: Capacity 10,000mAh, Charge current 850mA and Cutoff 3,250mV

WeMos Setup

I ordered 20x WeMos D1 Mini Pro (16Mbit) clones to run the sensors. I soldered the legs on in batches of 8.

WeMos installed on breadboards ready to solder pins

Soldering was not perfect but worked

20x soldered wemos

Soldering is not perfect but each joint was triple tested.

Close up of soldered joints

I only had one dead WeMos.

I will set up the final units on ProtoBoards.

Protoboard

20x Wemos ready for service and the external aerial is glued down. The hot glue was a bad idea, I had to rotate a resistor under the hot glue.

20x wemos ready.

Revision

I ended up reordering the WeMos Minis and soldering on female headers so I can add OLED screens

air mon enclosure

I added female headers to allow an OLED screen

new wemos

I purchased a microscope to be able to see better.

microscope

Each sensor will have a mini OLED screen.

mini oled screen

0.66″ OLED Screens

oled screen

I designed a PCB in Photoshop and had it turned into a PCB via https://www.fiverr.com/syedzamin12. I ordered 30x boards from https://jlcpcb.com/

Custom PCB

The PCB’s fit inside the new enclosure perfectly

I am waiting for smaller screws to arrive.

PCB v0.2

I decided to design a board with 2 switches (and a light sensor to turn the screen off at night)

Breadboard Prototype

Prototype

I spoke to https://www.fiverr.com/syedzamin12 and within 24 hours a PCB was designed

PCB layers

This time I will get a purple PCB from JLCPCB and add a dinosaur for my son

Top PCB View

TOP PCB View

Back PCB View

Back PCB View

3D PCB View

3D PCB view

JLCPCB made the board in 3 days

3 days

Now I need to wait a few weeks for the new PCB to arrive

Also, I finished the firmware for the v0.2 PCB

I ordered some switches

I also ordered some reset buttons

I might add a larger 0.96″ OLED screen

Wifi and Static IP Test

I uploaded a sketch to each WeMos and tested the WiFi and the static IP that was assigned.

Sketch

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>


#define SERVER_IP "192.168.0.50"

#ifndef STASSID
#define STASSID "wifi_ssid_name"
#define STAPSK  "************"
#endif

void setup() {

  Serial.begin(115200);

  Serial.println();
  Serial.println();
  Serial.println();

  WiFi.begin(STASSID, STAPSK);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected! IP address: ");
  Serial.println(WiFi.localIP());

}

void loop() {
  // wait for WiFi connection
  if ((WiFi.status() == WL_CONNECTED)) {

    WiFiClient client;
    HTTPClient http;

    Serial.print("[HTTP] begin...\n");
    // configure target server and url
    http.begin(client, "http://" SERVER_IP "/api/v1/test"); //HTTP
    http.addHeader("Content-Type", "application/json");

    Serial.print("[HTTP] POST...\n");
    // start connection and send HTTP header and body
    int httpCode = http.POST("{\"hello\":\"world\"}");

    // httpCode will be negative on error
    if (httpCode > 0) {
      // HTTP header has been sent and Server response header has been handled
      Serial.printf("[HTTP] POST... code: %d\n", httpCode);

      // file found at server
      if (httpCode == HTTP_CODE_OK) {
        const String& payload = http.getString();
        Serial.println("received payload:\n<<");
        Serial.println(payload);
        Serial.println(">>");
      }
    } else {
      Serial.printf("[HTTP] POST... failed, error: %s\n", http.errorToString(httpCode).c_str());
    }

    http.end();
  }

  delay(1000);
}

The WeMos booted, connected to WiFi, got an IP, and tried to POST a request to a URL.

........................................................
Connected! IP address: 192.168.0.51
[HTTP] begin...
[HTTP] POST...
[HTTP] POST... failed, error: connection failed

The POST failed because my Pi API server was off.

Touch Screen Enclosure

I constructed a basic enclosure and screwed the touch screen to it. I need to find a flexible black strip to put around the screen and cover up the gaps.

Wooden box with the screen in it

The touch screen has been screwed in.

Screen screwed in

Over the Air Updating

I followed this guide to make the WeMos updatable over WiFi.

Basically, I installed the libraries “AsyncHTTPSRequest_Generic”, “AsyncElegantOTA”, “AsyncHTTPRequest_Generic”, “ESPAsyncTCP” and “ESPAsyncWebServer”.

Manage Libraries

A few libraries would not download so I manually downloaded the code from each GitHub repository and extracted it to my Documents\Arduino\libraries folder.

I then opened the example project "AsyncElegantOTA\ESP8266_Async_Demo"

I reviewed the code

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>

const char* ssid = "........";
const char* password = "........";

AsyncWebServer server(80);


void setup(void) {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request) {
    request->send(200, "text/plain", "Hi! I am ESP8266.");
  });

  AsyncElegantOTA.begin(&server);    // Start ElegantOTA
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();
}

I added my WiFi SSID and password, saved the project, compiled the code and wrote it to my WeMos D1 Mini.

I added LED Blink Code

void setup(void) {
  ...
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  ...
}
void loop(void) {
 ...
  delay(1000);                      // Wait for a second
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(1000);                      // Wait for a second
 ...
}

I compiled and tested the code

Now to get new code changes to the WeMos Mini via a binary, I edited the code (changed the LED blink speed) and clicked "Export Compiled Binary"

Compile Binary

When the binary compiled I opened the Sketch Folder

Show Sketch folder

I could see a bin file.

Bin File

I loaded the http://192.168.0.51/update and selected the bin file.

The new firmware applied.

Flashing

I navigated back to http://192.168.0.51

TIP: Ensure you add the starter sketch that has your wifi details in there.

Password Protection

I changed the code to add a basic password on access and on OTA update

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>


//Saved Wifi Credentials (research encryption later or store in FRAM Module?)
const char* ssid = "your-wifi-ssid";
const char* password = "********";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(80);

void setup(void) {
  Serial.begin(115200);       //Serial Mode (Debug)
    
  WiFi.mode(WIFI_STA);        //Client Mode
  WiFi.begin(ssid, password); //Connect to Wifi
 
  Serial.println("");

  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    request->send(200, "text/plain", "Login Success! ESP8266 #001.");
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);

  
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();

  digitalWrite(LED_BUILTIN, LOW);
  delay(8000);                      // Wait 8 seconds
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(8000);                      // Wait 8 seconds

}

Password prompt for users accessing the device.

Login screen

Password prompt for admin users accessing the device.

admin password protect

Later I will research encrypting the password and storing it on SPIFFS partition or a FRAM memory module.

Adding the DHT22 Sensors

I received my pack of DHT22 sensors (AM2302).

Specifications

  • Operating Voltage: 3.5V to 5.5V
  • Operating current: 0.3mA (measuring) 60uA (standby)
  • Output: Serial data
  • Temperature Range: 0°C to 50°C
  • Humidity Range: 20% to 90%
  • Resolution: Temperature and Humidity both are 16-bit
  • Accuracy: ±1°C and ±1%

I wired it up based on this Adafruit post.

DHT22 Wired Up on a breadboard.

DHT22 and Basic API Working

I will not bore you with hours of coding and debugging, so here is my code that:

  • Allows the WeMos D1 Mini Pro (ESP8266) to connect to WiFi
  • Web server (with stats)
  • Admin page for OTA updates
  • Password protects the main web folder and OTA admin page
  • Reading DHT sensor values
  • Debug to serial toggle
  • LED activity toggle
  • JSON serialization
  • POST DHT22 data to an API on the Raspberry Pi
  • Placeholder for API return values
  • Automatically posts data to the API every 10 seconds
  • etc

Here is the work-in-progress ESP8266 code

#include <ESP8266WiFi.h>        // https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266WiFi/src/ESP8266WiFi.h
#include <ESPAsyncTCP.h>        // https://github.com/me-no-dev/ESPAsyncTCP
#include <ESPAsyncWebServer.h>  // https://github.com/me-no-dev/ESPAsyncWebServer
#include <AsyncElegantOTA.h>    // https://github.com/ayushsharma82/AsyncElegantOTA
#include <ArduinoJson.h>        // https://github.com/bblanchon/ArduinoJson
#include "DHT.h"                // https://github.com/adafruit/DHT-sensor-library
                                // Written by ladyada, public domain

//Todo: Add Authentication
//Fyi: https://api.gov.au/standards/national_api_standards/index.html

#include <ESP8266HTTPClient.h>  //POST Client

//Firmware Stats
bool bDEBUG = true;        //true = debug to Serial output
                           //false = no serial output
//Port Number for the Web Server
int WEB_PORT_NUMBER = 1337; 

//Post Sensor Data Delay
int POST_DATA_DELAY = 10000; 

bool bLEDS = true;         //true = Flash LED
                           //false =   NO LED's
//Device Variables
String sDeviceName = "ESP-002";
String sFirmwareVersion = "v0.1.0";
String sFirmwareDate = "27/10/2021 23:00";

String POST_SERVER_IP = "192.168.0.50";
String POST_SERVER_PORT = "";
String POST_ENDPOINT = "/api/v1/test";

//Saved Wifi Credentials (research encryption later and store in FRAM Module?)
const char* ssid = "your_wifi_ssid";
const char* password = "***************";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(WEB_PORT_NUMBER);    //Feel free to change the port number

//DHT22 Temp Sensor
#define DHTTYPE DHT22   // DHT 22  (AM2302), AM2321
#define DHTPIN 5
DHT dht(DHTPIN, DHTTYPE);

//Common Variables
String thisBoard = ARDUINO_BOARD;
String sHumidity = "";
String sTempC = "";
String sTempF = "";
String sJSON = "{ }";

//DHT Variables
float h;
float t;
float f;
float hif;
float hic;


void setup(void) {

  //Turn On PIN
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  
  //Serial Mode (Debug)
  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
    delay(100);                       // Wait 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100ms
  }

  if (bDEBUG) Serial.begin(115200);
  if (bDEBUG) Serial.println("Serial Begin");

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
    delay(100);                       // Wait 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100ms
  }
  if (bDEBUG) Serial.println("Wifi Setup");
  if (bDEBUG) Serial.println(" - Client Mode");
  
  WiFi.mode(WIFI_STA);        //Client Mode
  
  if (bDEBUG) Serial.print(" - Connecting to Wifi: " + String(ssid));
  WiFi.begin(ssid, password); //Connect to Wifi
 
  if (bDEBUG) Serial.println("");
  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    if (bDEBUG) Serial.print(".");
  }
  if (bDEBUG) Serial.println("");
  if (bDEBUG) Serial.print("- Connected to ");
  if (bDEBUG) Serial.println(ssid);
  
  if (bDEBUG) Serial.print("IP address: ");
  if (bDEBUG) Serial.println(WiFi.localIP());

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
    delay(100);                       // Wait 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100ms
  }

  
  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    
        String sendHtml = "";
        sendHtml = sendHtml + "<html>\n";
        sendHtml = sendHtml + " <head>\n";
        sendHtml = sendHtml + " <title>ESP# 002</title>\n";
        sendHtml = sendHtml + " <meta http-equiv=\"refresh\" content=\"5\";>\n";
        sendHtml = sendHtml + " </head>\n";
        sendHtml = sendHtml + " <body>\n";
        sendHtml = sendHtml + " <h1>ESP# 002</h1>\n";
        sendHtml = sendHtml + " <u2>Debug</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Device Name: " + sDeviceName + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Version: " + sFirmwareVersion + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Date: " + sFirmwareDate + " </li>\n";
        sendHtml = sendHtml + " <li>Board: " + thisBoard + " </li>\n";
        sendHtml = sendHtml + " <li>Auto Refresh Root: On </li>\n";
        sendHtml = sendHtml + " <li>Web Port Number: " + String(WEB_PORT_NUMBER) +" </li>\n";
        sendHtml = sendHtml + " <li>Serial Debug: " + String(bDEBUG) +" </li>\n";
        sendHtml = sendHtml + " <li>Flash LED's Debug: " + String(bLEDS) +" </li>\n";
        sendHtml = sendHtml + " <li>SSID: " + String(ssid) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT TYPE: " + String(DHTTYPE) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT PIN: " + String(DHTPIN) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_DATA_DELAY: " + String(POST_DATA_DELAY) +" </li>\n";

        sendHtml = sendHtml + " <li>POST_SERVER_IP: " + String(POST_SERVER_IP) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_ENDPOINT: " + String(POST_ENDPOINT) +" </li>\n";
        
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <u2>Sensor</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Humidity: " + sHumidity + "% </li>\n";
        sendHtml = sendHtml + " <li>Temp: " + sTempC + "c, " + sTempF + "f. </li>\n";
        sendHtml = sendHtml + " <li>Heat Index: " + String(hic) + "c, " + String(hif) + "f.</li>\n";
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <u2>JSON</h2>";
        
        // Allocate the JSON document Object/Memory
        // Use https://arduinojson.org/v6/assistant to compute the capacity.
        StaticJsonDocument<250> doc;
        //JSON Values     
        doc["Name"] = sDeviceName;
        doc["humidity"] = sHumidity;
        doc["tempc"] = sTempC;
        doc["tempf"] = sTempF;
        doc["heatc"] = String(hic);
        doc["heatf"] = String(hif);
        
        sJSON = "";
        serializeJson(doc, sJSON);
        
        sendHtml = sendHtml + " <ul>" + sJSON + "</ul>\n";
        
        sendHtml = sendHtml + " <u2>Seed</h2>";
        long randNumber = random(100000, 1000000);
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <p>" + String(randNumber) + "</p>\n";
        sendHtml = sendHtml + " </ul>\n";
       
        sendHtml = sendHtml + " </body>\n";
        sendHtml = sendHtml + "</html>\n";
        //Send the HTML   
        request->send(200, "text/html", sendHtml);
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);
  
  server.begin();
  if (bDEBUG) Serial.println("HTTP server started");
 
  if (bDEBUG) Serial.println("Board: " + thisBoard);

  //Setup the DHT22 Object
  dht.begin();
  
}

void loop(void) {

  AsyncElegantOTA.loop();

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
    delay(100);                       // Wait 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100ms
  }


  //Display Temp and Humidity Data

  h = dht.readHumidity();
  t = dht.readTemperature();
  f = dht.readTemperature(true);

  // Check if any reads failed and exit early (to try again).
  if (isnan(h) || isnan(t) || isnan(f)) {
    if (bDEBUG) Serial.println(F("Failed to read from DHT sensor!"));
    return;
  }
  
  hif = dht.computeHeatIndex(f, h);         // Compute heat index in Fahrenheit (the default)
  hic = dht.computeHeatIndex(t, h, false);  // Compute heat index in Celsius (isFahreheit = false)

  if (bDEBUG) Serial.print(F("Humidity: "));
  if (bDEBUG) Serial.print(h);
  if (bDEBUG) Serial.print(F("%  Temperature: "));
  if (bDEBUG) Serial.print(t);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(f);
  if (bDEBUG) Serial.print(F("°F  Heat index: "));
  if (bDEBUG) Serial.print(hic);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(hif);
  if (bDEBUG) Serial.println(F("°F"));

  //Save for Page Load
  sHumidity = String(h,2);
  sTempC = String(t,2);
  sTempF = String(f,2);

  //Post to Pi API
    // Allocate the JSON document Object/Memory
    // Use https://arduinojson.org/v6/assistant to compute the capacity.
    StaticJsonDocument<250> doc;
    //JSON Values     
    doc["Name"] = sDeviceName;
    doc["humidity"] = sHumidity;
    doc["tempc"] = sTempC;
    doc["tempf"] = sTempF;
    doc["heatc"] = String(hic);
    doc["heatf"] = String(hif);
    
    sJSON = "";
    serializeJson(doc, sJSON);

    //Post to API
    if (bDEBUG) Serial.println(" -> POST TO API: " + sJSON);

   //Test POST
  
    if ((WiFi.status() == WL_CONNECTED)) {
  
      WiFiClient client;
      HTTPClient http;
  
    
      if (bDEBUG) Serial.println(" -> API Endpoint: http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT);
      http.begin(client, "http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT); //HTTP


      if (bDEBUG) Serial.println(" -> addHeader: \"Content-Type\", \"application/json\"");
      http.addHeader("Content-Type", "application/json");
  
      // start connection and send HTTP header and body
      int httpCode = http.POST(sJSON);
      if (bDEBUG) Serial.print("  -> Posted JSON: " + sJSON);
  
      // httpCode will be negative on error
      if (httpCode > 0) {
        // HTTP header has been send and Server response header has been handled

  
        //See https://api.gov.au/standards/national_api_standards/api-response.html 
        // Response from Server
        if (bDEBUG) Serial.println("  <- Return Code: " + httpCode);
                
        //Get the Payload
        const String& payload = http.getString();
          if (bDEBUG) Serial.println("   <- Received Payload:");
          if (bDEBUG) Serial.println(payload);
          if (bDEBUG) Serial.println("   <- Payload (httpcode: 201):");
          

         //Handle the HTTP Code
        if (httpCode == 200) {
          if (bDEBUG) Serial.println("  <- 200: Invalid API Call/Response Code");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 201) {
          if (bDEBUG) Serial.println("  <- 201: The resource was created. The Response Location HTTP header SHOULD be returned to indicate where the newly created resource is accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 202) {
          if (bDEBUG) Serial.println("  <- 202: Is used for asynchronous processing to indicate that the server has accepted the request but the result is not available yet. The Response Location HTTP header may be returned to indicate where the created resource will be accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 400) {
          if (bDEBUG) Serial.println("  <- 400: The server cannot process the request (such as malformed request syntax, size too large, invalid request message framing, or deceptive request routing, invalid values in the request) For example, the API requires a numerical identifier and the client sent a text value instead, the server will return this status code.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 401) {
          if (bDEBUG) Serial.println("  <- 401: The request could not be authenticated.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 403) {
          if (bDEBUG) Serial.println("  <- 403: The request was authenticated but is not authorised to access the resource.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 404) {
          if (bDEBUG) Serial.println("  <- 404: The resource was not found.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 415) {
          if (bDEBUG) Serial.println("  <- 415: This status code indicates that the server refuses to accept the request because the content type specified in the request is not supported by the server");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 422) {
          if (bDEBUG) Serial.println("  <- 422: This status code indicates that the server received the request but it did not fulfil the requirements of the back end. An example is a mandatory field was not provided in the payload.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 500) {
          if (bDEBUG) Serial.println("  <- 500: An internal server error. The response body may contain error messages.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }

        
      } else {
        if (bDEBUG) Serial.println("   <- Unknown Return Code (ERROR): " + httpCode);
        //if (bDEBUG) Serial.printf("    " + http.errorToString(httpCode).c_str());
        
      }

    }

    if (bDEBUG) Serial.print("\n\n");

    delay(POST_DATA_DELAY);
  }

Here is a screenshot of the Arduino IDE Serial Monitor debugging the code

Serial Monitor

Here is a screenshot of the NodeJS API on the Raspberry Pi accepting the POSTed data from the ESP8266

API receiving data

Here is a sneak peek of the code accepting the POSTed data

API Code

The final code will be open sourced.
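Until then, here is a minimal sketch of what an endpoint like this can look like; it is my illustration (not the final code) and assumes Express is installed and the app sits on port 3000 behind the NGINX proxy:

// api_v1.js - minimal sketch of the /api/v1/test endpoint
const express = require('express');
const app = express();

app.use(express.json()); // parse the JSON body the ESP8266 POSTs

app.post('/api/v1/test', (req, res) => {
  // req.body looks like { Name, humidity, tempc, tempf, heatc, heatf }
  console.log('Received from', req.body.Name, req.body);
  // TODO: insert the reading into MySQL here
  res.status(201).json({ status: 'created' });
});

app.listen(3000, () => console.log('API listening on port 3000'));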

API with 2x sensors (18x more soon)

I built 2 sensors (on Breadboards) to start hitting the API

2 sensors on a breadboard

18 more sensors are ready for action (after I get temporary USB power sorted)

18x Sensors

PiJuice and Battery save the Day

I accidentally used my Pi for a few hours (to develop the API) and I realised the power to the PiJuice was not connected.

The PiJuice worked a treat and supplied the Pi from battery

Battery power was disconnected

I plugged the power back in after 25% of the battery had drained.

Power Restored

Research and Setup TRIM/Defrag on the M.2 SSD

Todo: Research

Add a Buzzer to the Raspberry Pi and Connect it to the PiJuice No Power Event

Todo

Wire Up a Speaker to the PiJuice

Todo: Figure out custom scripts and add a Piezo speaker to the PiJuice to alert me of issues in future.

Add buttons to the enclosure

Todo

Add email alerts from the system

I logged into Google G Suite (my domain's email provider) and set up an email alias for my domain ("[email protected]"). I added this alias to Gmail (logged in with my G Suite account).

I created an app-specific password at G Suite to allow my Pi to use a dedicated password to access my email.

I installed these packages on the Raspberry Pi

sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail    

I can run this command to send an email to my primary email

sudo sendemail -f [email protected] -t [email protected]_domain.com -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected]_domain.com -xp ********************

The email arrives from the Raspberry Pi

Test Email Screenshot

PiJuice Alerts (email)

I created some Python scripts and configured PiJuice to email me

user scripts

I assigned the scripts to Events

Added functions

Python Script (CronJob) to email the battery level every 6 hours

Todo

Building the Co2/PM2.5 Sensors

Todo: (Waiting for parts)

The AirGradient PCB’s have arrived

Air Gradient PCB's

NodeJS API writing to MySQL/Influx etc

Todo: Save Data to a Database

Setup 20x WeMos External Antennas (DONE, I ordered new factory-rotated resistors)

I assumed the WeMos D1 Mini Pros with external antennas were actually using the external antenna. Wrong.

I have to move 20x resistors (1 per WeMos) to switch over to the external antenna.

This will be fun as I added hot glue over the area where the resistor is to hold down the antenna.

Reading configuration files via SPIFFS

Todo

Power over Ethernet (PoE) (SKIP, will use plain old USB wall plugs)

Todo: Passive PoE

Building a C# GUI for the Touch Panel

Todo (Started coding this)

Todo (Passive POE, 5v, 3.3v)?

Building the enclosures for the sensors

Designed and ordered the PCB, firmware next.

Custom PCB?

Yes, See above

Backing up the Raspberry Pi M.2 Drive

This is quite easy as the M.2 drive is connected via a USB plug. I shut down the Pi and plugged the M.2 board into my PC.

I then backed up the entire disk to my PC with Acronis software (review here)

I now have a complete backup of my Pi on a remote network share (and my primary pc).

Version History

v0.9.63 – PCB v0.2 designed and ordered.

v0.9.62 – 3/2/2022 Update

v0.9.61 – New Nginx, PHP, MySQL etc

v0.9.60 – Fresh Bullseye install (Buster upgrade failed)

v0.951 Email Code in PiJuice

v0.95 Added Email Code

v0.94 Added Todo Areas.

v0.93 2x Sensors hitting the API, 18x sensors ready, Air Gradient

v0.92 DHT22 and Basic API

v0.91 Password Protection

v0.9 Final Battery Setup

v0.8 OTA Updates

v0.7 Screen Enclosure

v0.6 Added Wifi Test Info

v0.5 Initial Post

Filed Under: Analytics, API, Arduino, Cloud, Code, GUI, IoT, Linux, MySQL, NGINX, NodeJS, OS Tagged With: api, ESP8266, MySQL, nginx, raspberry pi, WeMos

Recovering a Dead Nginx, Mysql, PHP WordPress website

July 10, 2021 by Simon

(laptrinhx.com – do not steal this post)

In early 2021 www.fearby.com died and it was all my fault. Here is my breakdown of the events.

On the 4th of January 2021, I woke to see my website not loading.

Cloudflare reporting my website was unavailable.

https://www.fearby.com had 2 servers.

  • Web server (www.fearby.com)
  • Database server (db.fearby.com)

Upon investigating why my website was down I found out that WordPress could not talk to the database server. I tried to log into the database server via SSH and failed (no response). I tried logging into my db.fearby.com server via the root console and that too did not work.

I was locked out of my own server, and memory told me it was caused by me playing with fail2ban and other system auditing tools a few months earlier.

I tried restoring the db.fearby.com server from backups (one at a time). I had the last 7 days as individual backups. I had no luck; all of my backups were no good (I sat on the problem too long).

View of the last 7 days of backups.

In mid-2020 I locked myself out of db.fearby.com (SSH and root console) because I set up an aggressive fail2ban, AIDE intrusion detection system(s) and firewall rules. I could no longer access my db.fearby.com server via SSH or the root console. The database server was still operational and I foolishly left it running (with no access).

I did not know how to (or have enough time to) reset the root password on the Debian server. I was unable to think of a fix to restore my website. I should have reset the root password; it is easy to do thanks to a post from Janne Roustemaa – How to reset root password on cloud server.

A few months ago I finally found out how to reset the root password of a Debian server.

How to Reset the Root Password on a Debian server on UpCloud

I followed Janne Roustemaa’s guide here: How to reset root password on cloud server.

I logged into the Up Cloud Hub.

UpCloud Hub (login page)

I shut down db.fearby.com from the UpCloud Dashboard.

I created a backup of db.fearby.com (just in case). I upgraded my backup plan from 1 backup every 7 days to daily backups for 7 days and weekly backups for 1 month.

Database Backup plan selection

Deploy a temporary server to reset the root password

I deployed a new temporary (cheap) server alongside the dead server in Chicago.

Get $25 free credit on UpCloud and deploy your own server: Use this link to get $25 credit (new UpCloud users only).

Deploy $5/m server in Chicago.

I called the server "recovery.fearby.com" and set Debian 9 as the Operating System (same as the dead server).

Name: Recovery.fearby.com, Debian 9

I added the command “shutdown -h 1” to the Initialization script to ensure the server shuts down after it was deployed. 

I can only add the disk to the new server if the old db.fearby.com server is shut down.

Deploy server

I shut down db.fearby.com and recovery.fearby.com servers.

Shutting down servers GUI

Both servers have shut down

Shutdown

Detach the disk from db.fearby.com

I detached the disk from the db.fearby.com server.

In the UpCloud Dashboard, I opened the db.fearby.com server and clicked the Resize tab

Resize Disk

I clicked the Detach button.

Detach Disk

I clicked Continue

Continue

Attach db.fearby.com disk to recovery.fearby.com

Now I attached this disk as a secondary disk onto the recovery.fearby.com server.

In the UpCloud hub I clicked Servers then selected the recovery.fearby.com server, then clicked the Resize Tab

Resize recovery.fearby.com

I scrolled down and clicked Attach existing storage

Attach Disk

Attach existing storage dialogue

Attach device Dialog

I selected the system disk from the db.fearby.com (that I detached earlier)

Attach Disk

I clicked Add a storage device button

Attach Disk dialog

Now I have attached the storage from db.fearby.com and attached it to recovery.fearby.com as a secondary disk.

2 Disks attached

Starting the recovery.fearby.com server

I started the recovery.fearby.com server by clicking Start

Start

The server is starting

Starting

When the server started, I obtained its IP and connected to it with MobaXTerm.

MobaXTerm SSH Client.

Now I can access the db.fearby.com disk.

I do not want to reset the root password until I undelete the files I need.

Viewing Disks

I ran this command to verify the attached disks.

lsblk

Two disks were visible

2 disks were visible.

Alternatively, I can view partitions with the following command

cat /proc/partitions
major minor  #blocks  name
 254        0   26214400 vda
 254        1   26213376 vda1
 254       16   52428800 vdb
 254       17   52427776 vdb1

I can see partition data with these commands

recovery.fearby.com disk: /dev/vda1

fdisk -l /dev/vda1
Disk /dev/vda1: 25 GiB, 26842497024 bytes, 52426752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

db.fearby.com disk: /dev/vdb1

fdisk -l /dev/vdb1
Disk /dev/vdb1: 50 GiB, 53686042624 bytes, 104855552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

I ran this command to mount the second disk

mount /dev/vdb1 /mnt

I looked in the "/mnt/Backup" folder as this was where I had daily MySQL dumps saving to on the old system.

The folder was empty. It looks like the file system was corrupted.

Finding Deleted Files

Assuming the file system was corrupt, I installed TestDisk as I wanted to recover the deleted SQL dump backups from the backup folder.

sudo apt-get update
sudo apt-get install testdisk

I confirmed testdisk (which includes the photorec recovery tool) was installed

photorec --version

I ran photorec and passed in the disk as a parameter

sudo photorec /dev/vdb1

I selected the Disk /dev/vdb1 – 53GB and pressed enter

I selected ext4 partition and pressed enter (not whole disk)

I selected ext2/ext3/ext4 filesystem and pressed enter

I selected Free to only scan in unallocated space and pressed enter

When asked to choose the recovery location to restore files to, I selected /recovery (on the recovery.fearby.com disk, not the db.fearby.com disk).

I just realized that the recovery.fearby.com disk is 25GB and the db.fearby.com disk is 50GB; let's hope I do not have more than 25GB of deleted files (or I will have to delete recovery.fearby.com, deploy a 100GB system and run the undelete again).

Recovered Files

After 20 minutes 32,809 files were recovered to /recovery/recup_dir

I pressed CTRL+C to exit photorec

I changed directory to /recovery

cd /recovery

I counted the recovered files

find . -type f | wc -l
>32810

Recovered files were placed in subfolders.

drwxr-xr-x 68 root root  4096 Jan 19 13:28 .
drwxr-xr-x 23 root root  4096 Jan 19 13:28 ..
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.1
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.10
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.11
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.12
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.13
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.14
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.15
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.16
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.17
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.18
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.19
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.2
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.20
drwxr-xr-x  2 root root 24576 Jan 19 13:22 recup_dir.21
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.22
drwxr-xr-x  2 root root 24576 Jan 19 13:22 recup_dir.23
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.24
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.25
drwxr-xr-x  2 root root 32768 Jan 19 13:22 recup_dir.26
drwxr-xr-x  2 root root 32768 Jan 19 13:22 recup_dir.27
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.28
drwxr-xr-x  2 root root 32768 Jan 19 13:22 recup_dir.29
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.3
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.30
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.31
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.32
drwxr-xr-x  2 root root 20480 Jan 19 13:22 recup_dir.33
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.34
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.35
drwxr-xr-x  2 root root 36864 Jan 19 13:22 recup_dir.36
drwxr-xr-x  2 root root 32768 Jan 19 13:22 recup_dir.37
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.38
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.39
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.4
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.40
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.41
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.42
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.43
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.44
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.45
drwxr-xr-x  2 root root 20480 Jan 19 13:23 recup_dir.46
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.47
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.48
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.49
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.5
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.50
drwxr-xr-x  2 root root 20480 Jan 19 13:24 recup_dir.51
drwxr-xr-x  2 root root 20480 Jan 19 13:25 recup_dir.52
drwxr-xr-x  2 root root 20480 Jan 19 13:25 recup_dir.53
drwxr-xr-x  2 root root 20480 Jan 19 13:25 recup_dir.54
drwxr-xr-x  2 root root 20480 Jan 19 13:25 recup_dir.55
drwxr-xr-x  2 root root 20480 Jan 19 13:26 recup_dir.56
drwxr-xr-x  2 root root 20480 Jan 19 13:26 recup_dir.57
drwxr-xr-x  2 root root 24576 Jan 19 13:26 recup_dir.58
drwxr-xr-x  2 root root 36864 Jan 19 13:26 recup_dir.59
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.6
drwxr-xr-x  2 root root 20480 Jan 19 13:26 recup_dir.60
drwxr-xr-x  2 root root 20480 Jan 19 13:26 recup_dir.61
drwxr-xr-x  2 root root 20480 Jan 19 13:27 recup_dir.62
drwxr-xr-x  2 root root 20480 Jan 19 13:27 recup_dir.63
drwxr-xr-x  2 root root 20480 Jan 19 13:27 recup_dir.64
drwxr-xr-x  2 root root 20480 Jan 19 13:28 recup_dir.65
drwxr-xr-x  2 root root 12288 Jan 19 13:28 recup_dir.66
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.7
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.8
drwxr-xr-x  2 root root 20480 Jan 19 13:21 recup_dir.9

66 folders were recovered

I calculated the folder size

sudo  du -sh /recovery
13G     /recovery

I immediately started downloading ALL recovered files (13GB) to my local PC with MobaXTerm

On recovery.fearby.com I searched for any trace of recovered *.sql files that I was dumping daily via cron jobs

find /recovery -name "*.sql"

Many pages of recovered sql files were listed

/recovery/recup_dir.49/f36487168.sql
/recovery/recup_dir.49/f35110912.sql
/recovery/recup_dir.49/f36667392.sql
/recovery/recup_dir.49/f36995072.sql
/recovery/recup_dir.49/f35667968.sql
/recovery/recup_dir.49/f35078144.sql
/recovery/recup_dir.49/f37535744.sql
/recovery/recup_dir.49/f36913152.sql
/recovery/recup_dir.49/f33996800.sql
/recovery/recup_dir.49/f36421632.sql
/recovery/recup_dir.49/f35061760.sql
/recovery/recup_dir.49/f36143104.sql
/recovery/recup_dir.49/f36618240.sql
/recovery/recup_dir.49/f34979840.sql
/recovery/recup_dir.49/f37273600.sql
/recovery/recup_dir.49/f35995648.sql
/recovery/recup_dir.49/f36241408.sql
/recovery/recup_dir.49/f37732360.sql
/recovery/recup_dir.49/f34603008.sql
/recovery/recup_dir.49/f33980416.sql
/recovery/recup_dir.1/f0452896.sql
/recovery/recup_dir.1/f0437184.sql
/recovery/recup_dir.1/f0211232.sql
/recovery/recup_dir.17/f16547840.sql
/recovery/recup_dir.17/f16252928.sql
/recovery/recup_dir.17/f15122432.sql
/recovery/recup_dir.17/f17159720.sql
/recovery/recup_dir.17/f15089664.sql
/recovery/recup_dir.17/f15958016.sql
/recovery/recup_dir.17/f15761408.sql
/recovery/recup_dir.57/f69582848.sql
/recovery/recup_dir.57/f69533696.sql
/recovery/recup_dir.57/f69173248.sql
/recovery/recup_dir.57/f68321280.sql
/recovery/recup_dir.57/f70483968.sql
/recovery/recup_dir.57/f70746112.sql
/recovery/recup_dir.57/f68730880.sql
/recovery/recup_dir.57/f67862528.sql
/recovery/recup_dir.57/f70123520.sql
/recovery/recup_dir.57/f68337664.sql
/recovery/recup_dir.57/f70172672.sql
/recovery/recup_dir.57/f71057408.sql
/recovery/recup_dir.57/f68796416.sql
/recovery/recup_dir.57/f70533120.sql
/recovery/recup_dir.57/f69419008.sql
/recovery/recup_dir.57/f68239360.sql
/recovery/recup_dir.57/f69779456.sql
/recovery/recup_dir.57/f68255744.sql
/recovery/recup_dir.57/f67764224.sql
/recovery/recup_dir.57/f71204864.sql
/recovery/recup_dir.57/f70336512.sql
/recovery/recup_dir.57/f68501504.sql
/recovery/recup_dir.57/f67944448.sql
/recovery/recup_dir.50/f39059456.sql
/recovery/recup_dir.50/f38518784.sql
/recovery/recup_dir.50/f40206336.sql
/recovery/recup_dir.50/f40927232.sql
/recovery/recup_dir.50/f39485440.sql
/recovery/recup_dir.50/f39092224.sql
/recovery/recup_dir.50/f40861696.sql
/recovery/recup_dir.50/f39731200.sql
/recovery/recup_dir.50/f40337408.sql
/recovery/recup_dir.50/f38862848.sql
/recovery/recup_dir.50/f41664512.sql
/recovery/recup_dir.50/f41074688.sql
/recovery/recup_dir.50/f40828928.sql
/recovery/recup_dir.50/f41713664.sql
/recovery/recup_dir.50/f38092800.sql
/recovery/recup_dir.50/f39878656.sql
/recovery/recup_dir.50/f38305792.sql
/recovery/recup_dir.50/f38830080.sql
/recovery/recup_dir.50/f39534592.sql
/recovery/recup_dir.50/f39813120.sql
/recovery/recup_dir.50/f40435712.sql
/recovery/recup_dir.50/f41467904.sql
/recovery/recup_dir.50/f37901728.sql
/recovery/recup_dir.50/f38682624.sql
/recovery/recup_dir.50/f38191104.sql
/recovery/recup_dir.50/f38174720.sql
/recovery/recup_dir.50/f40878080.sql
//and many more

It would take a few hours to download 32,000 files from the other side of the world. Next time I deploy fearby.com, I will deploy it to Sydney Australia.

It looks like MobaXTerm does not copy to the destination that I selected when dragging and dropping files. Being impatient, I scanned my system for a file that MobaXTerm had copied. MobaXTerm saves downloads to "%Documents%\MobaXterm\splash\tmp\dragdrop\".

Sql file contents

I opened some of the recovered SQL files and it looks like all of the files were partial and not whole database backups. A complete MySQL dump should be over 200 lines long. Dang.

I downloaded all the files I could from these folders

  • /mnt/etc/nginx
  • /mnt/var/lib/mysql
  • /mnt/Scripts

Disappointed, I sat on things for a few weeks. I thought I had lost my website.

Try 2 – Reinstall a fresh db.fearby.com

After I copied files from the old db.fearby.com server to recovery.fearby.com, I reattached the storage to the old db.fearby.com system.

In vain, I tried mending the broken MySQL service on db.fearby.com. I tried uninstalling and reinstalling MySQL on db.fearby.com (each time with no luck). I was having trouble with MySQL not starting.

I had too many rabbit holes (MySQL errors) to list. I thought my database was corrupt.

I was kicking myself for letting my access to db.fearby.com lapse, and for not having more backups.

Try 3 – Reinstall MySQL

I tried deploying a new server and setting it up; perhaps out of frustration, I did not do it correctly. I had HTTPS issues with Cloudflare. I gave up for a few months.

Try 4 – Check for Database Corruption

I downloaded DiskInternals – MySQL Recovery and scanned my database. To my amazement, it reported no database corruption.

All tables loaded

Maybe I can recover my website?

Try 5 – Using RunCloud.io to deploy a website

After failing to set up a server from scratch, a friend (Hi Zach) said I should try RunCloud to deploy a server.

I tried to document everything from attaching RunCloud to UpCloud and Cloudflare’s API to deploying a server.

Long story short, RunCloud is not for me. I had too many issues: IPv6, Cloudflare API integration, no SSH access to my server and no NGINX editing capabilities with RunCloud.

RunCloud errors

I deleted the RunCloud deployed server.

Try 6 – 10 minutes deploying a server by hand

I ended up deploying a server in 10 minutes manually without taking notes.

Summary

I deployed a server (this time to Sydney).

I installed the UFW firewall (configured and started)

sudo apt-get install ufw

Set the date and time

sudo timedatectl set-timezone Australia/Sydney

Installed ntp time server

sudo apt-get install ntp

I installed PHP 7.4 and PHP 7.4 FPM (I cheated and googled: https://www.cloudbooklet.com/install-php-7-4-on-debian-10/ )
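The linked guide targets Debian 10. On Ubuntu, a rough equivalent using the same ondrej/php PPA used later in this post would be something like this (a sketch, not the exact commands I ran):

sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.4 php7.4-fpm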

I edited PHP Config

sudo nano /etc/php/7.4/fpm/php.ini

Installed NGINX webserver 

sudo apt-get install nginx

Configured NGINX (I got a Cloudflare-to-NGINX SSL certificate; I also signed up for a Cloudflare SSL certificate)

sudo nano /etc/nginx/nginx.conf

Contents

worker_cpu_affinity auto;
worker_rlimit_nofile 100000;
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
         multi_accept on;
}

http {
        root /www-root;
        client_max_body_size 10M;

        proxy_connect_timeout 1200s;
        proxy_send_timeout 1200s;
        proxy_read_timeout 1200s;
        fastcgi_send_timeout 1200s;
        fastcgi_read_timeout 1200s;

        ##
        # Basic Settings
        ##
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;

        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##


        gzip on;
        gzip_disable "msie6";

        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;

        gzip_types text/plain application/xml;
        gzip_min_length 256;


        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

I edited the default site config: sudo nano /etc/nginx/sites-available/default

Contents

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # SSL configuration
        #
        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;

        ssl on;
        ssl_certificate /path/to/ssl/certs/cert.pem;
        ssl_certificate_key /path/to/ssl/private/key.pem;

        root /path-to-www;

        # Add index.php to the list if you are using PHP
        
        index index.html index.php;

        server_name fearby.com;

        #Security Headers
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade";
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Frame-Options SAMEORIGIN always;
        add_header Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()";

        # Force HTTPS
        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        # DENY RULES
        location ~ /\.ht {
                deny all;
        }
        location ~ ^/\.user\.ini {
                deny all;
        }
        location ~ (\.ini) {
                return 403;
        }

        if ($http_referer ~* "laptrinhx.com") {
                return 404;
        }

        if ($http_referer ~* "bdev.dev") {
                return 404;
        }

        if ($http_referer ~* "raoxyz.com") {
                return 404;
        }

        if ($http_referer ~* "congtyaz.com") {
                return 404;
        }


        location / {
                try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        }

        # DNS
        resolver 1.1.1.1 1.0.0.1 valid=60s;
        resolver_timeout 1m;
}

I tested Nginx and PHP.
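The post does not show the exact test; a quick check along these lines works (the info.php file is just a throwaway example and should be deleted afterwards):

sudo nginx -t                                              # syntax-check the NGINX config
sudo service nginx reload
echo "<?php phpinfo();" | sudo tee /path-to-www/info.php   # then browse to /info.php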

I installed MySQL (I Googled a guide)

I created a database, database user and assigned permissions for my blog.
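I did not record the exact statements; creating the database and user typically looks like this (the database name, user and password below are placeholders, not my real values):

sudo mysql -e "CREATE DATABASE exampleblogdb;"
sudo mysql -e "CREATE USER 'exampleuser'@'localhost' IDENTIFIED BY 'use-a-strong-password';"
sudo mysql -e "GRANT ALL PRIVILEGES ON exampleblogdb.* TO 'exampleuser'@'localhost'; FLUSH PRIVILEGES;"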

I installed the WordPress CLI tool

I installed WordPress using the wp-cli tool

wp core install --url=example.com --title=Example --admin_user=supervisor --admin_password=strongpassword [email protected]
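Note: wp core install assumes the WordPress core files are already downloaded and wp-config.php exists. If you are starting completely from scratch, something like this comes first (paths and values are placeholders):

cd /path-to-www
wp core download
wp config create --dbname=exampleblogdb --dbuser=exampleuser --dbpass=use-a-strong-password --dbhost=localhost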

Once I had a blank WordPress install, I uploaded the blog folder that I had backed up earlier.

I ran this command to allow NGINX to read the backed-up website files

sudo chown -R www-data:www-data /path-to-www

I also uploaded the backup of my mysql database to /var/lib/mysql/oldblogdatabase

I ran this command to allow MySQL to read the backed-up database

sudo chown -R mysql:mysql /var/lib/mysql/oldblogdatabase

I also uploaded the following files to /var/lib/mysql

TIP: These files are very important to restore. You cannot just copy a database's subfolder on its own; the InnoDB system files below are needed too.

  • ibdata1
  • ib_logfile0
  • ib_logfile1
  • ibtmp1

I ran this command to allow MySQL to read the ib* files

sudo chown -R mysql:mysql /var/lib/mysql/ib*
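MySQL presumably needs a restart after the files and ownership are in place so it picks up the restored tablespaces (this step was not in my notes):

sudo service mysql restart
sudo service mysql status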

I was able to load my old website.

Blog Up

I still have some issues to solve but it is back.

Lessons Learned

  1. Save all passwords, keep backup accounts, and roll back before your working backups are gone.
  2. Set up Dev, Test and Pre-Prod environments and do not test on production servers.
  3. Do not delay disaster recovery actions.
  4. Do not rely on automation.
  5. Have more than a week's worth of backups.

Get $25 free credit on UpCloud and deploy your own server: Use this link to get $25 credit (new UpCloud users only).

 

Change Log

Version 1.2


I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance.

December 22, 2020 by Simon

I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance. Here is what I did to set up a complete Ubuntu 18.04 system (NGINX, PHP, MySQL, WordPress etc). This is not a paid review (just me documenting my steps over 2 days).

Background (CPanel hosts)

In 1999 I hosted my first domain (www.fearby.com) on a host in Seattle (for $10 USD a month). The host used CPanel and all was good. After a decade I was using the domain more for online development and the website was now too slow (I think I was on dial-up or ADSL 1 at the time). I moved my domain to an Australian host (for $25 a month).

After 8 years the domain host was sold and performance remained mediocre. After another year the new host was sold again and performance was terrible.

I started receiving Resource Limit Is Reached warnings (basically this was a plot by the new CPanel host to say “Pay us more and this message will go away”).

Page load times were near 30 seconds.

cpenal_usage_exceeded

The straw that broke the camel’s back was their demand of $150/year for a dodgy SSL certificate.

I needed to move to a self-managed server where I was in control.

Buying a Domain Name

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Self Managed Server

I found a good web IDE ( http://www.c9.io/ ) that allowed me to connect to a cloud VM.  C9 allowed me to open many files and terminal windows and reconnect to them later. Don’t get excited, though, as AWS has purchased C9 and it’s not the same.

C9 IDE

I spun up a Digital Ocean Server at the closest data centre in Singapore. Here was my setup guide creating a Digital Ocean VM, connecting to it with C9 and configuring it. I moved my email to G Suite and moved my WordPress to Digital Ocean (other guides here and here).

I was happy since I could now send emails via CLI/code, set up free SSL certs, add second domain email to G Suite and Secure G Suite. No more usage limit errors either.

Self-managing a server requires more work but it is more rewarding (flexible, faster and cheaper). Page load times were now near 20 seconds (a 10-second improvement).

Latency Issue

Over 6 months, performance on Digital Ocean (in Singapore) from Australia started to drop (mentioned here).  I tried upgrading the memory but that did not help (latency was king).

Moved the website to Australia

I moved my domain to Vultr in Australia (guide here and here). All was good for a year until traffic growth started to increase.

Blog Growth

I tried upgrading the memory on Vultr, set up PHP child workers and set up Cloudflare.

GT Metrix scores were about a “B” and Google Page Speed Scores were in the lower 40’s. Page loads were about 14 seconds (5-second improvement).

Tweaking WordPress

I set up an image compression plugin in WordPress then set up a cloud image compression and CDN Plugin from the same vendor.  Page Speed info here.

GT Metrix scores were now occasionally an “A” and Page Speed scores were in the lower 20’s. Page loads were about 3-5 seconds (10-second improvement).

A mixed bag from Vultr (more optimisation and performance improvements were needed).

This screenshot is showing poor www.gtmetrix.com scores, poor Google PageSpeed index scores and upgrading from 1GB to 2GB memory on my server.

Google Chrome Developer Console Audit Results on Vultr hosted website were not very good (I stopped checking as nothing helped).

This is a screenshot showing poor site performance (screenshot taken in Google Dev tools audit feature)

The problem was the Vultr server (400km away in Sydney) was offline (my issue), and everything above (adding more memory, adding 2x CDNs (EWWW and Cloudflare), adding PHP child workers etc.) did not seem to help.

Enter UpCloud…

Recently, a friend sent a link to a blog article about a host called “UpCloud” who promised “Faster than SSD” performance.  This can’t be right: “Faster than SSD”? I was intrigued. I wanted to check it out as I thought nothing was faster than SSD (well, maybe RAM).

I signed up for a trial and ran a disk IO test (read the review here) and I was shocked. It’s fast. Very fast.

Summary: UpCloud was twice as fast (Disk IO and CPU) as Vultr (+ an optional $4/m firewall and $3/m for 1x backup).

This is a screenshot showing Vultr.com servers getting half the read and write disk io performance compared to upcloud.com.

fyi: Labels above are in KB per second. iozone loops through all file sizes from 4 KB to 16,348 KB and measures the reads per second. To be honest, the meaning of the numbers doesn't interest me, I just want to compare apples to apples.

This is an image showing the iozone results breakdown chart (kbytes per sec on the vertical axis, file size on the horizontal axis and transfer size on the third axis)

(image snip from http://www.iozone.org/ which explains the numbers)

I might have to copy my website to UpCloud and see how fast it is.

Where to Deploy and Pricing

UpCloud Pricing: https://www.upcloud.com/pricing/

UpCloud Pricing

UpCloud does not have a data centre in Australia yet so why choose UpCloud?

Most of my site's visitors are based in the US and UpCloud has disk IO twice as fast as Vultr (win-win?). I could deploy to Chicago?

This image shows most of my visitors are in the US

My site’s traffic is growing and I need to ensure the site is fast enough in the future.

This image shows that most of my site's visitors are hitting my site on weekdays.

Creating an UpCloud VM

I used a friend’s referral code and signed up to create my first VM.

FYI: use my Referral code and get $25 free credit.  Sign up only takes 2 minutes.

https://www.upcloud.com/register/?promo=D84793

When you click the link above you will receive $25 to try out servers for 3 days. You can exit the trial and deposit $10 into UpCloud.

Trial Limitations

The trial mode restrictions are as follows:

* Cloud servers can only be accessed using SSH, RDP, HTTP or HTTPS protocols
* Cloud servers are not allowed to send outgoing e-mails or to create outbound SSH/RDP connections
* The internet connection is restricted to 100 Mbps (compared to 500 Mbps for non-trial accounts)
* After your 72 hours free trial, your services will be deleted unless you make a one-time deposit of $10

UpCloud Links

The UpCloud support page is located here: https://www.upcloud.com/support/

  • Quick start: Introduction to UpCloud
  • How to deploy a Cloud Server
  • Deploy a cloud server with UpCloud’s API

More UpCloud links to read:

  • Two-Factor Authentication on UpCloud
  • Floating IPs on UpCloud
  • How to manage your firewall
  • Finalizing deployment

Signing up to UpCloud

Navigate to https://upcloud.com/signup and add your username, password and email address and click signup.

New UpCloud Signup Page

Add your address and payment details and click proceed (you don't need to pay anything; $1 may be charged and instantly refunded to verify the card).

Add address and payment details

That's it, check your email.

Signup Done

Look for the UpCloud email and click https://my.upcloud.com/

Check Email

Now login

Login to UpCloud

Now I can see a dashboard 🙂

UpCloud Dashboard

I was happy to see 24/7 support is available.

This image shows the www.upcloud.com live chat

I opted in for the new dashboard

UpCloud new dashboard

Deploy My First UpCloud Server

This is how I deployed a server.

Note: If you are going to deploy a server consider using my referral code and get $25 credit for free.

Under the "deploy a server" widget I named the server and chose a location (I think I was supposed to use an FQDN, e.g., "fearby.com"). The deployment worked though. I clicked continue, then more options were made available:

  1. Enter a short server description.
  2. Choose a location (Frankfurt, Helsinki, Amsterdam, Singapore, London and Chicago)
  3. Choose the number of CPU’s and amount of memory
  4. Specify disk number/names and type (MaxIOPS or HDD).
  5. Choose an Operating System
  6. Select a Timezone
  7. Define SSH Keys for access
  8. Allowed login methods
  9. Choose hardware adapter types
  10. Where to send the login password

Deploy Server

FYI: How to generate a new SSH Key (on OSX or Ubuntu)

ssh-keygen -t rsa

Output

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /temp/example_rsa
Enter passphrase (empty for no passphrase): *********************************
Enter same passphrase again:*********************************
Your identification has been saved in /temp/example_rsa.
Your public key has been saved in /temp/example_rsa.pub.
The key fingerprint is:
SHA256:########################### [email protected]
Outputted public and private key

Did the key export? (yes)

> /temp# ls /temp/ -al
> drwxr-xr-x 2 root root 4096 Jun 9 15:33 .
> drwxr-xr-x 27 root root 4096 Jun 8 14:25 ..
> -rw------- 1 user user 1766 Jun 9 15:33 example_rsa
> -rw-r--r-- 1 user user 396 Jun 9 15:33 example_rsa.pub

“example_rsa” is the private key and “example_rsa.pub” is the public key.

  • The public key needs to be added to the server to allow access.
  • The private key needs to be added to any local ssh program used for remote access.

Initialisation script (after deployment)

I was pleased to see an initialization script section that calls actions after the server is deployed. I configured the initialisation script to pull down a few GB of backups from my Vultr website in Sydney (files now removed).

This was my Initialisation script:

#!/bin/bash
echo "Downloading the Vultr websites backups"
mkdir /backup
cd /backup
wget -O www-mysql-backup.sql https://fearby.com/.../www-mysql-backup.sql
wget -O www-blog-backup.zip https://fearby.com/.../www-blog-backup.zip

Confirm and Deploy

I clicked “Confirm and deploy” but I had an alert that said trial mode can only deploy servers up to 1024MB of memory.

This image shows I can't deploy servers with 2GB in trial mode

Exiting UpCloud Trial Mode

I opened the dashboard and clicked My Account then Billing. I could see the $25 referral credit but I guess I can't use that in Trial.

I exited trial mode by depositing $10 (USD).

View Billing Details

Make a manual 1-time deposit of $10 to exit trial mode.

Deposit $10 to exit the trial

FYI: Server prices are listed below (or view prices here).

UpCloud Pricing

Now I can go back and deploy the server with the same settings above (1x CPU, 2GB Memory, Ubuntu 18.04, MaxIOPS Storage etc)

Deployment takes a few minutes and, depending on what you specified, a password may be emailed to you.

UpCloud Server Deployed

The server is now deployed; now I can connect to it with my SSH program (vSSH).  Simply add the server’s IP, username, password and the SSH private key (generated above) to your ssh program of choice.

fyi: The public key contents start with “ssh-rsa”.
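If you prefer a plain command-line ssh client over vSSH, connecting with the private key generated earlier looks like this (the IP below is a placeholder for your server's address):

ssh -i /temp/example_rsa root@<your-server-ip>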

This image shows me connecting to my server via ssh

I noticed that the initialisation script downloaded my 2+GB of files already. Nice.

UpCloud Billing Breakdown

I can now see on the UpCloud billing page in my dashboard that credit is deducted daily (68c); at this rate, I have 49 days credit left?

Billing Breakdown

I can manually deposit funds or set up automatic payments at any time 🙂

UpCloud Backup Options

You do not need to set up backups, but in case you want to roll back (if things stuff up), it is a good idea. Backups are an additional charge.

I have set up automatic daily backups with auto-deletion after 2 days.

To view the backup schedule, click on your deployed server then click Backup.

List of UpCloud Backups

Note: Backups are charged at $0.056 for every GB stored – so $5.60 for every 100GB per month (half that for 50GB etc)

You can take manual backups at any time (and only be charged for the hour)

UpCloud Firewall Options

I set up a firewall at UpCloud to only allow the minimum number of ports (UpCloud DNS, HTTP, HTTPS and My IP to port 22).  The firewall feature is charged at $0.0056 an hour ($4.03 a month)

I love the ability to set firewall rules on incoming, destination and outgoing ports.

To view your firewall, click on your deployed server then click Firewall.

UpCloud firewall

Update: I modified my firewall to allow inbound ICMP (IPv4/IPv6) and UDP (IPv4/IPv6) packets.

(Note: Old firewall screenshot)

Firewall Rules Allow port 80, 443 and DNS

Because my internet provider has a dynamic IP, I set up a VPN with a static IP and whitelisted it for backdoor access.

Local Ubuntu ufw Firewall

I duplicated the rules in my local ufw (2nd level) firewall (and blocked mail)

sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 80                         ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] 25                         DENY OUT    Anywhere                   (out)
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 22                         ALLOW IN    REMOVED (MY WHITELISTED IP)
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 25 (v6)                    DENY OUT    Anywhere (v6)              (out)
[10] 53                         ALLOW IN    2a04:3540:53::1
[11] 53                         ALLOW IN    2a04:3544:53::1
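For reference, rules like the ones above can be created with commands along these lines (the IPs below are placeholders, not my real whitelisted address or resolver addresses):

sudo ufw allow 80
sudo ufw allow 443
sudo ufw deny out to any port 25
sudo ufw allow from 203.0.113.10 to any port 22    # my whitelisted VPN IP (placeholder)
sudo ufw allow from 203.0.113.53 to any port 53    # DNS resolver (placeholder)
sudo ufw enable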

UpCloud Download Speeds

I pulled down a 1.8GB Ubuntu 18.04 Desktop ISO 3 times from gigenet.com and the file downloaded in 32 seconds (57MB/sec). Nice.

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:04-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso'

ubuntu-18.04-desktop-amd64.iso 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:02:37 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:46-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.1'

ubuntu-18.04-desktop-amd64.iso.1 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:19 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.1' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:03:23-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.2'

ubuntu-18.04-desktop-amd64.iso.2 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:56 (56.8 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.2' saved [1921843200/1921843200]

Install Common Ubuntu Packages

I installed common Ubuntu packages.

apt-get install zip htop ifstat iftop bmon tcptrack ethstatus speedometer iozone3 bonnie++ sysbench siege tree unzip jq ncdu pydf ntp rcconf ufw iperf nmap

Timezone

I checked the server's time (I thought this was auto-set before I deployed).

$hwclock --show
2018-06-06 23:52:53.639378+0000

I reset the time to Australia/Sydney.

dpkg-reconfigure tzdata
Current default time zone: 'Australia/Sydney'
Local time is now: Thu Jun 7 06:53:20 AEST 2018.
Universal Time is now: Wed Jun 6 20:53:20 UTC 2018.

Now the timezone is set 🙂

Shell History

I increased the shell history.

HISTSIZE=10000
HISTCONTROL=ignoredups
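The post does not say where these live; they normally go in ~/.bashrc (a sketch):

echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
source ~/.bashrc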

SSH Login

I created a ~/.ssh/authorized_keys file and added my SSH public key to allow password-less logins.

mkdir ~/.ssh
sudo nano ~/.ssh/authorized_keys

I added my public ssh key, then exited the ssh session and logged back in. I can now log in without a password.
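If the password-less login does not work, OpenSSH is usually complaining about permissions; tightening them like this normally fixes it:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys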

Install NGINX

apt-get install nginx

nginx/1.14.0 is now installed.

A quick GT Metrix test.

This image shows awesome static nginx performance ratings of of 99%

Install MySQL

Run these commands to install and secure MySQL.

apt install mysql-server
mysql_secure_installation

Securing the MySQL server deployment.
> Would you like to setup VALIDATE PASSWORD plugin?: n
> New password: **********************************************
> Re-enter new password: **********************************************
> Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
> Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
> Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
> Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
> Success.

I disabled the validate password plugin because I hate it.

MySQL Ver 14.14 Distrib 5.7.22 is now installed.

Set MySQL root login password type

Set MySQL root user to authenticate via “mysql_native_password”. Run the “mysql” command.

mysql
SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user | authentication_string | plugin | host |
+------------------+-------------------------------------------+-----------------------+-----------+
| root | | auth_socket | localhost |
| mysql.session | hiddden | mysql_native_password | localhost |
| mysql.sys | hiddden | mysql_native_password | localhost |
| debian-sys-maint | hiddden | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+----------

Now let’s set the root password authentication method to “mysql_native_password”

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '*****************************************';
Query OK, 0 rows affected (0.00 sec)

Check authentication method.

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user | authentication_string | plugin | host |
+------------------+-------------------------------------------+-----------------------+-----------+
| root | ######################################### | mysql_native_password | localhost |
| mysql.session | hiddden | mysql_native_password | localhost |
| mysql.sys | hiddden | mysql_native_password | localhost |
| debian-sys-maint | hiddden | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now we need to flush permissions.

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Done.

Install PHP

Install PHP 7.2

apt-get install software-properties-common
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install -y php7.2
php -v

PHP 7.2.5, Zend Engine v3.2.0 with Zend OPcache v7.2.5-1 is now installed. Do update PHP frequently.

I made the following changes in /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Install PHP Modules

sudo apt-get install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml

Install PHP FPM

apt-get install php7.2-fpm

Configure PHP FPM config.

Edit /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Reload the PHP FPM service and check its status.

sudo service php7.2-fpm restart
sudo service php7.2-fpm status


Configuring NGINX

If you are not comfortable editing NGINX config files read here, here and here.

I made a new “www root” folder, set permissions and created a default html file.

mkdir /www-root
chown -R www-data:www-data /www-root
echo "Hello World" >> /www-root/index.html

I edited the "root" key in the "/etc/nginx/sites-enabled/default" file and set the root to a new location (e.g., "/www-root")

I added these performance tweaks to /etc/nginx/nginx.conf

> worker_cpu_affinity auto;
> worker_rlimit_nofile 100000

I added the following lines to the "http {" section in /etc/nginx/nginx.conf

client_max_body_size 10M;

gzip on;
gzip_disable "msie6";
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_types
application/atom+xml
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
font/opentype
image/bmp
image/x-icon
text/cache-manifest
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
#text/html is always compressed by gzip module

gzip_proxied any;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

Check NGINX Status

service nginx status
* nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 21:16:28 AEST; 30min ago
Docs: man:nginx(8)
Main PID: # (nginx)
Tasks: 2 (limit: 2322)
CGroup: /system.slice/nginx.service
|- # nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
`- # nginx: worker process

Install Open SSL that supports TLS 1.3

This is a work in progress. The steps work just fine for me on Ubuntu 16.04 but not Ubuntu 18.04.

Installing Adminer MySQL GUI

I will use the PHP based Adminer MySQL GUI to export and import my blog from one server to another. All I needed to do was install it on both servers (a simple 1-file download).

cd /utils
wget -O adminer.php https://github.com/vrana/adminer/releases/download/v4.6.2/adminer-4.6.2-mysql-en.php

Use Adminer to Export My Blog (on Vultr)

On the original server open Adminer (http) and..

  1. Login with the MySQL root account
  2. Open your database
  3. Choose “Save” as the output
  4. Click on Export

This image shows the export of the wordpress adminer page

Save the “.sql” file.

I used Adminer on the UpCloud server to Import My Blog

FYI: Depending on the size of your database backup you may need to temporarily increase your upload and post sizes limits in PHP and NGINX before you can import your database.

Edit /etc/php/7.2/fpm/php.ini
> upload_max_filesize = 100M
> post_max_size = 100M

And Edit: /etc/nginx/nginx.conf
> client_max_body_size 100M;

Don’t forget to reload NGINX config and restart NGINX and PHP. Take note of the maximum allowed file size in the screenshot below. I temporarily increased my upload limits to 100MB in order to restore my 87MB blog.
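The reload and restart commands themselves are the same ones used elsewhere in this post:

sudo nginx -t && sudo nginx -s reload
sudo service php7.2-fpm restart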

Now I could open Adminer on my UpCloud server.

  1. Create a new database
  2. Click on the database and click Import
  3. Choose the SQL file
  4. Click Execute to import it

Import MySQL backup with Adminer

Don’t forget to create a user and assign permissions (as required – check your wp-config.php file).

Import MySQL Database

Tip: Don't forget to lower the maximum upload file size and max post size after you import your database.

Cloudflare DNS

I use Cloudflare to manage DNS, so I need to tell it about my new server.

You can get your server’s IP details from the UpCloud dashboard.

Find IP

At Cloudflare update your DNS details to point to the server’s new IPv4 (“A Record”) and IPv6 (“AAAA Record”).

Cloudflare DNS
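A quick way to confirm the new A and AAAA records have propagated (not part of my original steps, just a standard check):

dig +short fearby.com A
dig +short fearby.com AAAA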

Domain Error

I waited an hour and my website was suddenly unavailable.  At first, I thought this was Cloudflare forcing the redirection of my domain to HTTPS (which was not yet set up).

DNS Not Replicated Yet

I chatted with UpCloud support on their webpage and they kindly helped me diagnose all the common issues (DNS values, DNS replication, Cloudflare settings), and the error was pinpointed to my NGINX installation. All NGINX config settings looked OK from what we could see. I uninstalled NGINX and reinstalled it (and that fixed it). Thanks UpCloud Support 🙂

Reinstalled NGINX

sudo apt-get purge nginx nginx-common

I reinstalled NGINX and reconfigured /etc/nginx/nginx.conf (I downloaded my SSL cert from my old server just in case).

Here is my /etc/nginx/nginx.conf file.

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log /var/log/nginx/www-nginxcriterror.log crit;

events {
        worker_connections 768;
        multi_accept on;
}

http {

        client_max_body_size 10M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        server_tokens off;

        server_names_hash_bucket_size 64;
        server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/www-access.log;
        error_log /var/log/nginx/www-error.log;

        gzip on;

        gzip_vary on;
        gzip_disable "msie6";
        gzip_min_length 256;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

Here is my /etc/nginx/sites-available/default file (fyi, I have not fully re-setup TLS 1.3 yet so I commented out the settings)

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
server {
        root /www-root;

        # Listen Ports
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name www.fearby.com fearby.com localhost;

        # HTTPS Cert
        ssl_certificate /etc/nginx/ssl-cert-path/fearby.crt;
        ssl_certificate_key /etc/nginx/ssl-cert-path/fearby.key;
        ssl_dhparam /etc/nginx/ssl-cert-path/dhparams4096.pem;

        # HTTPS Ciphers
        
        # TLS 1.2
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

        # TLS 1.3			#todo
        # ssl_ciphers 
        # ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DES-CBC3-SHA;
        # ssl_ecdh_curve secp384r1;

        # Force HTTPS
        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        # HTTPS Settings
        server_tokens off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 30m;
        ssl_session_tickets off;
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
	#ssl_stapling on; 						# Requires nginx >= 1.3.7

        # Cloudflare DNS
        resolver 1.1.1.1 1.0.0.1 valid=60s;
        resolver_timeout 1m;

        # PHP Memory 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

	# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            try_files $uri =404;
            # include snippets/fastcgi-php.conf;

            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php7.2-fpm.sock;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # fastcgi_pass 127.0.0.1:9000;
	    }

        location / {
            # try_files $uri $uri/ =404;
            try_files $uri $uri/ /index.php?q=$uri&$args;
            index index.php index.html index.htm;
            proxy_set_header Proxy "";
        }

        # Deny Rules
        location ~ /\.ht {
                deny all;
        }
        location ~ ^/\.user\.ini {
            deny all;
        }
        location ~ (\.ini) {
            return 403;
        }

        # Headers
        location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }

}

SSL Labs SSL Certificate Check

All good thanks to the config above.

SSL Labs

Install WP-CLI

I don’t like setting up FTP to auto-update WordPress plugins. I use the WP-CLI tool to manage WordPress installations by the command line. Read my blog here on using WP-CLI.

Download WP-CLI

mkdir /utils
cd /utils
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Move WP-CLI to the bin folder as “wp”

chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

Test wp

wp --info
OS: Linux 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:15:17 UTC 2018 x86_64
Shell: /bin/bash
PHP binary: /usr/bin/php7.2
PHP version: 7.2.5-1+ubuntu18.04.1+deb.sury.org+1
php.ini used: /etc/php/7.2/cli/php.ini
WP-CLI root dir: phar://wp-cli.phar
WP-CLI vendor dir: phar://wp-cli.phar/vendor
WP_CLI phar path: /www-root
WP-CLI packages dir:
WP-CLI global config:
WP-CLI project config:
WP-CLI version: 1.5.1

Update WordPress Plugins

Now I can run “wp plugin update” to update all WordPress plugins

wp plugin update
Enabling Maintenance mode...
Downloading update from https://downloads.wordpress.org/plugin/wordfence.7.1.7.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wp-meta-seo.3.7.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wordpress-seo.7.6.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Disabling Maintenance mode...
Success: Updated 3 of 3 plugins.
+---------------+-------------+-------------+---------+
| name | old_version | new_version | status |
+---------------+-------------+-------------+---------+
| wordfence | 7.1.6 | 7.1.7 | Updated |
| wp-meta-seo | 3.7.0 | 3.7.1 | Updated |
| wordpress-seo | 7.5.3 | 7.6.1 | Updated |
+---------------+-------------+-------------+---------+

Update WordPress Core

WordPress core file can be updated with “wp core update“

wp core update
Success: WordPress is up to date.

Troubleshooting: Use the flag "--allow-root" if wp needs higher access (unsafe action though).

Install PHP Child Workers

I edited the following file to setup PHP child workers /etc/php/7.2/fpm/pool.d/www.conf

Changes

> pm = dynamic
> pm.max_children = 40
> pm.start_servers = 15
> pm.min_spare_servers = 5
> pm.max_spare_servers = 15
> pm.process_idle_timeout = 30s;
> pm.max_requests = 500;
> php_admin_value[error_log] = /var/log/www-fpm-php.www.log
> php_admin_value[memory_limit] = 512M

Restart PHP

sudo service php7.2-fpm restart

Test NGINX config, reload NGINX config and restart NGINX

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Output (14 workers are ready)

Check PHP Child Worker Status

sudo service php7.2-fpm status
* php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 19:32:47 AEST; 20s ago
Docs: man:php-fpm7.2(8)
Main PID: # (php-fpm7.2)
Status: "Processes active: 0, idle: 15, Requests: 2, slow: 0, Traffic: 0.1req/sec"
Tasks: 16 (limit: 2322)
CGroup: /system.slice/php7.2-fpm.service
|- # php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
- # php-fpm: pool www

Memory Tweak (set at your own risk)

sudo nano /etc/sysctl.conf

vm.swappiness = 1

Setting swappiness to a value of 1 all but disables the swap file and tells the operating system to aggressively use RAM; a value of 10 is safer. Only set this if you have enough memory available (and free).
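To apply the change without a reboot (a standard step, not shown above), you can either re-read /etc/sysctl.conf or set the value directly:

sudo sysctl -p                  # re-read /etc/sysctl.conf
sudo sysctl vm.swappiness=1     # or set it directly
cat /proc/sys/vm/swappiness     # verify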

Possible swappiness settings:

> vm.swappiness = 0 Swap is disabled. In earlier versions, this meant that the kernel would swap only to avoid an out of memory condition when free memory will be below vm.min_free_kbytes limit, but in later versions, this is achieved by setting to 1.[2]
> vm.swappiness = 1 Kernel version 3.5 and over, as well as Red Hat kernel version 2.6.32-303 and over: Minimum amount of swapping without disabling it entirely.
> vm.swappiness = 10 This value is sometimes recommended to improve performance when sufficient memory exists in a system.[3]
> vm.swappiness = 60 The default value.
> vm.swappiness = 100 The kernel will swap aggressively.

The "htop" tool is a handy memory monitoring alternative to "top".

Also, you can use good old “watch” command to show near-live memory usage (auto-refreshes every 2 seconds)

watch -n 2 free -m

Script to auto-clear the memory/cache

As a habit, I am setting up a cronjob to check when free memory falls below 100MB, then the cache is automatically cleared (freeing memory).

Script Contents: clearcache.sh

#!/bin/bash
# Script help inspired by https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
# Note: the original post only showed the first lines of this script; the rest below is a minimal sketch of the logic described above.
read -r total used free <<< "$(free -m | awk '/^Mem:/ {print $2, $3, $4}')"
if [ "$free" -lt 100 ]; then
    sync && echo 3 > /proc/sys/vm/drop_caches    # free memory is low, drop the page cache (needs root)
    echo "$(date '+%F %T') RAM LOW - cache cleared (Total: ${total} MB, Used: ${used} MB, Free: ${free} MB)"
else
    echo "$(date '+%F %T') RAM OK (Total: ${total} MB, Used: ${used} MB, Free: ${free} MB)"
fi

I set the cronjob to run every 15 minutes by adding this to my crontab.

SHELL=/bin/bash
*/15  *  *  *  *  root /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log

Sample log output

2018-06-10 01:13:22 RAM OK (Total: 1993 MB, Used: 981 MB, Free: 387 MB)
2018-06-10 01:15:01 RAM OK (Total: 1993 MB, Used: 974 MB, Free: 394 MB)
2018-06-10 01:20:01 RAM OK (Total: 1993 MB, Used: 955 MB, Free: 412 MB)
2018-06-10 01:25:01 RAM OK (Total: 1993 MB, Used: 1002 MB, Free: 363 MB)
2018-06-10 01:30:01 RAM OK (Total: 1993 MB, Used: 970 MB, Free: 394 MB)
2018-06-10 01:35:01 RAM OK (Total: 1993 MB, Used: 963 MB, Free: 400 MB)
2018-06-10 01:40:01 RAM OK (Total: 1993 MB, Used: 976 MB, Free: 387 MB)
2018-06-10 01:45:01 RAM OK (Total: 1993 MB, Used: 985 MB, Free: 377 MB)
2018-06-10 01:50:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 379 MB)
2018-06-10 01:55:01 RAM OK (Total: 1993 MB, Used: 979 MB, Free: 382 MB)
2018-06-10 02:00:01 RAM OK (Total: 1993 MB, Used: 980 MB, Free: 380 MB)
2018-06-10 02:05:01 RAM OK (Total: 1993 MB, Used: 971 MB, Free: 389 MB)
2018-06-10 02:10:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 376 MB)
2018-06-10 02:15:01 RAM OK (Total: 1993 MB, Used: 967 MB, Free: 392 MB)

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: There is a handy guide on viewing swap file usage here. I'm not using swap files so it is only an aside.

After the system rebooted I checked if the swappiness setting was active.

sudo cat /proc/sys/vm/swappiness
1

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system

sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

Take note of your disk name (e.g vda1)

I used tune2fs to enable writing data to the disk before writing to the journal. tune2fs is a great tool for setting file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may lose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.

tune2fs -o journal_data_writeback /dev/vda1

I edited my fstab to append the "writeback,noatime,nodiratime" flags for my volume so they persist after a reboot.

Edit FS Tab:

sudo nano /etc/fstab

I added “writeback,noatime,nodiratime” flags to my disk options.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options> <dump>  <pass>
# / was on /dev/vda1 during installation
#                <device>                 <dir>           <fs>    <options>                                             <dump>  <fsck>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /               ext4    errors=remount-ro,data=writeback,noatime,nodiratime   0       1

Updating Ubuntu Packages

Show updatable packages.

apt-get -s dist-upgrade | grep "^Inst"

Update Packages.

sudo apt-get update && sudo apt-get upgrade

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades

sudo apt-get install unattended-upgrades

Enable Unattended Upgrades.

sudo dpkg-reconfigure --priority=low unattended-upgrades

Now I configure what packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated, you may want to manually update these (and monitor updates).

I prefer not to auto-update critical system apps (I will do this myself).

Unattended-Upgrade::Package-Blacklist {
"nginx";
"nginx-common";
"nginx-core";
"php7.2";
"php7.2-fpm";
"mysql-server";
"mysql-server-5.7";
"mysql-server-core-5.7";
"libssl1.0.0";
"libssl1.1";
};

FYI: You can find installed packages by running this command:

apt list --installed

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end (the number is how many days to wait before updating) of each line.

> APT::Periodic::Update-Package-Lists "1";
> APT::Periodic::Download-Upgradeable-Packages "1";
> APT::Periodic::AutocleanInterval "7";
> APT::Periodic::Unattended-Upgrade "1";

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.

unattended-upgrade -d

Almost done.

I Rebooted

GT Metrix Score

I almost fell off my chair. It's an amazing feeling hitting refresh in GT Metrix and getting sub-2-second scores consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable "B"; this was always bad before.

Update: I found out the longer you set cache delays in Cloudflare the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google is pushing fast servers.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing, I never expected to get this high score.  I know Google like (and are pushing) sub-1-second scores.

My site is loading so well it is time I restored some old features that were too slow on other servers:

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat)

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows creating democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the shortcode
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset deliver (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress Emojis, removed the use of Google Fonts, allows you to define pre-configured domains, Async Javascript-files etc.
  • I use BJ Lazy Load to prevent all images in a post from loading on load (and only as the user scrolls down the page).
GTmetrix for WordPress and Cloudflare plugins are for information only.
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago

Looks good to me. Read IO is a little bit lower than UpCloud's Singapore data centre but still, it's faster than Vultr. I can't wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Answers to Questions to support

Q1) What’s the difference between backups and snapshots (a Twitter user said Snapshots were a thing)

A1) Backups and snapshots are the same things with our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06 / GB of the disk being captured. But we capture the whole disk, not just what was used. So for a 50GB drive, we charge $0.06 * 50 = $3/month, even if only 1GB was used.

  • Support confirmed that each backup is charged (so 5 times manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backup charges.
  • I guess a 25GB server will be $1.50 a month

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.

Q4) What happens if my balance hits $0?

A4) You will get notification of low account balance 2 weeks in advance based on your current daily spend. When your balance reaches zero, your servers will be shut down. But they will still be charged for. You can automatically top-up if you want to assign a payment type from your Control Panel. You deposit into your balance when you want. We use a prepaid model of payment, so you need to top up before using, not billing you after usage. We give you lots of chances to top-up.

Support Tips

  • One thing to note: when deleting server (CPU, RAM) instances, you get the option to delete the storage separately via a pop-up window. Choose to delete it permanently to remove the disk and save credit. Any disk storage lying around, even unattached to servers, will be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

Check out my new guide on Nixstats for awesome monitoring

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user-defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

Disable dashicons.min.css (for unauthenticated WordPress users).

Find functions.php in the www root

sudo find . -print |grep  functions.php

Edit functions.php

sudo nano ./wp-includes/functions.php

Add the following

// Remove dashicons in frontend for unauthenticated users
add_action( 'wp_enqueue_scripts', 'bs_dequeue_dashicons' );
function bs_dequeue_dashicons() {
    if ( ! is_user_logged_in() ) {
        wp_deregister_style( 'dashicons' );
    }
}

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my listening servers

server {
        root /www;

        ...
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;
        ...

I tested a http2 push page by defining this in /etc/nginx/sites-available/default 

location = /http2/push_demo.html {
        http2_push /http2/pushed.css;
        http2_push /http2/pushedimage1.jpg;
        http2_push /http2/pushedimage2.jpg;
        http2_push /http2/pushedimage3.jpg;
}

Once I tested that push (demo here) was working, I then defined two files to push that were being sent from my server.

location / {
        ...
        http2_push /wp-includes/js/jquery/jquery.js;
        http2_push /wp-content/themes/news-pro/images/favicon.ico;
        ...
}

I used the WordPress Plugin Autoptimize to remove Google font usage (this removed a number of files being loaded when my page loads).

I used the WordPress Plugin WP-Optimize plugin into to remove comments and disable pingbacks and trackbacks.

WordPress wp-config.php tweaks

# Memory
define('WP_MEMORY_LIMIT','1024M');
define('WP_MAX_MEMORY_LIMIT','1024M');
set_time_limit (60);

# Security
define( 'FORCE_SSL_ADMIN', true);

# Disable Updates
define( 'WP_AUTO_UPDATE_CORE', false );
define( 'AUTOMATIC_UPDATER_DISABLED', true );


Add 2FA Authentication to server logins.

I recently checked out YubiCo YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress too.

Tweaks Todo

  • Compress placeholder BJ Lazy Load Image (plugin is broken)
  • Solve 2x Google Analytics tracker redirects (done, switched to Matomo)

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that automatically handles image optimisations and pre Cloudflare/first hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

Results

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update: I have stopped using PuTTY and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than PuTTY. Read my review post of MobaXterm here.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v2.2 Converting to Blocks

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed a few typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

# Assumes "$ram_use" holds the multi-line output of "free -m" (captured earlier in the script)
IFS=$'\n' read -rd '' -a ram_use_arr <<< "$ram_use"
ram_use="${ram_use_arr[1]}"
ram_use=$(echo "$ram_use" | tr -s " ")
IFS=' ' read -ra ram_use_arr <<< "$ram_use"
ram_total="${ram_use_arr[1]}"
ram_used="${ram_use_arr[2]}"
ram_free="${ram_use_arr[3]}"
d=`date '+%Y-%m-%d %H:%M:%S'`
if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then
    echo "Sorry ram_free is not an integer"
else
    if [ "$ram_free" -lt "100" ]; then
        echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..."
        sync; echo 1 > /proc/sys/vm/drop_caches
        sync; echo 2 > /proc/sys/vm/drop_caches
        #sync; echo 3 > /proc/sys/vm/drop_caches #Not advised in production
        # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
        exit 1
    else
        if [ "$ram_free" -lt "256" ]; then
            echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
            exit 1
        else
            if [ "$ram_free" -lt "512" ]; then
                echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
                exit 1
            else
                echo "$d RAM GOOD (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
                exit 1
            fi
        fi
    fi
fi

I set the cron job to run every 15 minutes by adding an entry to my crontab.
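Something like this crontab entry would do it (the script path /scripts/clearcache.sh is an assumption; the log path matches the /scripts/clearcache.log file mentioned below):

*/15 * * * * /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log 2>&1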

 

Sample log output

 

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine and no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: There is a handy guide on viewing swap file usage here. I’m not using swap files, so this is only an aside.

After the system rebooted I checked if the swappiness setting was active.
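The original command is not shown, but either of these will display the current swappiness value:

cat /proc/sys/vm/swappiness
sudo sysctl vm.swappiness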

 

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system
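The original command is not shown; either of these lists your block devices and their file systems:

lsblk -f
df -hT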

 

Take note of your disk name (e.g vda1)

I used tune2fs to enable writing data to the disk before writing to the journal. tune2fs is a great tool for setting ext file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may loose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.
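The exact command is not shown; for the vda1 example mentioned above it would look something like this (adjust the device name to match yours):

sudo tune2fs -o journal_data_writeback /dev/vda1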

 

I edited my fstab to append the “writeback,noatime,nodiratime” flags for my volume so they apply after a reboot.

Edit FS Tab:

 

I added “writeback,noatime,nodiratime” flags to my disk options.
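For reference, an example fstab entry with those options appended (the device, file system and existing options are placeholders; the journal mode itself is set via data=writeback):

# /etc/fstab (example only - keep your existing device/UUID and options)
/dev/vda1  /  ext4  errors=remount-ro,noatime,nodiratime,data=writeback  0  1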

 

Updating Ubuntu Packages

Show updatable packages.
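The command is not shown; the usual way on Ubuntu is:

sudo apt update
apt list --upgradable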

 

Update Packages.
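Again the command is not shown; the standard upgrade command is:

sudo apt upgrade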

 

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades
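The package is called unattended-upgrades:

sudo apt install unattended-upgrades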

 

Enable Unattended Upgrades.
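Enabling is normally done with dpkg-reconfigure:

sudo dpkg-reconfigure --priority=low unattended-upgrades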

 

Now I configure which packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated; you may want to update these manually (and monitor updates).

I prefer not to auto-update critical system apps (I will do this myself).
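For example (the package names below are illustrations only, not my actual list):

Unattended-Upgrade::Package-Blacklist {
    "nginx";
    "mysql-server";
};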

 

FYI: You can find installed packages by running this command:
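The command is not shown; either of these lists installed packages:

apt list --installed
dpkg -l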

 

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end of each line (the number is how many days to wait between runs).

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.
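To trigger a run manually (the --dry-run flag shows what would be upgraded without applying anything):

sudo unattended-upgrade --dry-run --debug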

 

Almost done.

I Rebooted

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and getting a sub-2-second score consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; this was always bad before.

Update: I found out the longer you set cache delays in Cloudflare the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google is pushing fast servers.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected to get such a high score. I know Google likes (and is pushing) sub-1-second scores.

My site is loading so well that it is time I restored some old features that were too slow on other servers:

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat).

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows to create democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the shortcode
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use the EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use the Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress emojis, removes the use of Google Fonts, allows you to define pre-configured domains, async JavaScript files, etc.
  • I use BJ Lazy Load to prevent all images in a post from loading on load (and only as the user scrolls down the page).
  • GTmetrix for WordPress and Cloudflare plugins are for information only?
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.


Filed Under: CDN, Cloud, Cloudflare, Cost, CPanel, Digital Ocean, DNS, Domain, ExactDN, Firewall, Hosting, HTTPS, MySQL, MySQLGUI, NGINX, Performance, PHP, php72, Scalability, TLS, Ubuntu, UpCloud, Vultr, Wordpress Tagged With: draft, GTetrix, host, IOPS, Load Time, maxIOPS, MySQL, nginx, Page Speed Insights, Performance, php, SSD, ubuntu, UpCloud, vm

How to backup and restore a MySQL database on Windows and Linux

April 21, 2019 by Simon

Why backup and restore

This is a quick guide demonstrating how you can backup and restore a MySQL database on Windows and Linux using Adminer.

You may need to know how to back up and restore a database for a number of reasons, e.g.:

  • Send the database to someone to debug or give feedback while learning.
  • Move the database from a local machine to the cloud
  • Move the database from cloud vendor A to cloud vendor B
  • etc.

Having a backup of the VM is good, but having a backup of the database too is better. I use UpCloud for hosting my VMs and setting up backups is easy, but I cannot download those backups.

UpCloud Backup Screen

Murphy’s Law

“If anything can go wrong, it will”

The most important reason for taking a backup and knowing how to restore it is for disaster recovery reasons.

Backup (the easiest way) with Adminer

Adminer is a free PHP-based IDE for MySQL and other databases. Simply download Adminer and save the file to your local computer or a remote web server directory.

FYI: The Adminer author Jakub Vrana has a patron page; I am a patron of this awesome software.

Snip from Adminers website. “Adminer (formerly phpMinAdmin) is a full-featured database management tool written in PHP. Conversely to phpMyAdmin, it consist of a single file ready to deploy to the target server. Adminer is available for MySQL, MariaDB, PostgreSQL, SQLite, MS SQL, Oracle, Firebird, SimpleDB, Elasticsearch and MongoDB.”

adminer.php file icon screenshot

TIP: The file would be publicly accessible to anyone, so don’t save it to a common area; obfuscate the file name, protect it or delete the file when you are done using it.

Once Adminer is installed, load it in a web browser and log in with your MySQL credentials. Once you log in you will see all databases and an Import and Export menu.

Adminer main screen, all databases and import and export menu.

tbtest is a simple database with one table and 4 fields (ID, Key, Value and Modified)

Click Export to open the export screen.

Export screen showing a list of databases and export options

Click Export, a SQL file will be generated (this is the export of the database).

Here is a save of the file:
https://fearby.com/wp-content/uploads/export.txt

Exported view of https://dev.mysql.com/doc/workbench/en/wb-admin-export-import-management.html

It’s that simple.

If I add a binary blob field to the table and upload a PNG file, let’s see how the export looks.

Screenshot of the new table with a blob field in the Adminer UI

Let’s export the database again in Adminer and check out the output. I used the Sublime Text editor to view the export file.

New Export shows the binary file in the Backup SQL file

Restore (the easiest way) with Adminer

OK, let’s delete the tbtest database and then restore it with Adminer. I used Adminer to delete (DROP) the database.

Database dropped with Adminer

Database “dbtest” deleted.

Now let’s create a blank database to restore to (same name).

Create database screen.

Database created.

dbtest created.

Now let’s import the database backup using Adminer.

Click Import, select the backup file and un-tick Stop on errors.

Import screenshot, dxtest selected, restore file selected, stop on errors disabled

TIP: The 2MB limit next to the Choose File button is defined by your web server and PHP configuration. If you are trying to import a larger database (e.g. 80MB), first increase the limits in your web server and PHP (via php.ini).
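For reference, these are the sorts of settings involved (the 100M values are examples only):

; php.ini
upload_max_filesize = 100M
post_max_size = 100M

# NGINX (http or server block)
client_max_body_size 100M;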

The import (restore) should take seconds.

Import Success

The database was imported from a backup, all tables and records imported just fine.

The database was imported from a backup

Bonus methods.

On Ubuntu, use this guide to back up from the command line. If you use the Oracle MySQL Workbench, read this.
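For reference, a minimal command-line backup and restore looks like this (the user, database and file names are placeholders):

# Backup
mysqldump -u root -p dbtest > dbtest-backup.sql

# Restore into an existing (empty) database
mysql -u root -p dbtest < dbtest-backup.sql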

I hope this helps someone.

Filed Under: Adminer, Backup, Database, MySQL, Restore Tagged With: and, Backup, How, Linux, MySQL, on, restore, to, windows

Setting up the free MySQL database server on Windows 10

April 20, 2019 by Simon

This guide assumes you are a beginner using Windows 10 and maybe have a default website configured on Windows 10 using the built-in Internet Information Services (IIS) along with PHP (e.g. PHP 7.3.4 or greater).

If you have never used Internet Information Services (IIS), then XAMPP is a great way (a single install) to set up a web server, database server and PHP with little fuss.

In this case I will manually install MySQL Community Server to run alongside Internet Information Services (IIS) on Windows 10.

Downloading MySQL Server (Community Edition)

Go to here and click MySQL Installer for Windows.

Screenshot of Download installer link

Click MySQL Installer for Windows.

FYI: I sent myself on a wild goose chase as I could only see a 32-bit installer (I spent days trying to find a 64-bit installer or manual install binaries that were 64-bit). I should have read the bit that said “MySQL Installer is 32 bit, but will install both 32 bit and 64 bit binaries.”

MySQL Installer is 32 bit, but will install both 32 bit and 64 bit binaries.

You can read the installer documentation here if you wish.

I downloaded the larger of the two available installers (one 16MB, the other 300+ MB; same version etc.). I had to log in with an Oracle ID to start the download.

Download MySQL Installer screenshot

Install file downloaded

Installing MySQL Server (Community Edition)

I started the installer (accepted the licence agreement)

I accepted the licence agreement

I selected “Full Install“

I could have selected server only or custom.

I selected Full Install

I downloaded and installed Python 3.7. Thanks, MySQL Installer, for adding a link to the Python download.

I Installed Python 3.7

After Python was installed I clicked refresh in MySQL and now MySQL can see Python 3.7.

Now MySQL can see Python 3.7.

I had a Visual Studio plugin install error (because Visual Studio was not installed).

Visual Studio plugin install error (Visual Studio is not installed)

Full Install (all components selected) reported the items are ready for install.

Full Install (all components selected)

Installation status complete.

Installation status list (Full Install)

List of items to configure (post install)

List of items to configure.

I setup a standard MySQL Server (not a Cluster)

I setup a standard MySQL Server (not a Cluster)

I set up MySQL as a standard development computer on port 3306 over TCP/IP (no Named Pipe, Shared Memory etc).

I setup MySQL as a standard Development computer on port 3306 over TCP/IP.

I enforced strong passwords.

I enforced strong passwords.

I set a root password and added a few app usernames (with passwords).

I set a root password and a few app usernames (with passwords)

I named the MySQL instance and set it to auto-start when Windows starts as a standard system account.

I named the MySQL Instance and set it to auto start when windows starts as a standard account.

Post installation in progress.

Installation in progress.

I accepted the defaults for the next screen (configuring routers). I tested the connection to MySQL Server.

Connect to MySQL server test screen.

Installation complete.

Installation complete screen.

MySQL WorkBench

I opened the MySQL Workbench software and viewed the server instance information. Nice.

MySQL Workbench Instance information

MySQL Workbench server performance graphs are nice.

MySQL Workbench performance graphs.

I am used to Adminer for managing MySQL on Linux, so I will install that now.

Install Adminer (formerly phpMinAdmin)

I prefer the PHP-based Adminer database management tool from here.

https://adminer.net website  screenshot.

I downloaded a single PHP file and placed it in my IIS website root folder (as /adminer.php).

I tried to connect to my MySQL instance but received this error “The server requested authentication method unknown to the client”.

Unable to connect to MySQL with Adminer: Error "The server requested authentication method unknown to the client"

I googled and checked that I had a MySQL extension in php.ini (It did).

I opened MySQL Workbench, opened Manage Server Connections and located my my.ini file (“C:\ProgramData\MySQL\MySQL Server 8.0\my.ini“). I opened my.ini in a text editor and commented out the following line

#default_authentication_plugin=caching_sha2_password

and added

default_authentication_plugin= mysql_native_password

I saved the my.ini file, stopped and started the MySQL Service.

MySQL Workbench restarting database service UI screenshot.

I opened the MySQL Workbench and ran a query (File then “New Query Tab“) to set/reset the password.

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password
BY 'your_password_goes_here';  

Screenshot

Reset root password SQL statement "ALTER USER

I then tested logging in to MySQL with Adminer.

Adminer view showing default databases installed by MySQL.

Success, I can now create databases and tables.

Adminer create database table screenshot showing a table with 4 fields.

I hope this helps someone.

Filed Under: Database, MySQL, PHP Tagged With: MySQL, php, Setup

Setup a dedicated Debian subdomain (VM), Install MySQL 14 and connect to it from a WordPress on a different VM

July 21, 2018 by Simon

This is how I set up a dedicated Debian subdomain (VM), installed MySQL 14 and connected to it from a WordPress installation on a different VM.

Aside

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving fearby.com from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Now on with the post.

Fearby.com

I will be honest, fearby.com is my play server where I can code, learn about InfoSec and share (It’s also my stroke rehab blog).

There is no faster way to learn than actually doing. The problem is my “doing” usually breaks the live site from time to time (sorry).

I really need to set up a testing environment (DEV-TEST-LIVE or GREEN-BLUE) server(s). GREEN-BLUE has advantages as I can always have a hot spare ready. All I need to do is toggle DNS and I can set the GREEN or BLUE server as the live server.

But first I need to separate my database from my current fearby.com server and set up a new web server. Having a Green and a Blue server that use one database server will help with near real-time production website switches.

Dedicated Database Server

I read the following ( Should MySQL and Web Server share the same server? ) at Percona Database Performance Blog. Having a separate database server should not negatively impact performance (It may even help improve speeds).

Deploy a Debian VM (not Ubuntu)

I decided to set up a Debian server instead of Ubuntu (mostly because of Debian’s focus on stability and security).

I logged into the UpCloud dashboard on my mobile phone and deployed a Debian server in 5 mins.  I will be using my existing how to setup Ubuntu on UpCloud guide (even though this is Debian).

TIP: Sign up to UpCloud using this link to get $25 free UpCloud VM credit.

Deploy Debian Server

Deploy a Debian server setup steps:

  1. Login to UpCloud and go to Create server.
  2. Name your Server (use a fully qualified domain name)
  3. Add a description.
  4. Choose your data centre (Chicago for me)
  5. Choose the server specs (1x CPU, 50GB Disk, 2GB Memory, 2TB Traffic for me)
  6. Name the Primary disk
  7. Choose an operating system (Debian for me)
  8. Select an SSH Key
  9. Choose misc settings
  10. Click Deploy server

After 5 mins your server should be deployed.

After Deploy

Setup DNS

Log in to your DNS provider and create DNS records for the new IPs (IPv4 and IPv6) provided by UpCloud. It took 12 hours for DNS to replicate to me in Australia.

Add a DNS record with your domain registrar: A Name = IPv4 and AAAA Name = IPv6

Setup a Firewall (at UpCloud)

I would recommend you set up a firewall at UpCloud as soon as possible (don’t forget to add the recommended UpCloud DNS IPs and any whitelisted IPs to your firewall).

Block everything and only allow

  • Port 22: Allow known IP(s) of your ISP or VPN.
  • Port 53: Allow known UpCloud DNS servers
  • Port 80 (ALL)
  • Port 443 (ALL)
  • Port 3306 Allow your WordPress site and known IP(s) of your ISP or VPN.

Read my post on setting up a whitelisted IP on an UpCloud VM… as it is a good idea.

UpCloud thankfully has a copy firewall feature that is very handy.

Copy Firewall rules option at UpCloud

After I set up the firewall I SSH’ed into my server (I use vSSH on OSX but you could use PuTTY).

I updated the Debian system with the following  command

sudo apt update

Get the MySQL Package

Visit http://repo.mysql.com/ and get the URL of the latest apt-config repo deb file (e.g “mysql-apt-config_0.8.9-1_all.deb”). Make a temp folder.

mkdir /temp
cd /temp

Download the MySQL deb Package

wget http://repo.mysql.com/mysql-apt-config_0.8.9-1_all.deb

Install the package

sudo dpkg -i mysql-apt-config_0.8.9-1_all.deb

Update the system again

sudo apt update

Install MySQL on Debian

sudo apt install mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libaio1 libatomic1 libmecab2 mysql-client mysql-common mysql-community-client mysql-community-server psmisc
The following NEW packages will be installed:
libaio1 libatomic1 libmecab2 mysql-client mysql-common mysql-community-client mysql-community-server mysql-server psmisc
0 upgraded, 9 newly installed, 0 to remove and 1 not upgraded.
Need to get 37.1 MB of archives.
After this operation, 256 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-community-client amd64 5.7.22-1debian9 [8886 kB]
Get:2 http://deb.debian.org/debian stretch/main amd64 mysql-common all 5.8+1.0.2 [5608 B]
Get:3 http://deb.debian.org/debian stretch/main amd64 libaio1 amd64 0.3.110-3 [9412 B]
Get:4 http://deb.debian.org/debian stretch/main amd64 libatomic1 amd64 6.3.0-18+deb9u1 [8966 B]
Get:5 http://deb.debian.org/debian stretch/main amd64 psmisc amd64 22.21-2.1+b2 [123 kB]
Get:6 http://deb.debian.org/debian stretch/main amd64 libmecab2 amd64 0.996-3.1 [256 kB]
Get:7 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-client amd64 5.7.22-1debian9 [12.4 kB]
Get:8 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-community-server amd64 5.7.22-1debian9 [27.8 MB]
Get:9 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-server amd64 5.7.22-1debian9 [12.4 kB]
Fetched 37.1 MB in 12s (3023 kB/s)
Preconfiguring packages ...
Selecting previously unselected package mysql-common.
(Reading database ... 34750 files and directories currently installed.)
Preparing to unpack .../0-mysql-common_5.8+1.0.2_all.deb ...
Unpacking mysql-common (5.8+1.0.2) ...
Selecting previously unselected package libaio1:amd64.
Preparing to unpack .../1-libaio1_0.3.110-3_amd64.deb ...
Unpacking libaio1:amd64 (0.3.110-3) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../2-libatomic1_6.3.0-18+deb9u1_amd64.deb ...
Unpacking libatomic1:amd64 (6.3.0-18+deb9u1) ...
Selecting previously unselected package mysql-community-client.
Preparing to unpack .../3-mysql-community-client_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-community-client (5.7.22-1debian9) ...
Selecting previously unselected package mysql-client.
Preparing to unpack .../4-mysql-client_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-client (5.7.22-1debian9) ...
Selecting previously unselected package psmisc.
Preparing to unpack .../5-psmisc_22.21-2.1+b2_amd64.deb ...
Unpacking psmisc (22.21-2.1+b2) ...
Selecting previously unselected package libmecab2:amd64.
Preparing to unpack .../6-libmecab2_0.996-3.1_amd64.deb ...
Unpacking libmecab2:amd64 (0.996-3.1) ...
Selecting previously unselected package mysql-community-server.
Preparing to unpack .../7-mysql-community-server_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-community-server (5.7.22-1debian9) ...
Selecting previously unselected package mysql-server.
Preparing to unpack .../8-mysql-server_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-server (5.7.22-1debian9) ...
Setting up libatomic1:amd64 (6.3.0-18+deb9u1) ...
Setting up psmisc (22.21-2.1+b2) ...
Setting up mysql-common (5.8+1.0.2) ...
update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up libmecab2:amd64 (0.996-3.1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up libaio1:amd64 (0.3.110-3) ...
Processing triggers for systemd (232-25+deb9u4) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up mysql-community-client (5.7.22-1debian9) ...
Setting up mysql-client (5.7.22-1debian9) ...
Setting up mysql-community-server (5.7.22-1debian9) ...
update-alternatives: using /etc/mysql/mysql.cnf to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Created symlink /etc/systemd/system/multi-user.target.wants/mysql.service -> /lib/systemd/system/mysql.service.
Setting up mysql-server (5.7.22-1debian9) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for systemd (232-25+deb9u4) ...

Secure MySQL

You can secure the MySQL server deployment (set options as needed)

sudo mysql_secure_installation

Enter password for user root:
********************************************
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No: No
Using existing password for root.
Change the password for root ? ((Press y|Y for Yes, any other key for No) : No

... skipping.
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : Yes
Success.

Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : No

... skipping.
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : Yes
- Dropping test database...
Success.

- Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : Yes
Success.

All done!

Install NGINX

I installed NGINX to allow Adminer MySQL GUI to be used

I ran these commands to install NGINX.

sudo apt update
sudo apt upgrade
sudo apt-get install nginx

I edited my NGINX configuration as required.

  • Set a web server root
  • Set desired headers
  • Optimized NGINX (see past guides here, here and here)

I reloaded NGINX

sudo nginx -t
sudo nginx -s reload
sudo systemctl restart nginx

Install PHP

I followed this guide to install PHP on Debian.

sudo apt update
sudo apt upgrade

sudo apt install ca-certificates apt-transport-https
wget -q https://packages.sury.org/php/apt.gpg -O- | sudo apt-key add -
echo "deb https://packages.sury.org/php/ stretch main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update
sudo apt install php7.2
sudo apt install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml php7.2-cli php7.2-common

Install PHP FPM

apt-get install php7.2-fpm

Increase Upload Limits

You may need to temporarily increase upload limits in NGINX and PHP before you can restore a WordPress database. My fearby.com blog is about 87MB.

Add “client_max_body_size 100M;” to “/etc/nginx/nginx.conf”

Add the following to “/etc/php/7.2/fpm/php.ini”

  • post_max_size = 100M
  • upload_max_filesize = 100M

Restore a backup of my MySQL database in MySQL

You can now use Adminer to restore your blog to MySQL. Read my post on Adminer here. I used Adminer to move my WordPress site from CPanel to a self-managed server a year ago.

First login to your source server and export your desired database then login to the target server and import the database.

Firewall Check

Don’t forget to allow your WordPress site’s 2x Public IP’s and 1x Private IP to access port 3306 in your UpCloud Firewall.

How to check open ports on your current server

sudo netstat -plunt

Set MySQL Permissions

Open MySQL

mysql --host=localhost --user=root --password=***************************************************************************

I ran these statements to grant the user logging in from the nominated IPs access to MySQL.

mysql>
GRANT ALL ON databasename.* TO 'username'@'Server1PublicIPv4' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'username'@'Server1PrivateIP' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'username'@'Server2PublicIPv4' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'username'@'Server2PrivateIP' IDENTIFIED BY '***********sql*user*password*************';

Reload permissions in MySQL

FLUSH PRIVILEGES;

Allow access to the Debian machine from known IP’s

Edit “/etc/hosts.allow”

Additions (known safe IP’s that need access to this MySQL remotely).

mysqld : IPv4Server1PublicAddress : allow
mysqld : IPv4Server1PrivateAddress : allow
mysqld : IPv4Server2PublicAddress : allow
mysqld : IPv4Server2PrivateAddress : allow

mysqld : ALL : deny

Tell MySQL which address to listen on

Edit “/etc/mysql/my.cnf”

Added..

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
language = /usr/share/mysql/English
bind-address = DebianServersIntenalIPv4Address

I guess you could change the port to something random???

Restart MySQL

sudo service mysql restart

Install a second local firewall on Debian

Install ufw

sudo apt-get install ufw

Do add the IP of your desired server or VPN to access SSH

sudo ufw allow from 123.123.123.123 to any port 22

Do add the IP of your desired server or VPN to access WWW

sudo ufw allow from 123.123.123.123 to any port 80

Now add the known IP’s (e.g any web servers public (IPv4/IPv6) or Private IP’s) that you wish to grant access to MySQL (e.g the source website that used to have MySQL)

sudo ufw allow from 123.123.123.123 to any port 3306

Do add UpCloud DNS Servers to your firewall

sudo ufw allow from 94.237.127.9 to any port 53
sudo ufw allow from 94.237.40.9 to any port 53
sudo ufw allow from 2a04:3544:53::1 to any port 53
sudo ufw allow from 2a04:3540:53::1 to any port 53

Add all other rules as needed (if you stuff up and lock yourself out, you can log in to the server with the Console on UpCloud).

Restart the ufw firewall

sudo ufw disable
sudo ufw enable

Prevent MySQL starting on the source server

Now we can shut down MySQL on the source server (leave it there just in case).

Edit “/etc/init/mysql.conf”

Comment out the line that contains “start on ” and save the file

and run

sudo systemctl disable mysql

Reboot

shutdown -r now

Stop and Disable NGINX on the new DB server

We don’t need NGINX running now that the database has been imported with Adminer.

Stop and prevent NGINX from starting up on startup.

/etc/init.d/nginx stop
sudo update-rc.d -f nginx disable
sudo systemctl disable nginx

Check to see if MySQL is Disabled

service mysql status
* mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; disabled; vendor preset: enabled)
Active: inactive (dead)

Yep

Test access to the database server in PHP code

Add to dbtest.php

<em>SELECT guid FROM wp_posts</em>()<br />
<ul><?php

//External IP (charged after quota hit)
//$servername = 'db.yourserver.com';

//Private IP (free)
$servername = '10.x.x.x';

$username = 'username';
$password = '********your*password*********';
$dbname = 'database';

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

$sql = 'SELECT guid FROM wp_posts';
$result = $conn->query($sql);

if ($result->num_rows > 0) {
    // output data of each row
    while($row = $result->fetch_assoc()) {
        echo $row["guid"] . "<br>";
    }
} else {
    echo "0 results";
}
$conn->close();
?></ul>
Done

Check for open ports.

You can install nmap on another server and scan for open ports

Install nmap

sudo apt-get install nmap

Scan a server for open ports with nmap

You should see this on a server that has access to port 3306 (port 3306 should not be visible to non-whitelisted IPs, and it should certainly not be open to everyone).

sudo nmap -PN db.yourserver.com

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-20 14:15 UTC
Nmap scan report for db.yourserver.com (IPv4IP)
Host is up (0.0000070s latency).
Other addresses for db.yourserver.com (not scanned): IPv6IP
Not shown: 997 closed ports
PORT     STATE SERVICE
3306/tcp open  mysql

You should see something like this on a server that has access to see port 80/443 (a web server)

sudo nmap -PN yourserver.com

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-20 14:18 UTC
Nmap scan report for db.yourserver.com (IPv4IP)
Host is up (0.0000070s latency).
Other addresses for db.yourserver.com (not scanned): IPv6IP
Not shown: 997 closed ports
PORT     STATE SERVICE
80/tcp   open  http
443/tcp   open  https

I’d recommend you use a service like https://pentest-tools.com/network-vulnerability-scanning/tcp-port-scanner-online-nmap# to check for open ports.  https://hackertarget.com/tcp-port-scan/ is a great tool too.

https://www.infobyip.com/tcpportchecker.php is also a free port checker that you can use to verify individual closed ports.

Screenshot of https://www.infobyip.com/tcpportchecker.php

Hardening MySQL and Debian

Read: https://www.debian.org/doc/manuals/securing-debian-howto/ap-checklist.en.html

Configuring WordPress to use the dedicated Debian VM

On the source server that used to have MySQL edit your wp-config.php file for WordPress.

Remove

define('DB_HOST', 'localhost');

add (read the update below, I changed the DNS IP to the Private IP to have free traffic)

//Original localhost
//define('DB_HOST', 'localhost');

//New external host via DNS (Charged after quota hit)
//define('DB_HOST', 'db.fearby.com');

//New external host via Private IP (Free)
define('DB_HOST','10.x.x.x');

Restart NGINX

sudo nginx -t
sudo nginx -s reload
sudo systemctl restart nginx

Restart PHP-FPM

service php7.2-fpm restart

Conclusion

Nice, I seem to have shaved off 0.3 seconds in load times (25% improvement)

1sec gtmtrix load time

Update: Using a Private IP or Public IP between WordPress and MySQL servers

After I released this blog post (version 1.0 with no help from UpCloud) UpCloud contacted me and said the following.

Hello Simon,

I notice there's no mention of using the private network IPs. Did you know that we automagically assign you one when you deploy with our templates. The private network works out of the box without additional configuration, you can use that communicate between your own cloud servers and even across datacentres.

There's no bandwidth charge when communicating over private network, they do not go through public internet as well. With this, you can easily build high redundant setups.

Let me know if you have any other questions.

--
Kelvin from UpCloud

I have updated my references in this post to replace the public IP address (the one linked to the DNS record for db.fearby.com) and instead use the private IP address (e.g. 10.x.x.x); your server’s private IP address is listed against the public IPv4 and IPv6 addresses.

I checked that the local ufw firewall did indeed allow the private IP access to MySQL.

sudo ufw status numbered |grep 10.x.x.x
[27] 3306                       ALLOW IN    10.x.x.x

On my new Debian MySQL server, I edited the file /etc/mysql/my.cnf and changed the IP to the private IP and not the public IP.

Now it looked like

[mysqld]
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
language        = /usr/share/mysql/English
bind-address    = 10.x.x.x

(10.x.x.x is my Debian server’s private IP)

On my WordPress instance, I edited the file  /www-root/wp-config.php

I added the new private host

//Original localhost
//define('DB_HOST', 'localhost');

//New external host via DNS (Charged after quota hit)
//define('DB_HOST', 'db.fearby.com');

//New external host via Private IP (Free)
define('DB_HOST','10.x.x.x');

(10.x.x.x is my Debian server’s private IP)

Also, on the Debian/MySQL server, ensure you have granted access to the private IP of the WordPress server.

Edit /etc/hosts.allow

Add

mysqld : 10.x.x.x : allow

Restart MySQL

sudo systemctl restart mysql

TIP: Enable UpCloud Backups

Do set up automatic backups (and/or take manual backups). Backups are an extra charge but are essential IMHO.

UpCloud backups

Troubleshooting

If you can’t access MySQL, log back into MySQL

mysql --host=localhost --user=root --password=***************************************************************************

and run

GRANT ALL PRIVILEGES ON *.* TO 'username'@'%' IDENTIFIED BY '***********sql*user*password*************'; FLUSH PRIVILEGES;

Reboot

Lower Upload Limits

Don’t forget to lower file upload sizes in NGINX and PHP (e.g 2M) now that the database has been restored.

I hope this guide helps someone.

TIP: Sign up to UpCloud using this link to get $25 free UpCloud VM credit.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.6 Changed public IP use to private IP use to ensure we are not charged when the server usage goes over the quota

v1.5 Fixed 03 typo (should have been 0.3)

v1.4 added disable nginx info

v1.3 added https://www.infobyip.com/tcpportchecker.php

v1.1 added https://hackertarget.com/tcp-port-scan/

v1.0 Initial Post

Filed Under: Debian, MySQL, VM, Wordpress Tagged With: 14, a, and, Connect, debian, dedicated, different, from, install, MySQL, Setup, Subdomain, to, vm, wordpress

How to backup WordPress on a host that has CPanel

April 29, 2018 by Simon

This is a simple guide that demonstrates how you can log in to a host that offers the CPanel tools to back up all of your website files (and databases). Backing up your website should be done often, and especially before you migrate to another website host. I used to change hosts every few years (they don’t own your site, you do).

I have a number of guides on moving away from CPanel, setting up VMs on UpCloud, AWS, Vultr or Digital Ocean, along with installing and managing WordPress from the command line, copying files to a server via the command line, editing remote files locally, etc. But how do you manage a website with CPanel?

You can normally log in to CPanel tools on a shared host by loading www.yourdomainname.com/cpanel (failing that, log in to your domain host’s web GUI and find your CPanel interface there).

Step 1: Login to your Host

Login to your web host

Login to Host

Step 2: Find your CPanel Interface

Hosts are all a bit different, but in this case I just click my domain to find the CPanel link.

Domain

I found it, I clicked the CPanel login.

Cpanel link

Step 3: CPanel Applications

CPanel does offer good tools to manage your websites, like the web-based File Manager and a database tool called phpMyAdmin.

CPanel Options

Aside: CPanel/Hosts Downsides

The thing I don’t like about hosts that offer CPanel is they usually limit delivery of your website to extract more money. Nothing worse than receiving Resource Limit Is Reached errors.

cpenal_usage_exceeded

Also shared hosts usually lag way behind in newer software versions like PHP and MySQL (this is a security concern).

TIP: You can scan your site for vulnerabilities using Qualsys Freescan, Zap or Kali Linux.

Here is a security scan of a shared host (with CPanel) that I was using in 1999. Note the high vulnerabilities and old version of Linux.

Static Website

Also, a shared host will often overcharge you (e.g $150 a year) for a poorly configured SSL certificate.

This was an SSL cert I paid $150 a year for (evaluated with SSL Labs SSL Test) on a shared host with CPanel.

Aside: Self Managed Upsides

After I moved my domain to a self-managed virtual machine I migrated WordPress, set up a free SSL certificate, sped up my site with a CDN, set up Cloudflare, set up better TLS security, etc.

When you manage your own server you can install a free SSL certificate in under 1 minute.

Below is my SSL certificate. A strong SSL certificate will increase search engine traffic

SSL Cert

Aside: Compare Shared host speed v Self Managed

FYI: https://gtmetrix.com/ is a great site for measuring the speed of a website (shared or self-managed). I found great speed improvements after moving away from a host offering CPanel, tweaking the server and setting up Cloudflare. A self-managed server will allow you to tweak anything you want.

GTMetrix results:

Shared Host Speed

I like how self-managed servers allow you to scale the server’s resources yourself, move servers or add storage etc.

Aside: SSL Certificate

If you have an SSL cert you should test it often as vulnerabilities pop up from time to time.

FYI: All sites will soon require an SSL certificate to be sent traffic from search engines (no SSL = lower traffic).

SSL Test my site: https://dev.ssllabs.com/ssltest/analyze.html?d=fearby.com&s=104.27.154.69

Now enough with the self-managed server asides and back to how to back up your website with CPanel tools.

Step 4: Backup your web files in CPanel

Use the File Explorer app in CPanel

File Manager

Highlight all files that you want to back up (highlight everything, but not past backup files).

backup-cpanel-006

View the files to compress summary

Compress Summary

Click Compress Files(s) and view the backup progress

Backup Progress

You can now download the backup zip file in your browser (click the file and click Download).

Download Zip

Download Progress.

Download File

Step 5: Backup your database in CPanel

Now we need to back up any MySQL database(s) that may be used by WordPress.

Open the phpMyAdmin app in CPanel.

phpMyAdmin

FYI: Alternatively, you can use a free tool called Adminer to back up and restore your database.

Click your WordPress database (on the left). You can identify your current WordPress database by opening the wp-config.php file.
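The database name is on the DB_NAME line of wp-config.php, for example ('your_database_name' is a placeholder):

define('DB_NAME', 'your_database_name');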

Database Options

The first step is to perform an online cold backup of the WordPress database.

  1. Click the database you want to backup (on the left)
  2. Click the Options tab
  3. Under the Copy To frame, enter a new database name to copy the database to
  4. Click Go

Online Backup

Now you have an online cold spare that you can use just in case the original database corrupts itself. You can rename the database or configure WordPress to point to this new database if need be.

backup-cpanel-013

Now let’s download a copy of the database (Repeat for multiple databases).

  1. Click the database you want to backup
  2. Click the Export tab
  3. Click Go

backup-cpanel-014

You should now have a backup of your website in a zip file and an export of your database in a .sql text file. SQL files can be re-imported to databases later.

TIP: Backup often.

Export Done

I hope this guide helps someone.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.0 Initial post

Filed Under: CPanel Tagged With: a, Adminer, Backup, cpanel, has, host, How, MySQL, on, that, to, wordpress, zip

Using the free Adminer GUI for MySQL on your website

February 8, 2018 by Simon

Adminer is a free GUI tool that you can easily install on a PHP web server. Adminer allows you to easily connect to your MySQL instance, create databases/tables/indexes/rows and backup/import databases and much more.

You can read my other posts on Useful Linux Terminal Commands and Useful OSX Terminal Commands.

I used to use phpMyAdmin to manage MySQL databases on AWS, Digital Ocean and Vultr but switched to Adminer due to issues I have since forgotten. You can always manage MySQL via the command line but that is quite boring.

The below screenshots were taken on my local development Mac laptop (with an optional OSX Apache SSL setup that reports “Not Secure”, but it is good enough to use locally). I prefer to code over SSL and warn when SSL is not detected.

Downloading and Installing Adminer

Navigate to https://www.adminer.org/ and click Download.

Adminer GUI

Click English only (.php file)

Adminer

Save the Adminer for MySQL (.php) file to your web server, give it a random name and put it in a folder also with a random name (I use https://www.grc.com/passwords.htm to generate strong passwords).

Tip: Uploading this file to a live server gives hackers and unauthorized people a potential way into your MySQL server. To be safe, remove the file from live servers when you are not using it.

Tip: Read my guide on setting up NGINX, MySQL and PHP here. Basically, I ran the following to set up MySQL on Ubuntu 16.04.

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
>mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper
sudo mysql_secure_installation
>Y (Validate plugin)
>2 (Strong passwords)
>N (Don't change root password)
>Y (Remove anon accounts)
>Y (No remote root login)
>Y (Remove test DB)
>Y (Reload)
service mysql status
> mysql.service - MySQL Community Server

TIP: Ensure MySQL is secure and has a good root password; also consider setting up Ubuntu firewalls and securing Ubuntu. Also, ensure the server is patched against exploits like Spectre and Meltdown.

Now you can access your Adminer PHP file on your web server (hopefully with an obfuscated name).

Randomize

Login to Adminer with your MySQL root password.

Login

Click Create database

Create Database

Give the database a name and choose a character set/collation (e.g. utf8_general_ci). Different collations have different performance impacts too.

Save
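
For reference, the GUI step above does roughly the same job as this SQL (the database name is just an example):

CREATE DATABASE dbtest CHARACTER SET utf8 COLLATE utf8_general_ci;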

Now that you have a database you can create a table.

Adminer

Consider adding an auto-incrementing ID column plus, say, Key and Value varchar columns (see the SQL sketch below).

Adminer
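
The equivalent SQL would look something like this (the table and column names are illustrative and match the PHP example further down):

CREATE TABLE tbtest (
  testid INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  testkey VARCHAR(255),
  testvalue VARCHAR(255)
);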

When the table is created you can add a row to the table.

Adminer

I created a row with "TestKey" and "TestValue" values.

Adminer

The row was inserted.

Adminer

The final thing to do is add a database user that your code can use to connect to the database. Click Privileges.

Adminer

Click Create user

Adminer

Tick All privileges and click Save

Adminer

Now the user is added to the database

Adminer
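
Behind the scenes this is roughly equivalent to the following SQL (the user, database and password are placeholders; in practice, grant only the privileges your code actually needs):

CREATE USER 'dbtestuser'@'localhost' IDENTIFIED BY 'a-strong-random-password';
GRANT ALL PRIVILEGES ON dbtest.* TO 'dbtestuser'@'localhost';
FLUSH PRIVILEGES;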

Let’s create a PHP file and talk to the database using parameterized queries.

<?php

date_default_timezone_set('Australia/Sydney');
echo "Last modified: " . date ("F d Y H:i:s.", getlastmod()) . "<br /><br />";

// Turn these on if you need to see errors
// error_reporting(E_ALL);
// ini_set('display_errors', 1);

$dbhost = '127.0.0.1';
$dbname = 'dbtest';
$dbusername = 'dbtestuser';
$dbpassword = '*****************************************';

$con = mysqli_connect($dbhost, $dbusername, $dbpassword, $dbname);
 
// Turn on debug stuff if you need it
// echo var_dump($con);
// printf(" - Error: %s.\n", $stmt->error);
 
if($con->connect_errno > 0){

    printf(" - Error: %s.\n", $con->connect_error);
    die("Error: Unable to connect to MySQL");

} else {

    echo "Charset set to utf8<br />";
    mysqli_set_charset($con,"utf8");
}
 
if (!$con) {

    echo "Error: Unable to connect to MySQL (E002)" . PHP_EOL;
    echo "Debugging errno: " . mysqli_connect_errno() . PHP_EOL;
    echo "Debugging error: " . mysqli_connect_error() . PHP_EOL;
    exit;

} else {

    echo "Database Connection OK<br />";
 
    echo "&nbsp; Success: A proper connection to MySQL was made! The $dbname database is great." . PHP_EOL . "<br />";
    echo "&nbsp; &nbsp;- Host information: " . mysqli_get_host_info($con) . PHP_EOL . "<br />";
    echo "&nbsp; &nbsp;- Server Info: '" . mysqli_get_server_info($con) . "'<br />";
    echo "&nbsp; &nbsp;- Server Protocol Info : ". mysqli_get_proto_info($con) . "<br />";
    echo "&nbsp; &nbsp;- Server Version: " . mysqli_get_server_version($con) . "<br />";
    //echo " - Server Connection Stats: " . print_r(mysqli_get_connection_stats($con)) . "<br />";
    echo "&nbsp; &nbsp;- Client Version: " . mysqli_get_client_version($con) . "<br />";
    echo "&nbsp; &nbsp;- Client Info: '" . mysqli_get_client_info() . "'<br />";
 
    echo "Ready to Query the database '$dbname'.<br />";
 
    // Input Var's that are parameterized/bound into the query statement
    $in_key = mysqli_real_escape_string($con, 'TestKey');
 
    // Output Var's that the query fills after querying the database
    // These variables will be filled with data from the current returned row
    $out_id = "";
    $out_key = "";
    $out_value = "";
 
    echo "1. About to query the database: '$dbname'<br />";
    $stmt = mysqli_stmt_init($con);

    $sql = "SELECT testid, testkey, testvalue FROM tbtest WHERE testkey = ?";
    echo "SQL: $sql (In = $in_key)<br />";

    if (mysqli_stmt_prepare($stmt, $sql)) {

            echo "2. Query Returned<br />";
            /*
                Type specification chars
                Character   Description
                i   corresponding variable has type integer
                d   corresponding variable has type double
                s   corresponding variable has type string
                b   corresponding variable is a blob and will be sent in packets
            */
            mysqli_stmt_bind_param($stmt, 's', $in_key);
            mysqli_stmt_execute($stmt);
            mysqli_stmt_bind_result($stmt, $out_id, $out_key, $out_value);
            mysqli_stmt_fetch($stmt);
     
            // Do something with the 1st returned row        
            echo " - Row: ID: $out_id, KEY: $out_key, VAL: $out_value <br />";//

            // Do we have more rows to process
            while($stmt->fetch()) { 
                
                    // Output returned values
                    echo " - Row: ID: $out_id, KEY: $out_key, VAL: $out_value <br />";//
            
            }
            mysqli_stmt_close($stmt);
            
            echo "Done<br />";
        
        } else {
        
            echo "3. Error Querying<br/>";
            printf(" - Error: %s.n", $stmt->error);
        
        }
}    
?>

Result

Adminer Results

If you don’t have a server check out my guides on AWS, Digital Ocean and Vultr.

Happy coding and I hope this helps someone.

Donate and make this blog better

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.0 Initial Version

Filed Under: MySQL GUI Tagged With: Adminer, for, free, gui, MySQL, on, the, Using, website, your

PHP 7 code to send object oriented sanitised input data via bound parameters to a MYSQL database

November 1, 2017 by Simon

Sanitising user input is a golden rule of web development (see https://www.owasp.org/index.php/Input_Validation_Cheat_Sheet ). Here is my code snippet to sanitise and parameterise MySQL queries in PHP 7.0.

First, watch @jawache (Asim Hussain) demonstrate how common hacks happen and why you should update and patch software often, sanitise user data and set up a firewall.

I have blogged before on how to set up a Vultr server and configure it, how to secure Ubuntu in the cloud, how to run an Ubuntu system audit and going beyond SSL with Content Security Policy, Public Key Pinning etc, but a 100% secure server is impossible, as zero-day exploits and flaws (e.g. the WPA WiFi flaw) remind us how limited technology lifespans can be. Yes, you can set up firewalls on Ubuntu and WordPress, but you are only one exploit away from being hacked. Below is my code snippet (in PHP) to sanitise incoming data, query a MySQL database with object-oriented calls in PHP 7.0 and return data in variables. I have set up a firewall to block access to MySQL and non-essential ports (use https://www.shodan.io/ to test your server’s ports). I was using older, deprecated PHP 5-era database calls, so I researched the newer calls available in PHP 7.0.

When you log in to an Ubuntu server and it reports something like the following, you should update:

89 packages can be updated.
35 updates are security updates.

Also update other software (Node, npm etc).

This code outputs more information than you will want in production, but it will help you get set up and pull data from your server (as long as you substitute your own database, table and field names).

<?php
echo "Last modified: " . date ("F d Y H:i:s.", getlastmod()) . "<br /><br />";
date_default_timezone_set('Australia/Sydney');

$dbhost = '127.0.0.1';
$dbusername = 'themysqlaccount';
$dbpassword = 'themysqlpassword';
$dbname = 'thedatabasename';
$con = mysqli_connect($dbhost, $dbusername, $dbpassword, $dbname);

//Debug stuff
//echo var_dump($con);
//printf(" - Error: %s.\n", $stmt->error);

if($con->connect_errno > 0){
    printf(" - Error: %s.\n", $con->connect_error);
    die("Error: Unable to connect to MySQL (E001)");
} else {
 echo "Charset set to utf8<br />";
 mysqli_set_charset($con,"utf8");
}

if (!$con) {
    echo "Error: Unable to connect to MySQL (E002)" . PHP_EOL;
    echo "Debugging errno: " . mysqli_connect_errno() . PHP_EOL;
    echo "Debugging error: " . mysqli_connect_error() . PHP_EOL;
    exit;
} else {
    echo "Database Connection OK<br />";

    echo "&nbsp; Success: A proper connection to MySQL was made! The my_db database is great." . PHP_EOL . "<br />";
    echo "&nbsp; &nbsp;- Host information: " . mysqli_get_host_info($con) . PHP_EOL . "<br />";
    echo "&nbsp; &nbsp;- Server Info: '" . mysqli_get_server_info($con) . "'<br />";
    echo "&nbsp; &nbsp;- Server Protocol Info : ". mysqli_get_proto_info($con) . "<br />";
    echo "&nbsp; &nbsp;- Server Version: " . mysqli_get_server_version($con) . "<br />";
    //echo " - Server Connection Stats: " . print_r(mysqli_get_connection_stats($con)) . "<br />";
    echo "&nbsp; &nbsp;- Client Version: " . mysqli_get_client_version($con) . "<br />";
    echo "&nbsp; &nbsp;- Client Info: '" . mysqli_get_client_info() . "'<br />";

echo "Ready to Query the database '$dbname'.<br />";

// Input Var's that are parameterized/bound into the query statement
// I pre-fill three variables with known fields in my users table
// Good article on manual sanitisation of strings in PHP: http://php.net/manual/en/filter.filters.sanitize.php
    $in_username = mysqli_real_escape_string($con, 'FearTec');
    $in_guid = mysqli_real_escape_string($con, '5161a571-4a51-468d-9e96-6a5db5d35b1c');
    $in_mobile = mysqli_real_escape_string($con,'0456629533');

// Output Var's that the query fills after querying the database
// These variables will be filled with data from the current returned row
    $out_id = "";
    $out_guid = "";
    $out_username = "";
    $out_mobile = "";

echo "1. About to query the database: '$dbname'<br />";
$stmt = mysqli_stmt_init($con);
if (mysqli_stmt_prepare($stmt,"SELECT id, username, guid, user_mobile FROM thedatabasename WHERE username = ? AND guid = ? AND user_mobile = ?")) {

        echo "2. Query Returned<br />";

        /*
            Type specification chars
            Character   Description
            i   corresponding variable has type integer
            d   corresponding variable has type double
            s   corresponding variable has type string
            b   corresponding variable is a blob and will be sent in packets
        */
        mysqli_stmt_bind_param($stmt, 'sss', $in_username, $in_guid, $in_mobile);

        mysqli_stmt_execute($stmt);

        mysqli_stmt_bind_result($stmt, $out_id, $out_username, $out_guid, $out_mobile);

        mysqli_stmt_fetch($stmt);

        // Do something with the 1st returned row        
        echo " - Row: ID: $out_id, GUID: $out_guid, USR: $out_username, MOB: $out_mobile";//
        // Do we have more rows to process
        while($stmt->fetch()) { 
                // Deal with other rows
                echo " - Row: ID: $out_id, GUID: $out_guid, USR: $out_username, MOB: $out_mobile<br />";
        }
        mysqli_stmt_close($stmt);
        
        echo "Done<br />";
    }
     else {
        echo "3. Error Querying<br/>";
        printf(" - Error: %s.\n", $stmt->error);
    }

}    

?>

Returned Data

Last modified: November 01 2017 16:43:01.
Charset set to utf8
Database Connection OK
  Success: A proper connection to MySQL was made! The my_db database is great. 
   - Host information: 127.0.0.1 via TCP/IP 
   - Server Info: '5.7.19-0ubuntu0.16.04.1'
   - Server Protocol Info : 10
   - Server Version: 50719
   - Client Version: 50012
   - Client Info: 'mysqlnd 5.0.12-dev - 20150407 - $Id: b5########################
Ready to Query the database 'thedatabasename'.
1. About to query the database: 'thedatabasename'
2. Query Returned
 - Row: ID: 1, GUID: 0000000-0000-0000-0000-000000000001, USR: Bob, MOB: 1234567890
 - Row: ID: 2, GUID: 0000000-0000-0000-0000-000000000002, USR: Joe, MOB: 1234567891
 - Row: ID: 3, GUID: 0000000-0000-0000-0000-000000000003, USR: Jane, MOB: 1234567892

Variable Bind Parameter Types

When you bind an incoming variable you can inform MySQL what the data type is expected to be.

mysqli_stmt_bind_param: Type specification chars
Character   Description
i   corresponding variable has type integer
d   corresponding variable has type double
s   corresponding variable has type string
b   corresponding variable is a blob and will be sent in packets
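
For example, a statement that binds one string and one integer would pass 'si' as the type string. A minimal sketch reusing the $con connection from the code above (the table and column names here are hypothetical):

<?php
// Hypothetical example: bind one string ('s') and one integer ('i')
$stmt = mysqli_stmt_init($con);
if (mysqli_stmt_prepare($stmt, "SELECT id FROM users WHERE username = ? AND active = ?")) {
    $in_username = 'FearTec';
    $in_active   = 1;
    mysqli_stmt_bind_param($stmt, 'si', $in_username, $in_active);
    mysqli_stmt_execute($stmt);
    mysqli_stmt_close($stmt);
}
?>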

Debug Options

Errors Enabled: Turn on PHP Debug Errors On

Turning errors on is bad on production servers but fine on development servers.

First, find php.ini files

locate php.ini
/php.ini.bak
/.c9/metadata/workspace/etc/php/7.0/fpm/php.ini
/etc/php/7.0/apache2/php.ini
/etc/php/7.0/cli/php.ini
/etc/php/7.0/fpm/php.ini
/etc/php/7.0/phpdbg/php.ini
/usr/lib/php/7.0/php.ini-development
/usr/lib/php/7.0/php.ini-production
/usr/lib/php/7.0/php.ini-production.cli

Edit the appropriate php.ini file

sudo nano /etc/php/7.0/fpm/php.ini

And turn on Error reporting.
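
The relevant php.ini directives look like this (development values only; do not leave these on in production):

display_errors = On
display_startup_errors = On
error_reporting = E_ALL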

Restart PHP and NGINX

sudo /etc/init.d/php7.0-fpm restart
sudo /etc/init.d/nginx restart

If you need to view your active php.ini file or see PHP configuration settings, add this to a .php file on your web server and view its contents.

<?php
phpinfo()
?>

It is amazing how clear errors can be

PHP Error

Dump Connection Vars: PHP mysqli_connect: var_dump($con)

echo var_dump($con);

Output:

public 'affected_rows' => int 0
public 'client_info' => string 'mysqlnd 5.0.12-dev - 20150407 - $Id: b5###############################3' (length=79)
public 'client_version' => int 50012
public 'connect_errno' => int 0
public 'connect_error' => null
public 'errno' => int 0
public 'error' => string '' (length=0)
public 'error_list' => array (size=0) empty
public 'field_count' => int 0
public 'host_info' => string '127.0.0.1 via TCP/IP' (length=20)
public 'info' => null
public 'insert_id' => int 0
public 'server_info' => string '5.7.19-0ubuntu0.16.04.1' (length=23)
public 'server_version' => int 50719
public 'stat' => string 'Uptime: 1828089 Threads: 1 Questions: 15702 Slow queries: 0 Opens: 1529 Flush tables: 1 Open tables: 1461 Queries per second avg: 0.008' (length=142)
public 'sqlstate' => string '00000' (length=5)
public 'protocol_version' => int 10
public 'thread_id' => int 7367
public 'warning_count' => int 0

Show Environment Vars: mysqli_get_host_info, mysqli_get_proto_info, mysqli_get_server_version, mysqli_get_client_version and mysqli_get_client_info.

echo "&nbsp; Success: Database connection OK." . PHP_EOL . "<br />";
echo "&nbsp; &nbsp;- Host information: " . mysqli_get_host_info($con) . PHP_EOL . "<br />";
echo "&nbsp; &nbsp;- Server Info: '" . mysqli_get_server_info($con) . "'<br />";
echo "&nbsp; &nbsp;- Server Protocol Info : ". mysqli_get_proto_info($con) . "<br />";
echo "&nbsp; &nbsp;- Server Version: " . mysqli_get_server_version($con) . "<br />";
//echo " - Server Connection Stats: " . print_r(mysqli_get_connection_stats($con)) . "<br />";
echo "&nbsp; &nbsp;- Client Version: " . mysqli_get_client_version($con) . "<br />";
echo "&nbsp; &nbsp;- Client Info: '" . mysqli_get_client_info() . "'<br />";

Output:

Success: Database connection OK. 
   - Host information: 127.0.0.1 via TCP/IP 
   - Server Info: '5.7.19-0ubuntu0.16.04.1'
   - Server Protocol Info : 10
   - Server Version: 50719
   - Client Version: 50012
   - Client Info: 'mysqlnd 5.0.12-dev - 20150407 - $Id: b5########################################

Show errors from a failed if statement: the mysqli_stmt_prepare else branch

printf(" - Error: %s.\n", $stmt->error);

Output:

Error: Table 'thedatabasename.invalidtablename' doesn't exist.

Debugging is your friend.

More to come..

Donate and make this blog better


Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

V 1.0 initial post

Filed Under: Code, PHP, Security, Tech Advice, Ubuntu Tagged With: a, and, bound, code, data, database, MySQL, object, oriented, parameters, PHP 7, sanitise, send, to

NodeJS code to handle App logins via API (using MySQL connection pools (1000 connections) and query parameters)

August 7, 2017 by Simon

The below code is part of what I use to handle logins in an app. You can use this as a base for a NodeJS login routine (in a file called app.js). I was going to use the StormPath API but it was too slow and unreliable for me (as it turns out, StormPath is merging with Okta and is closing down its service).

This code is not functionally complete but you can hack out what you need. Rolling your own solution can be faster and allows you to fully embed your own server-side logic, but make sure it is secure.
I am using these node modules (app.js).

var app = require('express')();
var http = require('http').Server(app);
var bodyParser = require('body-parser'); // needed for the app.use(bodyParser...) lines below
var mysql = require('mysql');
var uuid = require('node-uuid');
var Type = require('type-of-is');
var jsesc = require('jsesc');
var bcrypt = require('bcrypt');
var filesystem = require('fs');
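
If you are starting a new project, the modules above can be installed with npm (the package names match the require() calls; note that node-uuid has since been superseded by the uuid package):

npm install express body-parser mysql node-uuid type-of-is jsesc bcrypt --save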

The following code is located in myutils.js, which is a handy place to put your common code. You don’t need it, but it is demonstrated here.

module.exports = function () {
	this.sum = function (a, b) {
		return a + b;
	};
	this.multiply = function (a, b) {
		return a * b;
	};
	this.samplefunction = function (a) {
		return 'Hello World';
	};
};

Don’t forget to include myutils.js in app.js.

require('./myutils.js')();  // Remove this if you don't use it

Declare the body parser module (app.js).

app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

Set the MySQL connection details; you can also set the number of ready MySQL connections in the pool here (app.js).

console.log(' mysql_pool.createPool()');
var mysql_pool  = mysql.createPool({
  connectionLimit : 1000,
  host            : 'localhost',
  database        : 'thedatabasename',
  user            : 'thedatabaseuser',
  password        : 'thedatabasepassword'
  
});
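
As a quick optional sanity check, the pool can also be queried directly; pool.query() grabs and releases a connection for you:

mysql_pool.query('SELECT 1 + 1 AS two', function (err, rows) {
	if (err) throw err;
	console.log(' mysql_pool test query returned: ' + rows[0].two);
});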

Create a static API endpoint (app.js).

// Generic static API Point
app.get('/api/myappname/v1/hello/world/',function(req,res){
	var data = { "Version":"" };
	data["Version"] = "Hello World"; // Add a JSON value
	res.json(data); // Return the JSON
});
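
Once the server is listening (see the http.listen call at the end of this post) you can test this endpoint with curl:

curl http://localhost:3000/api/myappname/v1/hello/world/
{"Version":"Hello World"}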

Create MySQL functions (app.js).

mysql_pool.getConnection(function(err, connection){
	// See if a connection is available in the pool by querying the database
	console.log(' mysql_pool.getConnection()');
	  	if (err) {
	  		console.log(' error: ' + err);
	  		throw err;
	  	}
	connection.query('select * from thedatabasename;',  function(err2, rows){
	  	if (err2) {
	  		connection.release();
	  		console.log(' error: ' + err2);
	  		throw err2;
	  	} else {
	  		console.log( rows );
	  		console.log('Ready');
	  	}
	  	connection.release();
	});
	console.log(' mysql_pool.release()');
});

mysql_pool.on('connection', function (connection) {
  connection.query('SET SESSION auto_increment_increment=1')
  console.log(' mysql_pool.on(connection) - Set session increment');
});

mysql_pool.on('enqueue', function () {
  console.log(' mysql_pool.on(enqueue). Waiting for available connection slot.');
});

/api/myappname/v1/login function (app.js).

// Login ----------------------------------------------------------------------- 
app.post('/api/myappname/v1/login',function(req,res){
	console.log('API CALL: /api/myappname/v1/login');

	var errData = "";

	// Check to see if the user's posted token is valid
	var useraccesstoken = req.headers['x-user-access-token'];
	if (useraccesstoken == undefined) {
		// If a user token is missing then block the login, we only process logins where valid user tokens are posted. Users are issued a token at account creation
		console.log(" 498 token required");
		errData = {"Status": "498", "Error": "Invalid user token. Please login again (error 498)."};
		res.statusCode = 498;
		res.send(errData);
		res.end();
		return; // stop here so the app token check below does not try to send a second response
	}

	// I usually send an x-access-token in the posted JSON payload to eliminate older apps.
	var appaccesstoken = req.headers['x-access-token'];
	if (appaccesstoken == undefined) {
		// If an app token is missing then block the login, we only process logins where valid app tokens are posted.
		console.log(" 499 token required");
		errData = {"Status": "499", "Error": "Invalid app token. Please update your app (error 499)."};
		res.statusCode = 499;
		res.send(errData);
		res.end();

	} else {
		// ok we have a token posted
		var appaccesstokenpassed = false;

		// Check the API Key against known API Keys
		if (appaccesstoken == "appname33ffda6d-4303-4c65-81ca-0ccf0f172a37") { appaccesstokenpassed = true;} // Debug API Key for ICS 0.9.1
		
		// OK Process the posted data
		if (appaccesstokenpassed == true) {
			
			var userenteredemail = req.body.email;
			var userenteredpass = req.body.password; //todo: Add Encryption/Decryption (currently a BCrypt Hash is sent from the app)

			// If this is Yes, a new session token is sent to the user later; this will invalidate older logged-in sessions.
			var resetsessions = req.body.resetsessions;
			if (req.body.resetsessions == undefined) {
				resetsessions = "Yes";
			}
			
			// Log for debugging
			console.log(' Email: ' + userenteredemail);
			console.log(' Password: ' + userenteredpass); // debug only - never log credentials in production
			console.log(' Reset Sessions: ' + resetsessions);
			
			var data = {}
			
			mysql_pool.getConnection(function(err, connection) {
			if (err) {
				console.log(' mysql_pool.release()');
				connection.release();
		  		console.log(' Error getting mysql_pool connection: ' + err);
		  		throw err;
		  	}
		  		// Revised SQL
		  		connection.query("SELECT * FROM usertablename WHERE usertablename.active=1 AND usertablename.lockedout=0 AND usertablename.email=? LIMIT 1",[userenteredemail],function(err, rows, fields){
				
					if (err) {

						// Log database errors
						console.log(" 503 Service unavailable (database error)");
						errData = {"Status": "503", "Error": "Service unavailable (database error)"};
						res.statusCode = 503;
						res.send(errData);	
						res.end();

					} else {
						
						// Do we have  a recordset
						if (rows.length != 0) {
							
							// Check password hash - https://github.com/ncb000gt/node.bcrypt.js
							var salt = rows[0].password_salt;
							var hash = rows[0].password_hash; 		// bcrypt.hashSync(pass, salt);
							
							hash = hash.replace(/^\$2y(.+)$/i, '\$2a$1');
							var passwordok = false;
							console.log(' About to check password.');
							
							passwordok = bcrypt.compareSync(userenteredpass, hash);

							console.log(passwordok);
							if (passwordok == true) {
							
							// Allow access to records
								var userid = rows[0].id;									// User ID
								var user_guid = rows[0].user_guid;							// Internal App Guid
								var push_guid = rows[0].push_guid;							// Internal GUID user for Push Notifications
								
								
								var newkey = rows[0].usertokenkey;								// Generate a new API secret key
								var newpass = rows[0].usertokenpassword;								// Generate a a new API secret pass

								// This will create two GUIDs to use for all future API hits from the user
								if (resetsessions == "Yes") {
									newkey = uuid.v4();										// Generate a new API secret key
									newpass = uuid.v4();									// Generate a a new API secret pass
								}

								var stat_total_logins = rows[0].stat_total_logins;			// Read the total logins stored in my user database
									stat_total_logins = stat_total_logins + 1;				// Increment total logins for later saving 
								
								var username = rows[0].username;							// Read the user Username from the database
								var usertag = rows[0].usertagline;							// Read the user Tagline from the database
								var email = rows[0].email;									// Read the user Email from the database
								
								var block_transactions = rows[0].block_transactions;		// Has the User Blocked Transactions
								var credit = rows[0].credit;								// Credit Remaining
								var currency = rows[0].currency;							// Users Currency (used for the log)

								var smssendalertatlogin = rows[0].user_pref_sms_alert_at_login;  	// Does the user want SMS Sent a Login
								var emailsendalertatlogin = rows[0].user_pref_email_alert_at_login; 	// Does the user want an email
								var pushsendalertatlogin = rows[0].user_pref_push_alert_at_login;	// Push alert the user at login
								
								var user_pref_timezone = rows[0].user_pref_timezone;
								var user_pref_timezone_int = rows[0].user_pref_timezone_int;
								
								var stat_total_password_resets = rows[0].stat_total_password_resets;		// Pass this throgh to the App
								
								var active = rows[0].active;								// Is the user active
								var lockedout = rows[0].lockedout;							// Is the user locked out
								var changing_password = rows[0].changing_password;

								var querylimit5min = rows[0].QueryLimit5Min;				// Read from the database how many API hits the user is allowed
								
								// I like to return the api version and version int with all api JSON payloads (makes checking on the App side easier)
								var jsonvalid = "yes";
								var jsonversion = "2017.01.01.22.00";
								var jsonversionint = "102";
								
								//Update API Keys
								mysql_pool.getConnection(function(err, connection) {
									if (err) {
										console.log(' mysql_pool.release()');
										connection.release();
								  		console.log(' Error getting mysql_pool connection: ' + err);
								  		throw err;
								  	}
								  	// Update the user's logins and set new keys if they asked to wipe all past logins.
									connection.query('UPDATE usertablename SET usertokenkey = ?, usertokenpassword = ?, stat_total_logins = ?, last_login = NOW() WHERE id = ?', [newkey, newpass, stat_total_logins, userid],function(errupdate, rowsupdate, fieldsupdate){
										if (errupdate) {
											console.log(" 503 service unavailable (database update error)");
											errData = {"Status": "503", "Error": "503 service unavailable (database update error)"};
											res.statusCode = 503;
											res.send(errData);
											res.end();
										} else {
											if (rowsupdate.length != 0) {
												// Inform User of Valid  Login
												res.statusCode = 200;
												data["Data"] = "Successfully logged in..";									// static string that reports the successful login
												
												// The Key and Pass are used like OAuth tokens and are stored in the app and user database. All API hits check these tokens.
												data["Key"] = newkey;                       								//LOADS IN XCODE v0.9.1
												data["Pass"] = newpass;                      								//LOADS IN XCODE v0.9.1
												
												// User guids are used to store the unique user rather than the database record ID
												data["user_guid"] = user_guid;
												data["push_guid"] = push_guid;												// Internal GUID user for Push Notifications

												data["Logins"] = stat_total_logins;											// Total logins are stored in my user database
												data["Username"] = username;												// Username
												data["Email"] = email;														// Email
												
												// The notify-user-at-login code has been removed but can be called here
												data["SMSAtLogin"] = smssendalertatlogin;
												data["EmailAtLogin"] = emailsendalertatlogin;
												data["PushAtLogin"] = pushsendalertatlogin;
												
												// What is the user's timezone
												data["timezone"] = user_pref_timezone;
												data["timezoneint"] = user_pref_timezone_int;
												
												
												data["stat_total_password_resets"] = stat_total_password_resets;			// How many times has the user changed passwords
												
												data["active"] = active;													// Is the user active
												data["lockedout"] = lockedout;												// Is the user currently locked out
												data["changing_password"] = changing_password;								// Is the user currently changing a password
												
												data["levelquerylimit5min"] = querylimit5min;								// How many times can the user hit the API (handled by the app)
	
												data["jsonvalid"] = jsonvalid;												// JSON ID
												data["jsonversion"] = jsonversion;											// JSON Version
												data["jsonversionint"] = jsonversionint;									// JSON Version
												
												// Send the JSON back to the user
												res.json(data);
												res.end();
												
												//Login Confirmed
												console.log("User " + username + " / " + email + " logged in.");
												
											} else {
												console.log(" 504 service unavailable (database update error)");
												errData = {"Status": "504", "Error": "504 service unavailable (database update error)"};
												res.statusCode = 504;
												res.send(errData);
												res.end();		
											}
										}
										console.log(' mysql_pool.release()');
										connection.release();				// release the MySQL connection pool connection
									});	
								});
								
							} else {
								console.log('402 Invalid Login (' +  req.body.email + ', ' + req.body.password + ')');
								errData = {"Status": "402", "Error": "Invalid Login"};
								res.statusCode = 402;
								res.send(errData);
								res.end();								
							}
	
						} else {
							console.log('403 Account is locked out, not active or unknown. Please visit https://www.myappname.com to reactivate your account. (' +  req.body.email + ')');
							errData = {"Status": "403", "Error": "Account is locked out, not active or unknown."};
							res.statusCode = 403;
							res.send(errData);
							res.end();	
						}
					}
					console.log(' mysql_pool.release()');
					connection.release();
					
				});
			});

		} else {
			console.log("499 token required");
			errData = {"Status": "499", "Error": "Invalid app token. Please update your app (error 499)."};
			res.statusCode = 499;
			res.send(errData);	
			res.end();
		}
	}
});
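
To exercise the login endpoint during development you could post something like this with curl (the user token, email and password hash are placeholders; the app token matches the debug key checked in the code above):

curl -X POST http://localhost:3000/api/myappname/v1/login \
  -H "Content-Type: application/json" \
  -H "x-access-token: appname33ffda6d-4303-4c65-81ca-0ccf0f172a37" \
  -H "x-user-access-token: your-user-token-here" \
  -d '{"email":"user@example.com","password":"bcrypt-hash-from-the-app","resetsessions":"Yes"}'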

/api/myappname/v1/verifylogin verify login function (app.js).

// Verify Login ----------------------------------------------------------------------- 
app.post('/api/myappname/v1/verifylogin',function(req,res){
	console.log('API CALL: /api/myappname/v1/verifylogin');
	
	// check to see if the App included an api key
	var appaccesstoken = req.headers['x-access-token'];

	var errData = "";
	if (appaccesstoken == undefined) {
		console.log(" 499 token required");
		errData = {"Status": "499", "Error": "valid token required"};
		res.statusCode = 499;
		res.send(errData);
		res.end();

	} else {

	//console.log(" Received appaccesstoken: " + appaccesstoken);
		var appaccesstokenpassed = false;
		// Check the API Key against known API Keys
		if (appaccesstoken == "ics201701-1d95-9271-a6c7-a9a33770287") { appaccesstokenpassed = true;} // Debug API Key
		if (appaccesstokenpassed == true) {
			var key = req.body.key;
			var pass = req.body.pass;
			var data = {} 

			console.log(" - Key: " + key )
			console.log(" - Pass: " + pass )
			
			// Grab the user IP and log it
			var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
			console.log(" - IP: " + ip );
			
			mysql_pool.getConnection(function(err, connection) {
			if (err) {
				console.log(' mysql_pool.release()');
				connection.release();
		  		console.log(' Error getting mysql_pool connection: ' + err);
		  		throw err;
		  	}
		  	// Check the user record (only if the user is active and provided their token key and password)
		    connection.query("SELECT * FROM usertablename WHERE active=1 AND usertokenkey=? AND usertokenpassword=? LIMIT 1",[key, pass],function(err, rows, fields){
				if (err) {
					console.log(" 405 Login Session Expired");
					errData = {"Status": "405", "Error": "Login Session Expired"};
					res.statusCode = 405;
					res.send(errData);	
					res.end();
				} else {
					if (rows.length != 0) {
						console.log(" 200 Login Session OK");
						res.statusCode = 200;
						data["Status"] = "OK";
						res.json(data);
						res.end();
					} else {
						console.log(" 405 Login Session Expired");
						errData = {"Status": "405", "Error": "Login Session Expired"};
						res.statusCode = 405;
						res.send(errData);	
						res.end();
					}
				}
				console.log(' mysql_pool.release()');
				connection.release();
			});
			});
		} else {
			console.log("499 token required");
			errData = {"Status": "499", "Error": "valid token required"};
			res.statusCode = 499;
			res.send(errData);	
			res.end();
		}
	}
});

Start the web service (app.js).

http.listen(3000,function(){
	console.log("Connected & Listen to port 3000 at /api/myappname/v1/ ..");
	console.log(" Do..");
	
});
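
Assuming the file is saved as app.js, you can start it with node while developing; for production consider a process manager such as PM2:

node app.js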

Hope this helps

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

V1.0 Initial Post

Filed Under: API Tagged With: code, MySQL, NodeJS
