
HomePi – Raspberry PI powered touch screen showing information from house-wide sensors

March 14, 2022 by Simon

This post is a work in progress (14/3/2022, v0.9.63 – PCBs v0.2 designed and ordered).

Summary

After watching this video from Jeff Geerling (demonstrating how to build an Air Quality Sensor) I decided to make two. But why not build something bigger?

I want to make a Raspberry Pi server with a touch screen that receives data from a dozen or more WeMos sensors that I will build.

The Plan

Below is a rough plan of what I am building

In a nutshell, it will be 20x WeMos sensors recording data.

Picture of 20x WeMos sensors, a weather station and CO2 sensors talking to an API that saves to MySQL, with MySQL then being read by a webpage and touch screen panel

I ordered all the parts from Amazon, BangGood, AliExpress, eBay, Core Electronics and Kogan.

Fresh Bullseye Install (Buster upgrade failed)

On 21/11/2021 I tried to manually update Buster to Bullseye without taking a backup first (bad idea). I followed this guide to reinstall Raspbian from scratch (this time with Bullseye).

Storage Type

Before I begin I need to decide on what storage media to use on the Raspberry Pi. I hate how unreliable and slow MicroSD cards are. I tried using an old 128GB SATA SSD, a 1TB magnetic hard drive, a SATA M.2 SSD and an NVMe M.2 SSD in a USB caddy.

I decided to use a spare 250GB SATA M.2 solid state drive from my son’s PC in a Geekworm X862 SATA M.2 expansion board.

With this board I can bolt the M.2 solid state drive into the expansion board under the Pi and power it from the Raspberry Pi USB port.

Nice and tidy

I zip-tied a fan to the side of the boards to add a little extra airflow over the solid state drive

32Bit, 64Bit, Linux or Windows

Before I begin I set up Raspbian on an empty MicroSD card (just to boot it up and flash the firmware to the latest version). This is very easy and documented elsewhere. I needed the latest firmware to ensure booting from a USB drive (not the MicroSD card) was working.

I ran rpi-update and flashed the latest firmware onto my Raspberry Pi. Good write-up here.

When my Raspberry Pi had the latest firmware I used the Raspberry Pi Imager to install the 32 Bit Raspberry Pi OS.

I do have an 8GB Raspberry Pi 4 B, and 64-bit operating systems do exist, but I stuck with 32-bit for compatibility.

Ubuntu 64bit for Raspberry Pi Links

  • Install Ubuntu on a Raspberry Pi | Ubuntu
    • Server Setup Links
      • How to install Ubuntu Server on your Raspberry Pi | Ubuntu
    • Desktop Setup Links
      • How to install Ubuntu Desktop on Raspberry Pi 4 | Ubuntu

Windows 10 for Raspberry Pi Links
https://docs.microsoft.com/en-us/windows/iot-core/tutorials/quickstarter/prototypeboards
https://docs.microsoft.com/en-us/answers/questions/492917/how-to-install-windows-10-iot-core-on-raspberry-pi.html
https://docs.microsoft.com/en-us/windows/iot/iot-enterprise/getting_started

Windows 11 for Raspberry Pi Links
https://www.youtube.com/user/leepspvideo
https://www.youtube.com/watch?v=WqFr56oohCE
https://www.worproject.ml

Setting up the Raspberry Pi Server

These are the steps I used to set up my Pi.

Dedicated IP

Before I began I ran ifconfig on my Pi to obtain my Raspberry Pi’s wireless card's MAC address. I logged into my router and set up a dedicated IP (192.168.0.50); this way I have an IP address that remains the same.

Hostname

I set my hostname here

sudo nano /etc/hosts
sudo nano /etc/hostname

I verified my hostname with this command

hostname

I verified my IP address with this command

hostname -I

Samba Share

I set up the Samba service to allow me to copy files to and from the Pi

sudo apt-get update
sudo apt-get install samba samba-common-bin

I made a folder to share files

 mkdir ~/share

I edited the Samba config file

sudo nano /etc/samba/smb.conf

In the config file I set my workgroup settings


workgroup = Hyrule
wins support = yes

I defined a share at the bottom of the config file (and saved)

[PiShare]
comment=Raspberry Pi Share
path=/home/pi/share
browseable=Yes
writeable=Yes
only guest=no
create mask=0777
directory mask=0777
public=no

I set a smb password

sudo smbpasswd -a pi
New SMB password: ********
Retype new SMB password: ********

I tested the share from a Windows PC

And the share is accessible on the Raspberry Pi

Great, now I can share files with drag and drop (instead of via SCP)

Mono

I know how to code C# Windows executables; I have 25 years of experience. I do not want to learn Java or Python to code a GUI application for a touch screen if possible.

I set up Mono from Home | Mono (mono-project.com) to be able to run Windows C# EXEs on Raspbian

sudo apt install apt-transport-https dirmngr gnupg ca-certificates

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb https://download.mono-project.com/repo/debian stable-raspbianbuster main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list

sudo apt update

sudo apt install mono-devel

I copied an EXE I wrote in C# on Windows and ran it with Mono

sudo mono ~/HelloWorld.exe
Exe Test OK

This worked.

Nginx Web Server

I Installed NginX and configured it

sudo apt-get install nginx

I created a /www folder for nginx

sudo mkdir /www

I created a place-holder file in the www root

sudo nano /www/index.html

I set permissions to allow Nginx to access /www

sudo chown -R www-data:www-data /www

I edited the NginX config as required

sudo nano /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/nginx.conf 
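
For reference, a minimal server block of the sort these files contain might look like this (a sketch only; server_name and paths beyond /www are placeholders, not my exact config):

```nginx
server {
    listen 80 default_server;
    server_name _;

    # Serve static files from the custom web root
    root /www;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```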

I tested and reloaded the nginx config

sudo nginx -t
sudo nginx -s reload

I started NginX

sudo systemctl start nginx

I tested nginx in a web browser

NodeJS/NPM

I installed NodeJS

sudo apt update
sudo apt install nodejs npm -y

I verified Node was installed

nodejs --version
> v12.22.5

PHP

I installed PHP

sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg

echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update

sudo apt install -y php8.0-common php8.0-cli php8.0-xml

I verified PHP with this command

php --version

> PHP 8.0.13 (cli) (built: Nov 19 2021 06:40:53) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.13, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.13, Copyright (c), by Zend Technologies

I installed PHP-FPM

sudo apt-get install php8.0-fpm

I verified the PHP FPM sock was available before adding it to the NGINX Config

sudo ls /var/run/php*/**.sock
> /var/run/php/php8.0-fpm.sock  /var/run/php/php-fpm.sock
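
Wiring PHP-FPM into NginX amounts to a location block pointing at that sock; a sketch for the sites-enabled config, assuming the php8.0-fpm.sock path verified above and the Debian fastcgi snippet:

```nginx
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
}
```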

I reviewed PHP Settings

sudo nano /etc/php/8.0/cli/php.ini 
sudo nano /etc/php/8.0/fpm/php.ini

I created a /www/ppp.php file with this contents

<?php
  phpinfo(); // all info
  // module info, phpinfo(8) identical
  phpinfo(INFO_MODULES);
?>

PHP is working

PHP Test OK

I changed these php.ini settings (fine for local development).

max_input_vars = 1000
memory_limit = 1024M
upload_max_filesize = 20M
post_max_size = 20M
display_errors = on

MySQL Database

I installed MariaDB

sudo apt install mariadb-server

I updated my pi password

passwd

I ran the Secure MariaDB Program

sudo mysql_secure_installation

After answering each prompt, I wanted to run mysql as root to test MySQL.
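
A quick sanity check of the sort I mean, run from sudo mysql (the database and user names here are illustrative, not my real ones):

```sql
-- Confirm the server answers, then create a test database and user
SHOW DATABASES;
CREATE DATABASE IF NOT EXISTS pihome;
CREATE USER IF NOT EXISTS 'pi'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON pihome.* TO 'pi'@'localhost';
FLUSH PRIVILEGES;
```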

PHPMyAdmin

I installed phpMyAdmin to be able to edit MySQL databases via the web.

I followed this guide to set up phpMyAdmin via lighttpd and then via nginx.

I then logged into MySQL, set user permissions, created a test database and changed settings as required.

NginX to NodeJS API Proxy

I edited my NginX config to create a Node API proxy
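
The proxy is a location block that forwards API traffic to the Node process; a sketch assuming Node listens locally on port 3000 (the port is a placeholder):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```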

Test Webpage/API

Todo

I installed PM2, the NodeJS process manager

sudo npm install -g pm2 

Node apps can be started as a service from the CLI

pm2 start api_v1.js

PM2 status

pm2 status

You can delete node apps from PM2 (if desired)

pm2 delete api_v1.js

Sending Email from CLI

I set up sendemail to allow emails to be sent from the CLI with this command

sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail

I logged into my GSuite account and set up an alias and app password to use.

Now I can send emails from the CLI

sudo sendemail -f [email protected] -t [email protected] -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp **************

I added this to a Bash script (“/Scripts/Up.sh”) and added a scheduled task to send an email every 6 hours.
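
The schedule itself is just a crontab entry; something like this (the script path matches the one above):

```
# m h dom mon dow command — run Up.sh every 6 hours
0 */6 * * * /Scripts/Up.sh
```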

7 Inch Full View LCD IPS Touch Screen 1024*600 

I purchased a 7″ touch screen from Banggood. I got a heads up from the following video.

I plugged the touch USB cable into my Pi’s USB 3 port. I plugged the HDMI adapter into the screen and the Pi (with the supplied mini plug).

I turned on the Pi and it works and looks amazing.

This is displaying a demo C# app I wrote. It’s running via mono.

I did have to add the following to config.txt to get native resolution. The manual on the supplied CD was helpful (but I did not check it at first).

max_usb_current=1
hdmi_force_hotplug=1
config_hdmi_boost=7
hdmi_group=2
hdmi_mode=1
hdmi_mode=87 
hdmi_drive=1
display_rotate=0
hdmi_cvt 1024 600 60 6 0 0 0
framebuffer_width=1024
framebuffer_height=600

PiJuice UPS HAT

I purchased an external LiPo UPS to keep the Raspberry Pi fed with power (even when the power goes out)

The stock battery was not charged and was quite weak when I first installed it. Do fully charge the battery before testing.

PiJuice

Stock Battery = 3.7V @ 1820mAh


Below are screenshots of the PiJuice setup.

PiJuice HAT Settings

PiJuice General Settings

General Settings

There is an option to set events for every button

Extensive screen to set button events

LED Status color and function

Set LED status and color

IO for the PiJuice Input. I will sort this out later.

PiJuice IO settings

A new firmware was available. I had v1.4

Update firmware screen

I updated the firmware

Firmware update worked

Firmware flash success

Battery settings

Battery Settings

PiJuice Button Config

Button config

Wake Up Alarm time and RTC

Clock Settings

System Settings

System Settings

System Events

system settings page

User Scripts

Define user scripts

I ordered a bigger battery as my screen, M.2, fan and UPS consume close to the maximum that the stock battery can supply.

10,000mAh battery

After talking with the seller of the battery, they advised I set up the 10,000mAh battery using the 1,000mAh battery profile in PiJuice but change the capacity and charge current:

  • Capacity: 10,000mAh
  • Charge current: 850mA

And for battery longevity set the

  • Cutoff voltage: 3,250mV

Final Battery Setup

Battery settings based off the 1,000mAh battery profile: Capacity 10,000mAh, Charge current 850mA and Cutoff 3,250mV

WeMos Setup

I ordered 20x WeMos Mini D1 Pro (16Mbit) clones to run the sensors. I soldered the legs on in batches of 8.

WeMos installed on breadboards ready to solder pins

Soldering was not perfect but worked

20x soldered wemos

Soldering is not perfect but each joint was triple tested.

Close up of soldered joints

I only had one dead WeMos.

I will set up the final units on ProtoBoards.

Protoboard

20x WeMos ready for service and the external aerial is glued down. The hot glue was a bad idea; I had to rotate a resistor under the hot glue.

20x wemos ready.

Revision

I ended up reordering the WeMos Minis and soldering on female headers so I can add OLED screens

air mon enclosure

I added female headers to allow an OLED screen

new wemos

I purchased a microscope to be able to see better.

microscope

Each sensor will have a mini OLED screen.

mini oled screen

0.66″ OLED Screens

oled screen

I designed a PCB in Photoshop and had it turned into a PCB via https://www.fiverr.com/syedzamin12. I ordered 30x boards from https://jlcpcb.com/

Custom PCB

The PCBs fit inside the new enclosure perfectly

I am waiting for smaller screws to arrive.

PCB v0.2

I decided to design a board with 2 switches (and a light sensor to turn the screen off at night)

Breadboard Prototype

Prototype

I spoke to https://www.fiverr.com/syedzamin12 and within 24 hours a PCB was designed

PCB Layers

This time I will get a purple PCB from JLCPCB and add a dinosaur for my son

Top PCB View

TOP PCB View

Back PCB View

Back PCB View

3D PC View

3D PCB view

JLCPCB made the board in 3 days

3 days

Now I need to wait a few weeks for the new PCB to arrive

Also, I finished the firmware for the v0.2 PCB

I ordered some switches

I also ordered some reset buttons

I might add a larger 0.96″ OLED screen

Wifi and Static IP Test

I uploaded a sketch to each WeMos and tested the WiFi and static IP that was given.

Sketch

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>


#define SERVER_IP "192.168.0.50"

#ifndef STASSID
#define STASSID "wifi_ssid_name"
#define STAPSK  "************"
#endif

void setup() {

  Serial.begin(115200);

  Serial.println();
  Serial.println();
  Serial.println();

  WiFi.begin(STASSID, STAPSK);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected! IP address: ");
  Serial.println(WiFi.localIP());

}

void loop() {
  // wait for WiFi connection
  if ((WiFi.status() == WL_CONNECTED)) {

    WiFiClient client;
    HTTPClient http;

    Serial.print("[HTTP] begin...\n");
    // configure target server and url
    http.begin(client, "http://" SERVER_IP "/api/v1/test"); //HTTP
    http.addHeader("Content-Type", "application/json");

    Serial.print("[HTTP] POST...\n");
    // start connection and send HTTP header and body
    int httpCode = http.POST("{\"hello\":\"world\"}");

    // httpCode will be negative on error
    if (httpCode > 0) {
      // HTTP header has been sent and Server response header has been handled
      Serial.printf("[HTTP] POST... code: %d\n", httpCode);

      // file found at server
      if (httpCode == HTTP_CODE_OK) {
        const String& payload = http.getString();
        Serial.println("received payload:\n<<");
        Serial.println(payload);
        Serial.println(">>");
      }
    } else {
      Serial.printf("[HTTP] POST... failed, error: %s\n", http.errorToString(httpCode).c_str());
    }

    http.end();
  }

  delay(1000);
}

The WeMos booted, connected to WiFi, got an IP, and tried to POST a request to a URL.

........................................................
Connected! IP address: 192.168.0.51
[HTTP] begin...
[HTTP] POST...
[HTTP] POST... failed, error: connection failed

The POST failed because my Pi API server was off.

Touch Screen Enclosure

I constructed a basic enclosure and screwed the touch screen to it. I need to find a flexible black strip to put around the screen and cover up the gaps.

Wooden box with the screen in it

The touch screen has been screwed in.

Screen screwed in

Over the Air Updating

I followed this guide to make the WeMos updatable over WiFi.

Basically, I installed the libraries “AsyncHTTPSRequest_Generic”, “AsyncElegantOTA”, “AsyncHTTPRequest_Generic”, “ESPAsyncTCP” and “ESPAsyncWebServer”.

Manage Libraries

A few libraries would not download so I manually downloaded the code from each GitHub repository and then extracted them to my Documents\Arduino\libraries folder.

I then opened the example project “AsyncElegantOTA\ESP8266_Async_Demo”

I reviewed the code

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>

const char* ssid = "........";
const char* password = "........";

AsyncWebServer server(80);


void setup(void) {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request) {
    request->send(200, "text/plain", "Hi! I am ESP8266.");
  });

  AsyncElegantOTA.begin(&server);    // Start ElegantOTA
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();
}

I added my WiFi SSID and password, saved the project, compiled the code and wrote it to my WeMos Mini D1

I added LED Blink Code

void setup(void) {
  ...
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  ...
}
void loop(void) {
 ...
  delay(1000);                      // Wait for a second
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(1000);                      // Wait another second (to demonstrate the active low LED)
 ...
}

I compiled and tested the code

Now to get new code changes to the WeMos Mini via a binary, I edited the code (changed the LED blink speed) and clicked “Export Compiled Binary”

Compile Binary

When the binary compiled I opened the Sketch Folder

Show Sketch folder

I could see a bin file.

Bin File

I loaded http://192.168.0.51/update and selected the bin file.

The new firmware applied.

Flashing

I navigated back to http://192.168.0.51

TIP: Ensure you add the starter sketch that has your wifi details in there.

Password Protection

I changed the code to add a basic password on access and on OTA update

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>


//Saved WiFi credentials (research encryption later, or store in a FRAM module?)
const char* ssid = "your-wifi-ssid";
const char* password = "********";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(80);

void setup(void) {
  Serial.begin(115200);       //Serial Mode (Debug)
    
  WiFi.mode(WIFI_STA);        //Client Mode
  WiFi.begin(ssid, password); //Connect to Wifi
 
  Serial.println("");

  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    request->send(200, "text/plain", "Login Success! ESP8266 #001.");
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);

  
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();

  digitalWrite(LED_BUILTIN, LOW);
  delay(8000);                      // Wait for eight seconds
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(8000);                      // Wait for eight seconds (to demonstrate the active low LED)

}

Password prompt for users accessing the device.

Login screen

Password prompt for admin users accessing the device.

admin password protect

Later I will research encrypting the password and storing it on a SPIFFS partition or a FRAM memory module.

Adding the DHT22 Sensors

I received my pack of DHT22 sensors (AM2302).

Specifications

  • Operating Voltage: 3.5V to 5.5V
  • Operating current: 0.3mA (measuring) 60uA (standby)
  • Output: Serial data
  • Temperature Range: 0°C to 50°C
  • Humidity Range: 20% to 90%
  • Resolution: Temperature and Humidity both are 16-bit
  • Accuracy: ±1°C and ±1%

I wired it up based on this Adafruit post.

DHT22 Wired Up on a breadboard.

DHT22 and Basic API Working

I will not bore you with hours of coding and debugging, so here is my code that:

  • Allows the WeMos D1 Mini Pro (ESP8266) to connect to WiFi
  • Web server (with stats)
  • Admin page for OTA updates
  • Password protects the main web folder and OTA admin page
  • Reading DHT sensor values
  • Debug to serial toggle
  • LED activity toggle
  • JSON serialization
  • POST DHT22 data to an API on the Raspberry Pi
  • Placeholder for API return values
  • Automatically posts data to the API every 10 seconds
  • etc
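
Given the fields the firmware serializes, each POST body looks something like this (values illustrative):

```json
{
  "Name": "ESP-002",
  "humidity": "45.20",
  "tempc": "21.30",
  "tempf": "70.34",
  "heatc": "20.95",
  "heatf": "69.71"
}
```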

Here is the work in progress ESP8266 code

#include <ESP8266WiFi.h>        // https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266WiFi/src/ESP8266WiFi.h
#include <ESPAsyncTCP.h>        // https://github.com/me-no-dev/ESPAsyncTCP
#include <ESPAsyncWebServer.h>  // https://github.com/me-no-dev/ESPAsyncWebServer
#include <AsyncElegantOTA.h>    // https://github.com/ayushsharma82/AsyncElegantOTA
#include <ArduinoJson.h>        // https://github.com/bblanchon/ArduinoJson
#include "DHT.h"                // https://github.com/adafruit/DHT-sensor-library
                                // Written by ladyada, public domain

//Todo: Add Authentication
//Fyi: https://api.gov.au/standards/national_api_standards/index.html

#include <ESP8266HTTPClient.h>  //POST Client

//Firmware Stats
bool bDEBUG = true;        //true = debug to Serial output
                           //false = no serial output
//Port Number for the Web Server
int WEB_PORT_NUMBER = 1337; 

//Post Sensor Data Delay
int POST_DATA_DELAY = 10000; 

bool bLEDS = true;         //true = Flash LED
                           //false =   NO LED's
//Device Variables
String sDeviceName = "ESP-002";
String sFirmwareVersion = "v0.1.0";
String sFirmwareDate = "27/10/2021 23:00";

String POST_SERVER_IP = "192.168.0.50";
String POST_SERVER_PORT = "";
String POST_ENDPOINT = "/api/v1/test";

//Saved WiFi credentials (research encryption later, or store in a FRAM module?)
const char* ssid = "your_wifi_ssid";
const char* password = "***************";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(WEB_PORT_NUMBER);    //Feel free to change the port number

//DHT22 Temp Sensor
#define DHTTYPE DHT22   // DHT 22  (AM2302), AM2321
#define DHTPIN 5
DHT dht(DHTPIN, DHTTYPE);

//Common Variables
String thisBoard = ARDUINO_BOARD;
String sHumidity = "";
String sTempC = "";
String sTempF = "";
String sJSON = "{ }";

//DHT Variables
float h;
float t;
float f;
float hif;
float hic;


void setup(void) {

  //Turn On PIN
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  
  //Serial Mode (Debug)
  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);
    delay(100);                       // Wait 100 ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100 ms
  }

  if (bDEBUG) Serial.begin(115200);
  if (bDEBUG) Serial.println("Serial Begin");

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);
    delay(100);                       // Wait 100 ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100 ms
  }
  if (bDEBUG) Serial.println("Wifi Setup");
  if (bDEBUG) Serial.println(" - Client Mode");
  
  WiFi.mode(WIFI_STA);        //Client Mode
  
  if (bDEBUG) Serial.print(" - Connecting to Wifi: " + String(ssid));
  WiFi.begin(ssid, password); //Connect to Wifi
 
  if (bDEBUG) Serial.println("");
  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    if (bDEBUG) Serial.print(".");
  }
  if (bDEBUG) Serial.println("");
  if (bDEBUG) Serial.print("- Connected to ");
  if (bDEBUG) Serial.println(ssid);
  
  if (bDEBUG) Serial.print("IP address: ");
  if (bDEBUG) Serial.println(WiFi.localIP());

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);
    delay(100);                       // Wait 100 ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100 ms
  }

  
  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    
        String sendHtml = "";
        sendHtml = sendHtml + "<html>\n";
        sendHtml = sendHtml + " <head>\n";
        sendHtml = sendHtml + " <title>ESP# 002</title>\n";
        sendHtml = sendHtml + " <meta http-equiv=\"refresh\" content=\"5\">\n";
        sendHtml = sendHtml + " </head>\n";
        sendHtml = sendHtml + " <body>\n";
        sendHtml = sendHtml + " <h1>ESP# 002</h1>\n";
        sendHtml = sendHtml + " <h2>Debug</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Device Name: " + sDeviceName + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Version: " + sFirmwareVersion + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Date: " + sFirmwareDate + " </li>\n";
        sendHtml = sendHtml + " <li>Board: " + thisBoard + " </li>\n";
        sendHtml = sendHtml + " <li>Auto Refresh Root: On </li>\n";
        sendHtml = sendHtml + " <li>Web Port Number: " + String(WEB_PORT_NUMBER) +" </li>\n";
        sendHtml = sendHtml + " <li>Serial Debug: " + String(bDEBUG) +" </li>\n";
        sendHtml = sendHtml + " <li>Flash LED's Debug: " + String(bLEDS) +" </li>\n";
        sendHtml = sendHtml + " <li>SSID: " + String(ssid) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT TYPE: " + String(DHTTYPE) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT PIN: " + String(DHTPIN) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_DATA_DELAY: " + String(POST_DATA_DELAY) +" </li>\n";

        sendHtml = sendHtml + " <li>POST_SERVER_IP: " + String(POST_SERVER_IP) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_ENDPOINT: " + String(POST_ENDPOINT) +" </li>\n";
        
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <h2>Sensor</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Humidity: " + sHumidity + "% </li>\n";
        sendHtml = sendHtml + " <li>Temp: " + sTempC + "c, " + sTempF + "f. </li>\n";
        sendHtml = sendHtml + " <li>Heat Index: " + String(hic) + "c, " + String(hif) + "f.</li>\n";
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <h2>JSON</h2>";
        
        // Allocate the JSON document Object/Memory
        // Use https://arduinojson.org/v6/assistant to compute the capacity.
        StaticJsonDocument<250> doc;
        //JSON Values     
        doc["Name"] = sDeviceName;
        doc["humidity"] = sHumidity;
        doc["tempc"] = sTempC;
        doc["tempf"] = sTempF;
        doc["heatc"] = String(hic);
        doc["heatf"] = String(hif);
        
        sJSON = "";
        serializeJson(doc, sJSON);
        
        sendHtml = sendHtml + " <ul>" + sJSON + "</ul>\n";
        
        sendHtml = sendHtml + " <h2>Seed</h2>";
        long randNumber = random(100000, 1000000);
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <p>" + String(randNumber) + "</p>\n";
        sendHtml = sendHtml + " </ul>\n";
       
        sendHtml = sendHtml + " </body>\n";
        sendHtml = sendHtml + "</html>\n";
        //Send the HTML   
        request->send(200, "text/html", sendHtml);
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);
  
  server.begin();
  if (bDEBUG) Serial.println("HTTP server started");
 
  if (bDEBUG) Serial.println("Board: " + thisBoard);

  //Setup the DHT22 Object
  dht.begin();
  
}

void loop(void) {

  AsyncElegantOTA.loop();

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (the built-in LED is active low)
    delay(100);                       // Wait 100 ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off
    delay(100);                       // Wait 100 ms
  }


  //Display Temp and Humidity Data

  h = dht.readHumidity();
  t = dht.readTemperature();
  f = dht.readTemperature(true);

  // Check if any reads failed and exit early (to try again).
  if (isnan(h) || isnan(t) || isnan(f)) {
    if (bDEBUG) Serial.println(F("Failed to read from DHT sensor!"));
    return;
  }
  
  hif = dht.computeHeatIndex(f, h);         // Compute heat index in Fahrenheit (the default)
  hic = dht.computeHeatIndex(t, h, false);  // Compute heat index in Celsius (isFahrenheit = false)

  if (bDEBUG) Serial.print(F("Humidity: "));
  if (bDEBUG) Serial.print(h);
  if (bDEBUG) Serial.print(F("%  Temperature: "));
  if (bDEBUG) Serial.print(t);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(f);
  if (bDEBUG) Serial.print(F("°F  Heat index: "));
  if (bDEBUG) Serial.print(hic);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(hif);
  if (bDEBUG) Serial.println(F("°F"));

  //Save for Page Load
  sHumidity = String(h,2);
  sTempC = String(t,2);
  sTempF = String(f,2);

  //Post to Pi API
    // Allocate the JSON document Object/Memory
    // Use https://arduinojson.org/v6/assistant to compute the capacity.
    StaticJsonDocument<250> doc;
    //JSON Values     
    doc["Name"] = sDeviceName;
    doc["humidity"] = sHumidity;
    doc["tempc"] = sTempC;
    doc["tempf"] = sTempF;
    doc["heatc"] = String(hic);
    doc["heatf"] = String(hif);
    
    sJSON = "";
    serializeJson(doc, sJSON);

    //Post to API
    if (bDEBUG) Serial.println(" -> POST TO API: " + sJSON);

   //Test POST
  
    if ((WiFi.status() == WL_CONNECTED)) {
  
      WiFiClient client;
      HTTPClient http;
  
    
      if (bDEBUG) Serial.println(" -> API Endpoint: http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT);
      http.begin(client, "http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT); //HTTP


      if (bDEBUG) Serial.println(" -> addHeader: \"Content-Type\", \"application/json\"");
      http.addHeader("Content-Type", "application/json");
  
      // start connection and send HTTP header and body
      int httpCode = http.POST(sJSON);
      if (bDEBUG) Serial.print("  -> Posted JSON: " + sJSON);
  
      // httpCode will be negative on error
      if (httpCode > 0) {
        // HTTP header has been send and Server response header has been handled

  
        //See https://api.gov.au/standards/national_api_standards/api-response.html 
        // Response from Server
        if (bDEBUG) Serial.println("  <- Return Code: " + httpCode);
                
        //Get the Payload
        const String& payload = http.getString();
        if (bDEBUG) Serial.println("   <- Received Payload:");
        if (bDEBUG) Serial.println(payload);
          

        //Handle the HTTP Code
        if (httpCode == 200) {
          if (bDEBUG) Serial.println("  <- 200: Invalid API Call/Response Code");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 201) {
          if (bDEBUG) Serial.println("  <- 201: The resource was created. The Response Location HTTP header SHOULD be returned to indicate where the newly created resource is accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 202) {
          if (bDEBUG) Serial.println("  <- 202: Is used for asynchronous processing to indicate that the server has accepted the request but the result is not available yet. The Response Location HTTP header may be returned to indicate where the created resource will be accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 400) {
          if (bDEBUG) Serial.println("  <- 400: The server cannot process the request (such as malformed request syntax, size too large, invalid request message framing, or deceptive request routing, invalid values in the request) For example, the API requires a numerical identifier and the client sent a text value instead, the server will return this status code.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 401) {
          if (bDEBUG) Serial.println("  <- 401: The request could not be authenticated.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 403) {
          if (bDEBUG) Serial.println("  <- 403: The request was authenticated but is not authorised to access the resource.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 404) {
          if (bDEBUG) Serial.println("  <- 404: The resource was not found.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 415) {
          if (bDEBUG) Serial.println("  <- 415: This status code indicates that the server refuses to accept the request because the content type specified in the request is not supported by the server");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 422) {
          if (bDEBUG) Serial.println("  <- 422: This status code indicates that the server received the request but it did not fulfil the requirements of the back end. An example is a mandatory field was not provided in the payload.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 500) {
          if (bDEBUG) Serial.println("  <- 500: An internal server error. The response body may contain error messages.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }

        
      } else {
        if (bDEBUG) Serial.println("   <- Unknown Return Code (ERROR): " + httpCode);
        //if (bDEBUG) Serial.printf("    " + http.errorToString(httpCode).c_str());
        
      }

    }

    if (bDEBUG) Serial.print("\n\n");

    delay(POST_DATA_DELAY);
  }

Here is a screenshot of the Arduino IDE Serial Monitor debugging the code

Serial Monitor

Here is a screenshot of the NodeJS API on the Raspberry Pi accepting the POSTed data from the ESP8266

API receiving data

Here is a sneak peek of the code accepting the POSTed data

API Code

The final code will be open sourced.

API with 2x sensors (18x more soon)

I built 2 sensors (on Breadboards) to start hitting the API

2 sensors on a breadboard

18 more sensors are ready for action (after I get temporary USB power sorted)

18x Sensors

PiJuice and Battery save the Day

I used my Pi for a few hours (developing the API) before realising the mains power to the PiJuice was not connected.

The PiJuice worked a treat and supplied the Pi from battery

Battery power was disconnected

I plugged the power back in after 25% of the battery had drained.

Power Restored

Research and Setup TRIM/Defrag on the M.2 SSD

Todo: Research

Add a Buzzer to the Raspberry Pi and Connect It to the PiJuice No-Power Event

Todo

Wire Up a Speaker to the PiJuice

Todo: Figure out custom scripts and add a piezo speaker to the PiJuice to alert me of issues in future.
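When the speaker arrives, the event script could be as simple as the sketch below. This is a hedged sketch, not the final code: the BCM pin 18, tone frequency and beep timings are all my assumptions, and it drives the piezo with RPi.GPIO's software PWM.

```python
# Hypothetical PiJuice alert beeper - pin, tone and timings are assumptions.

def alert_pattern(beeps=3, tone_hz=2000, on_s=0.2, off_s=0.1):
    """Return (frequency_hz, seconds) steps; 0 Hz means silence."""
    steps = []
    for _ in range(beeps):
        steps.append((tone_hz, on_s))
        steps.append((0, off_s))
    return steps

def sound_alert(pin=18):
    import time
    import RPi.GPIO as GPIO  # hardware-only import; only works on the Pi
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    pwm = GPIO.PWM(pin, 440)
    for freq, secs in alert_pattern():
        if freq:
            pwm.ChangeFrequency(freq)
            pwm.start(50)  # 50% duty square wave drives the piezo
        else:
            pwm.stop()
        time.sleep(secs)
    pwm.stop()
    GPIO.cleanup(pin)
```

The PiJuice configuration UI could then point its low-charge/no-power events at a script that calls `sound_alert()`.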

Add buttons to the enclosure

Todo

Add email alerts from the system

I logged into Google G Suite (my domain’s email provider) and set up an email alias for my domain (“[email protected]”), then added this alias to Gmail (logged in with my G Suite account).

I created an app-specific password in G Suite to allow my Pi to use a dedicated password to access my email.

I installed these packages on the Raspberry Pi

sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail    

I can run this command to send an email to my primary email

sudo sendemail -f [email protected] -t [email protected]_domain.com -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected]_domain.com -xp ********************

The email arrives from the Raspberry Pi

Test Email Screenshot

PiJuice Alerts (email)

I created some Python scripts and configured the PiJuice to email me

user scripts

I assigned the scripts to Events

Added functions

Python script (cron job) to email the battery level every 6 hours

Todo
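A sketch of what that cron job (`0 */6 * * *`) might look like. The addresses and app password are placeholders, the 25% "LOW" threshold is my assumption, and it reuses the sendemail command installed earlier; `GetChargeLevel()` comes from the official pijuice Python library.

```python
# Hypothetical 6-hourly battery report - addresses/password are placeholders.
import subprocess

def battery_message(level):
    """Build the subject/body text for a given charge percentage."""
    state = "LOW" if level < 25 else "OK"
    return "PiHome battery at %d%% (%s)" % (level, state)

def send_report():
    from pijuice import PiJuice  # hardware-only import; only works on the Pi
    pj = PiJuice(1, 0x14)        # default I2C bus and address
    level = pj.status.GetChargeLevel()["data"]
    msg = battery_message(level)
    subprocess.run([
        "sendemail", "-f", "pi@example.com", "-t", "me@example.com",
        "-u", msg, "-m", msg,
        "-s", "smtp.gmail.com:587", "-o", "tls=yes",
        "-xu", "pi@example.com", "-xp", "APP_SPECIFIC_PASSWORD",
    ], check=True)
```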

Building the Co2/PM2.5 Sensors

Todo: (Waiting for parts)

The AirGradient PCBs have arrived

Air Gradient PCB's

NodeJS API writing to MySQL/Influx etc

Todo: Save Data to a Database
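The API itself is NodeJS, but the insert logic can be sketched independently. Below is a Python sketch that maps the JSON the sensors POST to a parameterised INSERT; the `readings` table and its column names are my assumptions, not the final schema.

```python
# Map a sensor reading (the JSON keys the ESP8266 POSTs) to a
# parameterised INSERT. Table/column names are hypothetical.
def build_insert(reading):
    sql = ("INSERT INTO readings (name, humidity, tempc, tempf, heatc, heatf) "
           "VALUES (%s, %s, %s, %s, %s, %s)")
    params = (reading["Name"], reading["humidity"], reading["tempc"],
              reading["tempf"], reading["heatc"], reading["heatf"])
    return sql, params
```

Whatever language the final API uses, binding the values as parameters (rather than concatenating them into the SQL string) keeps the endpoint safe from SQL injection.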

Setup 20x WeMos External Antennas (DONE, I ordered new boards with factory-rotated resistors)

I assumed the WeMos D1 Mini Pros with external antennas were actually using the external antenna. Wrong.

I have to move 20x resistors (1 per WeMos) to switch over to the external antenna.

This will be fun as I added hot glue over the area where the resistor is to hold down the antenna.

Reading configuration files via SPIFFS

Todo

Power over Ethernet (PoE) (SKIP, will use plain old USB wall plugs)

Todo: Passive PoE

Building a C# GUI for the Touch Panel

Todo (Started coding this)

Todo (Passive PoE, 5V, 3.3V)?

Building the enclosures for the sensors

Designed and ordered the PCB; firmware next.

Custom PCB?

Yes, See above

Backing up the Raspberry Pi M.2 Drive

This is quite easy as the M.2 drive is connected via USB. I shut down the Pi and plugged the M.2 board into my PC.

I then backed up the entire disk to my PC with Acronis software (review here)

I now have a complete backup of my Pi on a remote network share (and my primary pc).

Version History

v0.9.63 – PCB v0.2 designed and ordered.

v0.9.62 – 3/2/2022 Update

v0.9.61 – New Nginx, PHP, MySQL etc

v0.9.60 – Fresh Bullseye install (Buster upgrade failed)

v0.951 Email Code in PiJuice

v0.95 Added Email Code

v0.94 Added Todo Areas.

v0.93 2x Sensors hitting the API, 18x sensors ready, Air Gradient

v0.92 DHT22 and Basic API

v0.91 Password Protection

v0.9 Final Battery Setup

v0.8 OTA Updates

v0.7 Screen Enclosure

v0.6 Added Wifi Test Info

v0.5 Initial Post

Filed Under: Analytics, API, Arduino, Cloud, Code, GUI, IoT, Linux, MySQL, NGINX, NodeJS, OS Tagged With: api, ESP8266, MySQL, nginx, raspberry pi, WeMos

I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance.

December 22, 2020 by Simon

I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance. Here is what I did to set up a complete Ubuntu 18.04 system (NGINX, PHP, MySQL, WordPress etc). This is not a paid review (just me documenting my steps over 2 days).

Background (CPanel hosts)

In 1999 I hosted my first domain (www.fearby.com) on a host in Seattle (for $10 USD a month), the host used CPanel and all was good.  After a decade I was using the domain more for online development and the website was now too slow (I think I was on dial-up or ADSL 1 at the time). I moved my domain to an Australian host (for $25 a month).

After 8 years the domain host was sold and performance remained mediocre. After another year the new host was sold again and performance was terrible.

I started receiving Resource Limit Is Reached warnings (basically this was a plot by the new CPanel host to say “Pay us more and this message will go away”).

Page load times were near 30 seconds.

cpenal_usage_exceeded

The straw that broke the camel’s back was their demand of $150/year for a dodgy SSL certificate.

I needed to move to a self-managed server where I was in control.

Buying a Domain Name

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Self Managed Server

I found a good web IDE ( http://www.c9.io/ ) that allowed me to connect to a cloud VM.  C9 allowed me to open many files and terminal windows and reconnect to them later. Don’t get excited, though, as AWS has purchased C9 and it’s not the same.

C9 IDE

C9 IDE

I spun up a Digital Ocean Server at the closest data centre in Singapore. Here was my setup guide creating a Digital Ocean VM, connecting to it with C9 and configuring it. I moved my email to G Suite and moved my WordPress to Digital Ocean (other guides here and here).

I was happy since I could now send emails via CLI/code, set up free SSL certs, add second domain email to G Suite and Secure G Suite. No more usage limit errors either.

Self-managing a server requires more work but is more rewarding (flexible, faster and cheaper).  Page load times were now near 20 seconds (a 10-second improvement).

Latency Issue

Over 6 months, performance on Digital Ocean (in Singapore) from Australia started to drop (mentioned here).  I tried upgrading the memory but that did not help (latency was king).

Moved the website to Australia

I moved my domain to Vultr in Australia (guide here and here). All was good for a year until traffic growth started to increase.

Blog Growth

I tried upgrading the memory on Vultr, set up PHP child workers and set up Cloudflare.

GT Metrix scores were about a “B” and Google Page Speed Scores were in the lower 40’s. Page loads were about 14 seconds (5-second improvement).

Tweaking WordPress

I set up an image compression plugin in WordPress then set up a cloud image compression and CDN Plugin from the same vendor.  Page Speed info here.

GT Metrix scores were now occasionally an “A” and Page Speed scores were in the lower 20’s. Page loads were about 3-5 seconds (10-second improvement).

A mixed bag from Vultr (more optimisation and performance improvements were needed).

This screenshot shows poor www.gtmetrix.com scores, poor Google PageSpeed scores and the upgrade from 1GB to 2GB of memory on my server.

Google Chrome Developer Console audit results on the Vultr-hosted website were not very good (I stopped checking as nothing helped).

This is a screenshot showing poor site performance (screenshot taken in Google Dev tools audit feature)

The problem was that the Vultr server (400km away in Sydney) was offline (my issue), and everything above (adding more memory, adding 2x CDNs (EWWW and Cloudflare), adding PHP child workers etc) did not seem to help?

Enter UpCloud…

Recently, a friend sent a link to a blog article about a host called “UpCloud” who promised “Faster than SSD” performance.  This can’t be right: “Faster than SSD”? I was intrigued. I wanted to check it out as I thought nothing was faster than SSD (well, maybe RAM).

I signed up for a trial and ran a disk IO test (read the review here) and I was shocked. It’s fast. Very fast.

Summary: UpCloud was twice as fast (Disk IO and CPU) as Vultr (+ an optional $4/m firewall and $3/m for 1x backup).

This is a screenshot showing Vultr.com servers getting half the read and write disk io performance compared to upcloud.com.

FYI: labels above are kilobytes per second. iozone loops through file sizes from 4 KB to 16,384 KB and measures the reads per second. To be honest, the meaning of the numbers doesn’t interest me; I just want to compare apples to apples.

This is an image showing an iozone results breakdown chart (KB/s on the vertical axis, file size on the horizontal axis and transfer size on the third axis)

(image snip from http://www.iozone.org/ which explains the numbers)

I might have to copy my website to UpCloud and see how fast it is.

Where to Deploy and Pricing

UpCloud Pricing: https://www.upcloud.com/pricing/

UpCloud Pricing

UpCloud does not have a data centre in Australia yet so why choose UpCloud?

Most of my site’s visitors are based in the US and UpCloud have disk IO twice as fast as Vultr (win-win?).  I could deploy to Chicago?

This image shows most of my visitors are in the US

My site’s traffic is growing and I need to ensure the site is fast enough in the future.

This image shows that most of my site’s visitors are hitting my site on week days.

Creating an UpCloud VM

I used a friend’s referral code and signed up to create my first VM.

FYI: use my Referral code and get $25 free credit.  Sign up only takes 2 minutes.

https://www.upcloud.com/register/?promo=D84793

When you click the link above you will receive $25 to try out servers for 3 days. You can exit this trial by depositing $10 into UpCloud.

Trial Limitations

The trial mode restrictions are as follows:

* Cloud servers can only be accessed using SSH, RDP, HTTP or HTTPS protocols
* Cloud servers are not allowed to send outgoing e-mails or to create outbound SSH/RDP connections
* The internet connection is restricted to 100 Mbps (compared to 500 Mbps for non-trial accounts)
* After your 72-hour free trial, your services will be deleted unless you make a one-time deposit of $10

UpCloud Links

The UpCloud support page is located here: https://www.upcloud.com/support/

  • Quick start: Introduction to UpCloud
  • How to deploy a Cloud Server
  • Deploy a cloud server with UpCloud’s API

More UpCloud links to read:

  • Two-Factor Authentication on UpCloud
  • Floating IPs on UpCloud
  • How to manage your firewall
  • Finalizing deployment

Signing up to UpCloud

Navigate to https://upcloud.com/signup and add your username, password and email address and click signup.

New UpCloud Signup Page

Add your address and payment details and click proceed. You don’t need to pay anything ($1 may be charged and instantly refunded to verify the card).

Add address and payment details

That’s it, check your email.

Signup Done

Look for the UpCloud email and click https://my.upcloud.com/

Check Email

Now login

Login to UpCloud

Now I can see a dashboard 🙂

UpCloud Dashboard

I was happy to see 24/7 support is available.

This image shows the www.upcloud.com live chat

I opted in for the new dashboard

UpCloud’s new dashboard

Deploy My First UpCloud Server

This is how I deployed a server.

Note: If you are going to deploy a server consider using my referral code and get $25 credit for free.

Under the “deploy a server” widget I named the server and chose a location (I think I was supposed to use an FQDN, e.g., “fearby.com”). The deployment worked though. I clicked continue and more options were made available:

  1. Enter a short server description.
  2. Choose a location (Frankfurt, Helsinki, Amsterdam, Singapore, London and Chicago)
  3. Choose the number of CPUs and amount of memory
  4. Specify disk number/names and type (MaxIOPS or HDD).
  5. Choose an Operating System
  6. Select a Timezone
  7. Define SSH Keys for access
  8. Allowed login methods
  9. Choose hardware adapter types
  10. Where to send the login password

Deploy Server

FYI: How to generate a new SSH Key (on OSX or Ubuntu)

ssh-keygen -t rsa

Output

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /temp/example_rsa
Enter passphrase (empty for no passphrase): *********************************
Enter same passphrase again:*********************************
Your identification has been saved in /temp/example_rsa.
Your public key has been saved in /temp/example_rsa.pub.
The key fingerprint is:
SHA256:########################### [email protected]
Outputted public and private key

Did the key export? (yes)

> /temp# ls /temp/ -al
> drwxr-xr-x 2 root root 4096 Jun 9 15:33 .
> drwxr-xr-x 27 root root 4096 Jun 8 14:25 ..
> -rw------- 1 user user 1766 Jun 9 15:33 example_rsa
> -rw-r--r-- 1 user user 396 Jun 9 15:33 example_rsa.pub

“example_rsa” is the private key and “example_rsa.pub” is the public key.

  • The public key needs to be added to the server to allow access.
  • The private key needs to be added to any local ssh program used for remote access.

Initialisation script (after deployment)

I was pleased to see an initialisation script section that runs actions after the server is deployed. I configured the initialisation script to pull down a few GB of backups from my Vultr website in Sydney (files now removed).

This was my Initialisation script:

#!/bin/bash
echo "Downloading the Vultr website backups"
mkdir /backup
cd /backup
wget -O www-mysql-backup.sql https://fearby.com/.../www-mysql-backup.sql
wget -O www-blog-backup.zip https://fearby.com/.../www-blog-backup.zip

Confirm and Deploy

I clicked “Confirm and deploy” but I had an alert that said trial mode can only deploy servers up to 1024MB of memory.

This image shows I can’t deploy servers with 2GB in trial mode

Exiting UpCloud Trial Mode

I opened the dashboard and clicked My Account then Billing. I could see the $25 referral credit, but I guess I can’t use that in trial mode.

I exited trial mode by depositing $10 (USD).

View Billing Details

Make a manual 1-time deposit of $10 to exit trial mode.

Deposit $10 to exit the trial

FYI: Server prices are listed below (or view prices here).

UpCloud Pricing

Now I can go back and deploy the server with the same settings above (1x CPU, 2GB Memory, Ubuntu 18.04, MaxIOPS Storage etc)

Deployment takes a few minutes and, depending on the login method you specified, a password may be emailed to you.

UpCloud Server Deployed

The server is now deployed and I can connect to it with my SSH program (vSSH).  Simply add the server’s IP, username, password and the SSH private key (generated above) to your SSH program of choice.

fyi: The public key contents start with “ssh-rsa”.

This image shows me connecting to my server via SSH

I noticed that the initialisation script downloaded my 2+GB of files already. Nice.

UpCloud Billing Breakdown

I can now see on the UpCloud billing page in my dashboard that credit is deducted daily (68c); at this rate, I have 49 days credit left?

Billing Breakdown

I can manually deposit funds or set up automatic payments at any time 🙂

UpCloud Backup Options

You do not need to set up backups, but in case you want to roll back (if things stuff up) it is a good idea. Backups are an additional charge.

I have set up automatic daily backups with an auto deletion after 2 days

To view the backup schedule, click on your deployed server then click Backup

List of UpCloud Backups

Note: Backups are charged at $0.056 for every GB stored – so $5.60 for every 100GB per month (half that for 50GB etc)

You can take manual backups at any time (and only be charged for the hour)

UpCloud Firewall Options

I set up a firewall at UpCloud to only allow the minimum number of ports (UpCloud DNS, HTTP, HTTPS and My IP to port 22).  The firewall feature is charged at $0.0056 an hour ($4.03 a month)

I love the ability to set firewall rules on incoming, destination and outgoing ports.
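As a quick sanity check on the add-on pricing quoted above (assuming a 720-hour / 30-day month):

```python
# Verify the quoted monthly costs for the UpCloud firewall and backups.
firewall_hourly = 0.0056       # $/hour (quoted above)
backup_per_gb_month = 0.056    # $/GB stored per month (quoted above)

firewall_monthly = round(firewall_hourly * 720, 2)  # 720 hours in a 30-day month
backup_100gb = round(backup_per_gb_month * 100, 2)
backup_50gb = round(backup_per_gb_month * 50, 2)

print(firewall_monthly, backup_100gb, backup_50gb)  # 4.03 5.6 2.8
```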

To view your firewall, click on your deployed server then click Firewall

UpCloud firewall

Update: I modified my firewall to allow inbound ICMP (IPv4/IPv6) and UDP (IPv4/IPv6) packets.

(Note: Old firewall screenshot)

Firewall Rules Allow port 80, 443 and DNS

Because my internet provider has a dynamic IP, I set up a VPN with a static IP and whitelisted it for backdoor access.

Local Ubuntu ufw Firewall

I duplicated the rules in my local ufw (2nd level) firewall (and blocked mail)

sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 80                         ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] 25                         DENY OUT    Anywhere                   (out)
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 22                         ALLOW IN    REMOVED (MY WHITELISTED IP)
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 25 (v6)                    DENY OUT    Anywhere (v6)              (out)
[10] 53                         ALLOW IN    2a04:3540:53::1
[11] 53                         ALLOW IN    2a04:3544:53::1

UpCloud Download Speeds

I pulled down a 1.8GB Ubuntu 18.04 Desktop ISO 3 times from gigenet.com and each time the file downloaded in 32 seconds (57MB/sec). Nice.

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:04-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso'

ubuntu-18.04-desktop-amd64.iso 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:02:37 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:02:46-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.1'

ubuntu-18.04-desktop-amd64.iso.1 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:19 (56.6 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.1' saved [1921843200/1921843200]

$/temp# wget http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
--2018-06-08 18:03:23-- http://mirrors.gigenet.com/ubuntu/18.04/ubuntu-18.04-desktop-amd64.iso
Resolving mirrors.gigenet.com (mirrors.gigenet.com)... 69.65.15.34
Connecting to mirrors.gigenet.com (mirrors.gigenet.com)|69.65.15.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1921843200 (1.8G) [application/x-iso9660-image]
Saving to: 'ubuntu-18.04-desktop-amd64.iso.2'

ubuntu-18.04-desktop-amd64.iso.2 100%[==================================================================>] 1.79G 57.0MB/s in 32s

2018-06-08 18:03:56 (56.8 MB/s) - 'ubuntu-18.04-desktop-amd64.iso.2' saved [1921843200/1921843200]
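The reported rate lines up with the raw numbers in the transcript; as a quick check:

```python
# Cross-check wget's reported transfer rate from the figures above.
size_bytes = 1921843200  # ISO size reported by wget
seconds = 32             # download time reported by wget

rate_mib_s = size_bytes / seconds / (1024 ** 2)  # wget's "MB/s" is MiB/s
print(round(rate_mib_s, 1))  # 57.3
```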

Install Common Ubuntu Packages

I installed common Ubuntu packages.

apt-get install zip htop ifstat iftop bmon tcptrack ethstatus speedometer iozone3 bonnie++ sysbench siege tree unzip jq ncdu pydf ntp rcconf ufw iperf nmap

Timezone

I checked the server’s time (I thought this was auto-set when I deployed).

$hwclock --show
2018-06-06 23:52:53.639378+0000

I reset the time to Australia/Sydney.

dpkg-reconfigure tzdata
Current default time zone: 'Australia/Sydney'
Local time is now: Thu Jun 7 06:53:20 AEST 2018.
Universal Time is now: Wed Jun 6 20:53:20 UTC 2018.

Now the timezone is set 🙂

Shell History

I increased the shell history.

HISTSIZE=10000
HISTCONTROL=ignoredups

SSH Login

I created a ~/.ssh/authorized_keys file and added my SSH public key to allow password-less logins.

mkdir ~/.ssh
sudo nano ~/.ssh/authorized_keys

I added my public SSH key, then exited the SSH session and logged back in. I can now log in without a password.

Install NGINX

apt-get install nginx

nginx/1.14.0 is now installed.

A quick GT Metrix test.

This image shows awesome static NGINX performance ratings of 99%

Install MySQL

Run these commands to install and secure MySQL.

apt install mysql-server
mysql_secure_installation

Securing the MySQL server deployment.
> Would you like to setup VALIDATE PASSWORD plugin?: n
> New password: **********************************************
> Re-enter new password: **********************************************
> Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
> Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
> Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
> Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
> Success.

I disabled the validate password plugin because I hate it.

MySQL Ver 14.14 Distrib 5.7.22 is now installed.

Set MySQL root login password type

Set MySQL root user to authenticate via “mysql_native_password”. Run the “mysql” command.

mysql
SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             |                                           | auth_socket           | localhost |
| mysql.session    | hidden                                    | mysql_native_password | localhost |
| mysql.sys        | hidden                                    | mysql_native_password | localhost |
| debian-sys-maint | hidden                                    | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now let’s set the root password authentication method to “mysql_native_password”

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '*****************************************';
Query OK, 0 rows affected (0.00 sec)

Check authentication method.

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             | ######################################### | mysql_native_password | localhost |
| mysql.session    | hidden                                    | mysql_native_password | localhost |
| mysql.sys        | hidden                                    | mysql_native_password | localhost |
| debian-sys-maint | hidden                                    | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+

Now we need to flush privileges.

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Done.

Install PHP

Install PHP 7.2

apt-get install software-properties-common
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install -y php7.2
php -v

PHP 7.2.5, Zend Engine v3.2.0 with Zend OPcache v7.2.5-1 is now installed. Do update PHP frequently.

I made the following changes in /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Install PHP Modules

sudo apt-get install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml

Install PHP FPM

apt-get install php7.2-fpm

Configure PHP FPM config.

Edit /etc/php/7.2/fpm/php.ini

> cgi.fix_pathinfo=0
> max_input_vars = 1000
> memory_limit = 1024M
> upload_max_filesize = 20M
> post_max_size = 20M

Restart the PHP FPM service and check its status.

sudo service php7.2-fpm restart
sudo service php7.2-fpm status


Configuring NGINX

If you are not comfortable editing NGINX config files read here, here and here.

I made a new “www root” folder, set permissions and created a default html file.

mkdir /www-root
chown -R www-data:www-data /www-root
echo "Hello World" >> /www-root/index.html

I edited the “root” key in the “/etc/nginx/sites-enabled/default” file and set the root to the new location (e.g., “/www-root”)

I added these performance tweaks to /etc/nginx/nginx.conf

> worker_cpu_affinity auto;
> worker_rlimit_nofile 100000;

I added the following lines to the “http {” section in /etc/nginx/nginx.conf

client_max_body_size 10M;

gzip on;
gzip_disable "msie6";
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_types
application/atom+xml
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
font/opentype
image/bmp
image/x-icon
text/cache-manifest
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
#text/html is always compressed by gzip module

gzip_proxied any;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
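The “gzip_min_length 256” directive exists because compressing tiny responses is counterproductive: gzip's header and trailer add roughly 18 bytes of overhead. A quick local illustration (plain gzip on the command line, not nginx itself):

```shell
# Tiny payloads can grow when gzipped (header + trailer overhead)
printf 'hello' > /tmp/tiny.txt
orig=$(wc -c < /tmp/tiny.txt)
gz=$(gzip -c /tmp/tiny.txt | wc -c)
echo "original: ${orig} bytes, gzipped: ${gz} bytes"
```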

Check NGINX Status

service nginx status
* nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 21:16:28 AEST; 30min ago
Docs: man:nginx(8)
Main PID: # (nginx)
Tasks: 2 (limit: 2322)
CGroup: /system.slice/nginx.service
|- # nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
`- # nginx: worker process

Install Open SSL that supports TLS 1.3

This is a work in progress. The steps worked just fine for me on Ubuntu 16.04, but not (yet) on Ubuntu 18.04.

Installing Adminer MySQL GUI

I will use the PHP-based Adminer MySQL GUI to export and import my blog from one server to another. All I needed to do was install it on both servers (a simple one-file download).

cd /utils
wget -O adminer.php https://github.com/vrana/adminer/releases/download/v4.6.2/adminer-4.6.2-mysql-en.php

Use Adminer to Export My Blog (on Vultr)

On the original server open Adminer (http) and..

  1. Login with the MySQL root account
  2. Open your database
  3. Choose “Save” as the output
  4. Click on Export

This image shows the export of the wordpress adminer page

Save the “.sql” file.

Use Adminer to Import My Blog (on UpCloud)

FYI: Depending on the size of your database backup you may need to temporarily increase your upload and post sizes limits in PHP and NGINX before you can import your database.
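Alternatively, you can often avoid raising the limits at all: Adminer can import gzipped dumps (when PHP's zlib extension is available), and SQL text compresses very well. A hedged sketch (the dump below is a generated stand-in, not a real backup):

```shell
# Generate a stand-in dump (5000 repeated lines), then gzip it (-k keeps the original)
printf 'INSERT INTO wp_posts VALUES (1);\n%.0s' $(seq 1 5000) > /tmp/backup.sql
gzip -kf9 /tmp/backup.sql
ls -l /tmp/backup.sql /tmp/backup.sql.gz
```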

Edit /etc/php/7.2/fpm/php.ini
> upload_max_filesize = 100M
> post_max_size = 100M

And Edit: /etc/nginx/nginx.conf
> client_max_body_size 100M;

Don’t forget to reload NGINX config and restart NGINX and PHP. Take note of the maximum allowed file size in the screenshot below. I temporarily increased my upload limits to 100MB in order to restore my 87MB blog.
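To confirm the new limits before risking a real import, you can generate a dummy file of the same size (87 MB in my case) and try uploading it. A sketch:

```shell
# Create an 87 MB sparse test file instantly (no disk-speed wait)
truncate -s 87M /tmp/upload-test.bin
ls -lh /tmp/upload-test.bin
```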

Now I could open Adminer on my UpCloud server.

  1. Create a new database
  2. Click on the database and click Import
  3. Choose the SQL file
  4. Click Execute to import it

Import MySQL backup with Adminer

Don’t forget to create a user and assign permissions (as required – check your wp-config.php file).

Import MySQL Database

Tip: Don’t forget to lower the maximum upload file size and max post size after you import your database.

Cloudflare DNS

I use Cloudflare to manage DNS, so I need to tell it about my new server.

You can get your server’s IP details from the UpCloud dashboard.

Find IP

At Cloudflare update your DNS details to point to the server’s new IPv4 (“A Record”) and IPv6 (“AAAA Record”).

Cloudflare DNS

Domain Error

I waited an hour and my website was suddenly unavailable.  At first, I thought this was Cloudflare forcing the redirection of my domain to HTTPS (which was not yet set up).

DNS Not Replicated Yet

I chatted with UpCloud support on their webpage and they kindly helped me diagnose all the common issues (DNS values, DNS replication, Cloudflare settings), and the error was pinpointed to my NGINX installation. All NGINX config settings looked OK as far as we could see. I uninstalled NGINX and reinstalled it, and that fixed it. Thanks, UpCloud Support 🙂

Reinstalled NGINX

sudo apt-get purge nginx nginx-common

I reinstalled NGINX and reconfigured /etc/nginx/nginx.conf (I downloaded my SSL cert from my old server just in case).

Here is my /etc/nginx/nginx.conf file.

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log /var/log/nginx/www-nginxcriterror.log crit;

events {
        worker_connections 768;
        multi_accept on;
}

http {

        client_max_body_size 10M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        server_tokens off;

        server_names_hash_bucket_size 64;
        server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/www-access.log;
        error_log /var/log/nginx/www-error.log;

        gzip on;

        gzip_vary on;
        gzip_disable "msie6";
        gzip_min_length 256;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

Here is my /etc/nginx/sites-available/default file (FYI: I have not fully re-set up TLS 1.3 yet, so I commented out those settings).

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
server {
        root /www-root;

        # Listen Ports
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name www.fearby.com fearby.com localhost;

        # HTTPS Cert
        ssl_certificate /etc/nginx/ssl-cert-path/fearby.crt;
        ssl_certificate_key /etc/nginx/ssl-cert-path/fearby.key;
        ssl_dhparam /etc/nginx/ssl-cert-path/dhparams4096.pem;

        # HTTPS Ciphers
        
        # TLS 1.2
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

        # TLS 1.3			#todo
        # ssl_ciphers 
        # ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DES-CBC3-SHA;
        # ssl_ecdh_curve secp384r1;

        # Force HTTPS
        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        # HTTPS Settings
        server_tokens off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 30m;
        ssl_session_tickets off;
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
	#ssl_stapling on; 						# Requires nginx >= 1.3.7

        # Cloudflare DNS
        resolver 1.1.1.1 1.0.0.1 valid=60s;
        resolver_timeout 1m;

        # PHP Memory 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

	# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            try_files $uri =404;
            # include snippets/fastcgi-php.conf;

            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php7.2-fpm.sock;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # fastcgi_pass 127.0.0.1:9000;
	    }

        location / {
            # try_files $uri $uri/ =404;
            try_files $uri $uri/ /index.php?q=$uri&$args;
            index index.php index.html index.htm;
            proxy_set_header Proxy "";
        }

        # Deny Rules
        location ~ /\.ht {
                deny all;
        }
        location ~ ^/\.user\.ini {
            deny all;
        }
        location ~ \.ini$ {
            return 403;
        }

        # Headers
        location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }

}

SSL Labs SSL Certificate Check

All good thanks to the config above.

SSL Labs

Install WP-CLI

I don’t like setting up FTP to auto-update WordPress plugins. I use the WP-CLI tool to manage WordPress installations by the command line. Read my blog here on using WP-CLI.

Download WP-CLI

mkdir /utils
cd /utils
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Move WP-CLI to the bin folder as “wp”

chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

Test wp

wp --info
OS: Linux 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:15:17 UTC 2018 x86_64
Shell: /bin/bash
PHP binary: /usr/bin/php7.2
PHP version: 7.2.5-1+ubuntu18.04.1+deb.sury.org+1
php.ini used: /etc/php/7.2/cli/php.ini
WP-CLI root dir: phar://wp-cli.phar
WP-CLI vendor dir: phar://wp-cli.phar/vendor
WP_CLI phar path: /www-root
WP-CLI packages dir:
WP-CLI global config:
WP-CLI project config:
WP-CLI version: 1.5.1

Update WordPress Plugins

Now I can run “wp plugin update” to update all WordPress plugins

wp plugin update
Enabling Maintenance mode...
Downloading update from https://downloads.wordpress.org/plugin/wordfence.7.1.7.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wp-meta-seo.3.7.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Downloading update from https://downloads.wordpress.org/plugin/wordpress-seo.7.6.1.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the plugin...
Plugin updated successfully.
Disabling Maintenance mode...
Success: Updated 3 of 3 plugins.
+---------------+-------------+-------------+---------+
| name | old_version | new_version | status |
+---------------+-------------+-------------+---------+
| wordfence | 7.1.6 | 7.1.7 | Updated |
| wp-meta-seo | 3.7.0 | 3.7.1 | Updated |
| wordpress-seo | 7.5.3 | 7.6.1 | Updated |
+---------------+-------------+-------------+---------+

Update WordPress Core

WordPress core files can be updated with “wp core update”.

wp core update
Success: WordPress is up to date.

Troubleshooting: Use the “--allow-root” flag if wp needs higher access (though this is an unsafe action).

Install PHP Child Workers

I edited the following file to setup PHP child workers /etc/php/7.2/fpm/pool.d/www.conf

Changes

> pm = dynamic
> pm.max_children = 40
> pm.start_servers = 15
> pm.min_spare_servers = 5
> pm.max_spare_servers = 15
> pm.process_idle_timeout = 30s
> pm.max_requests = 500
> php_admin_value[error_log] = /var/log/www-fpm-php.www.log
> php_admin_value[memory_limit] = 512M
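A common rule of thumb for pm.max_children (my own assumption, not from the PHP docs) is the RAM you can spare for PHP divided by the average PHP-FPM worker size; you can check a worker's resident size with ps. With roughly 1 GB budgeted for PHP and ~25 MB per worker, you land near the 40 used above:

```shell
# Rough sizing: RAM budget for PHP (MB) / average worker size (MB)
awk 'BEGIN { ram_mb = 1024; worker_mb = 25; printf "pm.max_children ~= %d\n", ram_mb / worker_mb }'
```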

Restart PHP

sudo service php7.2-fpm restart

Test NGINX config, reload NGINX config and restart NGINX

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Output (14 workers are ready)

Check PHP Child Worker Status

sudo service php7.2-fpm status
* php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 19:32:47 AEST; 20s ago
Docs: man:php-fpm7.2(8)
Main PID: # (php-fpm7.2)
Status: "Processes active: 0, idle: 15, Requests: 2, slow: 0, Traffic: 0.1req/sec"
Tasks: 16 (limit: 2322)
CGroup: /system.slice/php7.2-fpm.service
|- # php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
|- # php-fpm: pool www
`- # php-fpm: pool www

Memory Tweak (set at your own risk)

sudo nano /etc/sysctl.conf

vm.swappiness = 1

Setting swappiness to 1 all but disables swapping and tells the operating system to use RAM aggressively; a value of 10 is safer. Only set this if you have enough memory available (and free).

Possible swappiness settings:

> vm.swappiness = 0 Swap is disabled. In earlier kernel versions, this meant the kernel would swap only to avoid an out-of-memory condition when free memory fell below the vm.min_free_kbytes limit; in later versions, this is achieved by setting the value to 1.[2]
> vm.swappiness = 1 Kernel version 3.5 and over, as well as Red Hat kernel version 2.6.32-303 and over: minimum amount of swapping without disabling it entirely.
> vm.swappiness = 10 This value is sometimes recommended to improve performance when sufficient memory exists in a system.[3]
> vm.swappiness = 60 The default value.
> vm.swappiness = 100 The kernel will swap aggressively.
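You can read the live value without root, and (with root) apply a new value immediately via sysctl rather than waiting for a reboot; the /etc/sysctl.conf entry above is what makes it persist:

```shell
# Current value (no root needed)
cat /proc/sys/vm/swappiness
# Apply immediately (root required); /etc/sysctl.conf makes it survive reboots
# sudo sysctl vm.swappiness=1
```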

The “htop” tool is a handy alternative to “top” for monitoring memory.

Also, you can use good old “watch” command to show near-live memory usage (auto-refreshes every 2 seconds)

watch -n 2 free -m

Script to auto-clear the memory/cache

As a precaution, I am setting up a cron job to check free memory and automatically clear the cache (freeing memory) when it falls below 100MB.

Script Contents: clearcache.sh

#!/bin/bash

# Script help inspired by https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
ram_use=$(free -m)
IFS=$'\n' read -rd '' -a ram_use_arr <<< "$ram_use"
ram_use="${ram_use_arr[1]}"
ram_use=$(echo "$ram_use" | tr -s " ")
IFS=' ' read -ra ram_use_arr <<< "$ram_use"
ram_total="${ram_use_arr[1]}"
ram_used="${ram_use_arr[2]}"
ram_free="${ram_use_arr[3]}"
d=$(date '+%Y-%m-%d %H:%M:%S')
if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then
    echo "Sorry ram_free is not an integer"
elif [ "$ram_free" -lt "100" ]; then
    echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..."
    sync; echo 1 > /proc/sys/vm/drop_caches
    sync; echo 2 > /proc/sys/vm/drop_caches
    #sync; echo 3 > /proc/sys/vm/drop_caches # Not advised in production
    # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
elif [ "$ram_free" -lt "256" ]; then
    echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
else
    echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)"
fi
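The free -m field-splitting above is a little fragile (free's column layout has changed between procps versions); a shorter alternative (a sketch) reads /proc/meminfo directly. MemAvailable (the kernel's estimate of what can be allocated without swapping) is arguably a better trigger than the raw "free" column:

```shell
# MemAvailable in MB, straight from the kernel
ram_free=$(awk '/^MemAvailable:/ { print int($2 / 1024) }' /proc/meminfo)
echo "Available RAM: ${ram_free} MB"
```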

I set the cron job to run every 15 minutes by adding this entry to my crontab.

SHELL=/bin/bash
*/15  *  *  *  *  root /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log

Sample log output

2018-06-10 01:13:22 RAM OK (Total: 1993 MB, Used: 981 MB, Free: 387 MB)
2018-06-10 01:15:01 RAM OK (Total: 1993 MB, Used: 974 MB, Free: 394 MB)
2018-06-10 01:20:01 RAM OK (Total: 1993 MB, Used: 955 MB, Free: 412 MB)
2018-06-10 01:25:01 RAM OK (Total: 1993 MB, Used: 1002 MB, Free: 363 MB)
2018-06-10 01:30:01 RAM OK (Total: 1993 MB, Used: 970 MB, Free: 394 MB)
2018-06-10 01:35:01 RAM OK (Total: 1993 MB, Used: 963 MB, Free: 400 MB)
2018-06-10 01:40:01 RAM OK (Total: 1993 MB, Used: 976 MB, Free: 387 MB)
2018-06-10 01:45:01 RAM OK (Total: 1993 MB, Used: 985 MB, Free: 377 MB)
2018-06-10 01:50:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 379 MB)
2018-06-10 01:55:01 RAM OK (Total: 1993 MB, Used: 979 MB, Free: 382 MB)
2018-06-10 02:00:01 RAM OK (Total: 1993 MB, Used: 980 MB, Free: 380 MB)
2018-06-10 02:05:01 RAM OK (Total: 1993 MB, Used: 971 MB, Free: 389 MB)
2018-06-10 02:10:01 RAM OK (Total: 1993 MB, Used: 983 MB, Free: 376 MB)
2018-06-10 02:15:01 RAM OK (Total: 1993 MB, Used: 967 MB, Free: 392 MB)

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: Here is a handy guide on viewing swap file usage here. I’m not using swap files so it is only an aside.

After the system rebooted I checked if the swappiness setting was active.

sudo cat /proc/sys/vm/swappiness
1

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system

sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

Take note of your disk name (e.g., vda1).

I used tune2fs to enable writing data to the disk before writing to the journal. tune2fs is a great tool for setting file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may lose new data — the old data may magically reappear.“

Warning this can corrupt your data. More information here.

I ran this command.

tune2fs -o journal_data_writeback /dev/vda1

I edited my fstab to append the “writeback,noatime,nodiratime” flags to my volume so they apply after a reboot.

Edit FS Tab:

sudo nano /etc/fstab

I added “writeback,noatime,nodiratime” flags to my disk options.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options> <dump>  <pass>
# / was on /dev/vda1 during installation
#                <device>                 <dir>           <fs>    <options>                                             <dump>  <fsck>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /               ext4    errors=remount-ro,data=writeback,noatime,nodiratime   0       1
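After the reboot, you can confirm the options actually took effect by reading the live mount table (a sketch; your root device and exact options may differ):

```shell
# Show the mount options currently active on /
awk '$2 == "/" { print $4 }' /proc/mounts
```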

Updating Ubuntu Packages

Show updatable packages.

apt-get -s dist-upgrade | grep "^Inst"

Update Packages.

sudo apt-get update && sudo apt-get upgrade

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades

sudo apt-get install unattended-upgrades

Enable Unattended Upgrades.

sudo dpkg-reconfigure --priority=low unattended-upgrades

Now I configure what packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add packages that you don’t want automatically updated, you may want to manually update these (and monitor updates).

I prefer not to auto-update critical system apps (I will do this myself).

Unattended-Upgrade::Package-Blacklist {
"nginx";
"nginx-common";
"nginx-core";
"php7.2";
"php7.2-fpm";
"mysql-server";
"mysql-server-5.7";
"mysql-server-core-5.7";
"libssl1.0.0";
"libssl1.1";
};

FYI: You can find installed packages by running this command:

apt list --installed

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end of each line (the number is how many days to wait between runs).

> APT::Periodic::Update-Package-Lists “1”;
> APT::Periodic::Download-Upgradeable-Packages “1”;
> APT::Periodic::AutocleanInterval “7”;
> APT::Periodic::Unattended-Upgrade “1”;

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.

unattended-upgrade -d

Almost done.

I Rebooted

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and getting sub-2-second scores consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; it was always bad before.

Update: I found out the longer you set cache delays in Cloudflare the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings. It is well known that Google is pushing fast servers.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected to get such a high score.  I know Google likes (and is pushing) sub-1-second load times.

My site is loading so well that it is time I restored some old features that were too slow on other servers:

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat).

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows creating democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the [display-posts] shortcode.
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use the EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use the Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress emojis, removes the use of Google Fonts, lets you define pre-configured domains, async JavaScript files, etc.
  • I use BJ Lazy Load to prevent all images in a post from loading on load (and only as the user scrolls down the page).
  • The GTmetrix for WordPress and Cloudflare plugins are for information only.
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago

Looks good to me. Read IO is a little lower than UpCloud’s Singapore data centre, but it is still faster than Vultr.  I can’t wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Answers to Questions to support

Q1) What’s the difference between backups and snapshots (a Twitter user said Snapshots were a thing)

A1) Backups and snapshots are the same things with our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06/GB of the disk being captured, but we capture the whole disk, not just what was used. So for a 50GB drive, we charge $0.06 × 50 = $3/month, even if only 1GB was used.

  • Support confirmed that each backup is charged (so 5 manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backup charges.
  • I guess a 25GB server will be $1.50 a month

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.
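So, as a worked example at that rate, going 500 GB over the 2 TB allowance would cost about $28:

```shell
# 500 GB over quota at $0.056/GB
awk 'BEGIN { printf "Overage: $%.2f\n", 500 * 0.056 }'
```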

Q4) What happens if my balance hits $0?

A4) You will get a notification of low account balance 2 weeks in advance, based on your current daily spend. When your balance reaches zero, your servers will be shut down (but still charged for). You can top up automatically by assigning a payment type in your Control Panel, and you can deposit into your balance whenever you want. We use a prepaid model, so you need to top up before use rather than being billed afterwards. We give you lots of chances to top up.

Support Tips

  • One thing to note: when deleting server (CPU, RAM) instances, you get the option to delete the storage separately via a pop-up window. Choose to delete it permanently to remove the disk and save credit; any disk storage lying around, even unattached to a server, will still be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

Check out my new guide on Nixstats for awesome monitoring

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user-defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

Disable dashicons.min.css (for unauthenticated WordPress users).

Find functions.php in the www root

sudo find . -print |grep  functions.php

Edit functions.php

sudo nano ./wp-includes/functions.php

Add the following

// Remove dashicons in frontend for unauthenticated users
add_action( 'wp_enqueue_scripts', 'bs_dequeue_dashicons' );
function bs_dequeue_dashicons() {
    if ( ! is_user_logged_in() ) {
        wp_deregister_style( 'dashicons' );
    }
}

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my listening servers

server {
        root /www;

        ...
        listen 80 default_server http2;
        listen [::]:80 default_server http2;
        listen 443 ssl default_server http2;
        listen [::]:443 ssl default_server http2;
        ...

I tested a http2 push page by defining this in /etc/nginx/sites-available/default 

location = /http2/push_demo.html {
        http2_push /http2/pushed.css;
        http2_push /http2/pushedimage1.jpg;
        http2_push /http2/pushedimage2.jpg;
        http2_push /http2/pushedimage3.jpg;
}

Once I tested that push was working (demo here), I defined two files to push that were being sent from my server.

location / {
        ...
        http2_push /wp-includes/js/jquery/jquery.js;
        http2_push /wp-content/themes/news-pro/images/favicon.ico;
        ...
}

I used the WordPress Plugin Autoptimize to remove Google font usage (this removed a number of files being loaded when my page loads).

I used the WordPress plugin WP-Optimize to remove comments and disable pingbacks and trackbacks.

WordPress wp-config.php tweaks

# Memory
define('WP_MEMORY_LIMIT','1024M');
define('WP_MAX_MEMORY_LIMIT','1024M');
set_time_limit (60);

# Security
define( 'FORCE_SSL_ADMIN', true);

# Disable Updates
define( 'WP_AUTO_UPDATE_CORE', false );
define( 'AUTOMATIC_UPDATER_DISABLED', true );

# ewww.io
define( 'WP_AUTO_UPDATE_CORE', false );

Add 2FA Authentication to server logins.

I recently checked out YubiCo YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress too.

Tweaks Todo

  • Compress placeholder BJ Lazy Load Image (plugin is broken)
  • Solve 2x Google Analytics tracker redirects (done, switched to Matomo)

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that automatically handles image optimisations and pre Cloudflare/first hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

Results

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed a few typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

n' read -rd '' -a ram_use_arr <<< "$ram_use" ram_use="${ram_use_arr[1]}" ram_use=$(echo "$ram_use" | tr -s " ") IFS=' ' read -ra ram_use_arr <<< "$ram_use" ram_total="${ram_use_arr[1]}" ram_used="${ram_use_arr[2]}" ram_free="${ram_use_arr[3]}" d=`date '+%Y-%m-%d %H:%M:%S'` if ! [[ "$ram_free" =~ ^[0-9]+$ ]]; then echo "Sorry ram_free is not an integer" else if [ "$ram_free" -lt "100" ]; then echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB) - Clearing Cache..." sync; echo 1 > /proc/sys/vm/drop_caches sync; echo 2 > /proc/sys/vm/drop_caches #sync; echo 3 > /proc/sys/vm/drop_caches #Not advised in production # Read for more info https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/ exit 1 else if [ "$ram_free" -lt "256" ]; then echo "$d RAM ALMOST LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 else if [ "$ram_free" -lt "512" ]; then echo "$d RAM OK (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 else echo "$d RAM LOW (Total: $ram_total MB, Used: $ram_used MB, Free: $ram_free MB)" exit 1 fi fi fi fi

I set the cronjob to run every 15 minutes and added this to my crontab.
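A 15-minute schedule in crontab looks like this (the script path /scripts/clearcache.sh is an assumption; adjust it to wherever you saved the script):

```
# Run the RAM check every 15 minutes and append output to the log
*/15 * * * * /bin/bash /scripts/clearcache.sh >> /scripts/clearcache.log 2>&1
```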

 

Sample log output

 

I will check the log (/scripts/clearcache.log) in a few days and view the memory trends.

After half a day, Ubuntu 18.04 is handling memory just fine; no externally triggered cache clears have happened 🙂

Free memory over time

I used https://crontab.guru/every-hour to set the right schedule in crontab.

I rebooted the VM.

Update: I now use Nixstats monitoring

Swap File

FYI: there is a handy guide on viewing swap file usage here. I’m not using swap files, so this is only an aside.

After the system rebooted I checked if the swappiness setting was active.
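One way to check (a quick sketch; the value printed depends on your system):

```shell
# Print the current swappiness value (0-100)
cat /proc/sys/vm/swappiness
```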

 

Yes, swappiness is set.

File System Tweaks – Write Back Cache (set at your own risk)

First, check your disk name and file system
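A simple way to do this (output will differ on your system):

```shell
# Show mounted file systems, their types and sizes; note the device name (e.g. vda1)
df -hT
```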

 

Take note of your disk name (e.g vda1)

I used tune2fs to enable writing data to the disk before writing to the journal. tune2fs is a great tool for setting ext file system parameters.

Warning (snip from here): “I set the mode to journal_data_writeback. This basically means that data may be written to the disk before the journal. The data consistency guarantees are the same as the ext3 file system. The downside is that if your system crashes before the journal gets written then you may loose new data — the old data may magically reappear.“

Warning: this can corrupt your data. More information here.

I ran this command.
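The command was along these lines (a sketch only; substitute your own device name, and remember this changes the default journal mode):

```
# Set the ext4 default mount option to writeback journalling (risky - see warning above)
sudo tune2fs -o journal_data_writeback /dev/vda1
```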

 

I edited my fstab to append the “writeback,noatime,nodiratime” flags for my volume so they apply after a reboot.

Edit FS Tab:

 

I added “writeback,noatime,nodiratime” flags to my disk options.
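For reference, an fstab line with those options might look like this (an example only; keep your own device/UUID and existing options, and note that on ext4 the journal mode option is spelled data=writeback):

```
# /etc/fstab - example line only
/dev/vda1  /  ext4  defaults,data=writeback,noatime,nodiratime  0  1
```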

 

Updating Ubuntu Packages

Show updatable packages.

 

Update Packages.
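Both steps use standard Ubuntu apt commands:

```
sudo apt update          # refresh package lists
apt list --upgradable    # show updatable packages
sudo apt upgrade         # update packages
```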

 

Unattended Security Updates

Read more on Ubuntu 18.04 Unattended upgrades here, here and here.

Install Unattended Upgrades
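The standard Ubuntu package name is unattended-upgrades:

```
sudo apt install unattended-upgrades
```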

 

Enable Unattended Upgrades.
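This is usually done by reconfiguring the package:

```
sudo dpkg-reconfigure --priority=low unattended-upgrades
```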

 

Now I configure which packages not to auto-update.

Edit /etc/apt/apt.conf.d/50unattended-upgrades

Find “Unattended-Upgrade::Package-Blacklist” and add any packages that you don’t want updated automatically; you may prefer to update these manually (and monitor the updates).

I prefer not to auto-update critical system apps (I will do this myself).
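For example (the package names here are illustrative only; pick the ones you want to manage yourself):

```
Unattended-Upgrade::Package-Blacklist {
    // Examples only - packages to update manually
    "nginx";
    "mysql-server";
};
```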

 

FYI: You can find installed packages by running this command:
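On Debian/Ubuntu either of these works:

```shell
dpkg -l
# or, with newer apt:
# apt list --installed
```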

 

Enable automatic updates by editing /etc/apt/apt.conf.d/20auto-upgrades

Edit the number at the end of each line (the number is the interval in days between runs; “1” means daily).

> APT::Periodic::Update-Package-Lists "1";
> APT::Periodic::Download-Upgradeable-Packages "1";
> APT::Periodic::AutocleanInterval "7";
> APT::Periodic::Unattended-Upgrade "1";

Set to “0” to disable automatic updates.

The results of unattended-upgrades will be logged to /var/log/unattended-upgrades

Update packages now.
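A run can be triggered manually (dry-run first to preview what would change):

```
sudo unattended-upgrade --dry-run --debug
sudo unattended-upgrade
```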

 

Almost done.

I rebooted.

GT Metrix Score

I almost fell off my chair. It’s an amazing feeling hitting refresh in GT Metrix and getting sub-2-second scores consistently (and that is with 17 assets loading and 361KB of HTML content).

0.9sec load times

WebPageTest.org Test Score

Nice. I am not sure why the effective use of CDN has an X rating, as I have the EWWW CDN and Cloudflare. First Byte time is now a respectable “B”; this was always bad before.

Update: I found out that the longer the cache expiry you set in Cloudflare, the higher the score.

Web Page Test

GT Metrix has a nice historical breakdown of load times (night and day).

Upcloud Site Speed in GTMetrix

Google Page Speed Insight Desktop Score

I benchmarked with https://developers.google.com/speed/pagespeed/insights/

This will help with future SEO rankings; it is well known that Google favours fast sites.

100% Desktop page speed score

Google Chrome 70 Dev Console Audit (Desktop)

100% Chrome Audit Score

This is amazing; I never expected to get such a high score. I know Google likes (and is pushing) sub-1-second load times.

My site is loading so well that it is time I restored some old features that were too slow on other servers:

  • I disabled Lazy loading of images (this was not working on some Android devices)
  • I re-added the News Widget and news images.

GTMetrix and WebpageTest scores are still good (even after adding bloat).

Benchmarks are still good

My WordPress site is not really that small either

Large website

FYI: WordPress Plugins I use.

These are the plugins I use.

  • Autoptimize – Optimises your website, concatenating the CSS and JavaScript code, and compressing it.
  • BJ Lazy Load (Now Disabled) – Lazy image loading makes your site load faster and saves bandwidth.
  • Cloudflare – Cloudflare speeds up and protects your WordPress site.
  • Contact Form 7 – Just another contact form plugin. Simple but flexible.
  • Contact Form 7 Honeypot – Add honeypot anti-spam functionality to the popular Contact Form 7 plugin.
  • Crayon Syntax Highlighter – Supports multiple languages, themes, highlighting from a URL, local file or post text.
  • Democracy Poll – Allows to create democratic polls. Visitors can vote for more than one answer & add their own answers.
  • Display Posts Shortcode – Display a listing of posts using the shortcode
  • EWWW Image Optimizer – Reduce file sizes for images within WordPress including NextGEN Gallery and GRAND FlAGallery. Uses jpegtran, optipng/pngout, and gifsicle.
  • GDPR Cookie Consent – A simple way to show that your website complies with the EU Cookie Law / GDPR.
  • GTmetrix for WordPress – GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users. Your users will love you for it.
  • TinyMCE Advanced – Enables advanced features and plugins in TinyMCE, the visual editor in WordPress.
  • Wordfence Security – Anti-virus, Firewall and Malware Scan
  • WP Meta SEO – WP Meta SEO is a plugin for WordPress to fill meta for content, images and main SEO info in a single view.
  • WP Performance Score Booster – Speed-up page load times and improve website scores in services like PageSpeed, YSlow, Pingdom and GTmetrix.
  • WP SEO HTML Sitemap – A responsive HTML sitemap that uses all of the settings for your XML sitemap in the WordPress SEO by Yoast Plugin.
  • WP-Optimize – WP-Optimize is WordPress’s #1 most installed optimisation plugin. With it, you can clean up your database easily and safely, without manual queries.
  • WP News and Scrolling Widgets Pro – WP News Pro plugin with six different types of shortcode and seven different types of widgets. Display News posts with various designs.
  • Yoast SEO – The first true all-in-one SEO solution for WordPress, including on-page content analysis, XML sitemaps and much more.
  • YouTube – YouTube Embed and YouTube Gallery WordPress Plugin. Embed a responsive video, YouTube channel, playlist gallery, or live stream

How I use these plugins to speed up my site.

  • I use EWWW Image Optimizer plugin to auto-compress my images and to provide a CDN for media asset delivery (pre-Cloudflare). Learn more about ExactDN and EWWW.io here.
  • I use Autoptimize plugin to optimise HTML/CSS/JS and ensure select assets are on my EWWW CDN. This plugin also removes WordPress emojis, removes the use of Google Fonts, allows you to define pre-configured domains, async JavaScript files etc.
  • I use BJ Lazy Load to prevent all images in a post from loading on load (and only as the user scrolls down the page).
  • The GTmetrix for WordPress and Cloudflare plugins are for information only.
  • I use WP-Optimize to ensure my database is healthy and to disable comments/trackbacks and pingbacks.

Let’s Test UpCloud’s Disk IO in Chicago

Looks good to me. Read IO is a little lower than UpCloud’s Singapore data centre, but it’s still faster than Vultr. I can’t wait for more data centres to become available around the world.

Why is UpCloud Disk IO so good?

I asked UpCloud on Twitter why the Disk IO was so good.

  • “MaxIOPS is UpCloud’s proprietary block-storage technology. MaxIOPS is physically redundant storage technology where all customer’s data is located in two separate physical devices at all times. UpCloud uses InfiniBand (!) network to connect storage backends to compute nodes, where customers’ cloud servers are running. All disks are enterprise-grade SSD’s. And using separate storage backends, it allows us to live migrate our customers’ cloud servers freely inside our infrastructure between compute nodes – whether it be due to hardware malfunction (compute node) or backend software updates (example CPU vulnerability and immediate patching).“

My Answers to Questions to support

Q1) What’s the difference between backups and snapshots (a Twitter user said Snapshots were a thing)

A1) Backups and snapshots are the same thing in our infrastructure.

Q2) What are charges for backup of a 50GB drive?

A2) We charge $0.06/GB of the disk being captured, and we capture the whole disk, not just the used space. So for a 50GB drive we charge $0.06 * 50 = $3/month, even if only 1GB is used.

  • Support confirmed that each backup is charged (so 5 manual backups are charged 5 times). Setting up a daily auto backup schedule for 2 weeks would create 14 billable backup charges.
  • I guess backups for a 25GB server would be $1.50 a month.
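A quick sanity check of the pricing above (the $0.06/GB rate is from support’s answer; the disk sizes are the examples in the text):

```shell
# Backup cost = $0.06 x provisioned GB (used space does not matter)
for disk_gb in 25 50; do
  awk -v g="$disk_gb" 'BEGIN { printf "%dGB disk: $%.2f/month\n", g, 0.06 * g }'
done
```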

Q3) What are data charges if I go over my 2TB quota?

A3) Outgoing data charges are $0.056/GB after the pre-configured allowance.

Q4) What happens if my balance hits $0?

A4) You will get notification of a low account balance 2 weeks in advance, based on your current daily spend. When your balance reaches zero your servers will be shut down, but they will still be charged for. You can set up automatic top-ups by assigning a payment type in your Control Panel, and you can deposit into your balance whenever you want. We use a prepay model, so you need to top up before use rather than being billed afterwards. We give you lots of chances to top up.

Support Tips

  • One thing to note: when deleting server (CPU, RAM) instances, you get the option to delete the storage separately via a pop-up window. Choose to delete the disk permanently to save credit; any disk storage left lying around, even unattached to a server, will be billed.
  • Charges are in USD.

I think it’s time to delete my domain from Vultr in Sydney.

Deleted my Vultr domain

I deleted my Vultr domain.

Delete Vultr Server

Done.

Check out my new guide on Nixstats for awesome monitoring

What I would like

  1. Ability to name individual manual backups (tag with why I backed up).
  2. Ability to push user defined data from my VM to the dashboard
  3. Cheaper scheduled backups
  4. Sydney data centres (one day)

Update: Post UpCloud Launch Tweaks (Awesome)

I had a look at https://www.webpagetest.org/ results to see where else I can optimise webpage delivery.

Optimisation Options

HTTP2 Push

  • Introducing HTTP/2 Server Push with NGINX 1.13.9 | NGINX
  • How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 | DigitalOcean

I added http2 to my listen directives and tested an HTTP/2 push page by defining this in /etc/nginx/sites-available/default.
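A minimal sketch of that nginx config (the server name and asset paths are placeholders, not my actual files):

```
# Enable HTTP/2 on the TLS listener and push two placeholder assets
server {
    listen 443 ssl http2;
    server_name example.com;

    location = / {
        # Push these files alongside the page (names are illustrative)
        http2_push /css/style.css;
        http2_push /images/logo.png;
    }
}
```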

Once I confirmed that push was working (demo here), I defined two files to push that were being sent from my server.

2FA Authentication at login

I recently checked out Yubico YubiKeys and I have secured my Linux servers with 2FA prompts at login. Read the guide here. I secured my WordPress as well.

Performance

I used the WordPress Plugin Autoptimize to remove Google font usage (this removed a number of files being loaded when my page loads).

I used the WordPress plugin WP-Optimize to remove comments and disable pingbacks and trackbacks.

Results

Conclusion

I love UpCloud’s fast servers, give them a go (use my link and get $25 free credit).

I love Cloudflare for providing a fast CDN.

I love ewww.io’s automatic Image Compression and Resizing plugin that automatically handles image optimisations and pre Cloudflare/first hit CDN caching.

Read my post about server monitoring with Nixstats here.

Let the results speak for themselves (sub-1-second load times).

More Reading on UpCloud

https://www.upcloud.com/documentation/faq/

UpCloud Server Status

http://status.upcloud.com

I hope this guide helps someone.

Free Credit

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v2.2 Converting to Blocks

v2.1 Newer GTMetrix scores

v2.0 New UpCloud UI Update and links to new guides.

v1.9 Spelling and grammar

v1.8 Trial mode gotcha (deposit money ASAP)

v1.7 Added RSA Private key info

v1.7 – Added new firewall rules info.

v1.6 – Added more bloat to the site, still good.

v1.5 Improving Accessibility

v1.4 Added Firewall Price

v1.3 Added wp-config and plugin usage descriptions.

v1.2 Added GTMetrix historical chart.

v1.1 Fixed free typos and added final conclusion images.

v1.0 Added final results

v0.9 added more tweaks (http2 push, removing unwanted files etc)

v0.81 Draft  – Added memory usage chart and added MaxIOPS info from UpCloud.

v0.8 Draft post.

Filed Under: CDN, Cloud, Cloudflare, Cost, CPanel, Digital Ocean, DNS, Domain, ExactDN, Firewall, Hosting, HTTPS, MySQL, MySQLGUI, NGINX, Performance, PHP, php72, Scalability, TLS, Ubuntu, UpCloud, Vultr, Wordpress Tagged With: draft, GTetrix, host, IOPS, Load Time, maxIOPS, MySQL, nginx, Page Speed Insights, Performance, php, SSD, ubuntu, UpCloud, vm

How to back up an iPhone (including photos and videos) multiple ways

June 2, 2019 by Simon

This guide is for anyone who needs to back up their iPhone (the complete device, along with separate backups of photos and videos).

This is not a paid promo, I don’t get a kickback for mentioning the awesome app below.

Check out my other related posts:

  • Backing up files to a Backblaze B2 Cloud Bucket with Duplicati
  • Backing up your computer automatically with BackBlaze software (no data limit)

At the time of writing iTunes v12.9.5.7 was the latest version of iTunes.

iTunes is Apple’s official software for getting files to and from an iPhone, iPod or iPad. If rumours are right, Apple will kill off the iTunes software at the end of 2019; if that happens, I will update this guide.

Disclaimer: I provide no warranty or support for this app, this is what happened to work for me.

Why write a guide on Backing up a mobile phone?

  • It’s not that simple, and I get asked how to do this almost every week.
  • No one thinks to back up their photos until they fear their phone is lost or stolen.
  • I am about to publish a few posts on backing up (development machines, servers etc.) with automated and free open-source software, so this post will be handy to link to from those.
  • The last time I wrote an article on backups was in March 2016, and it’s not that great these days.

Golden rules of backing up.

  1. Back up to three different locations.
  2. Two of the media need to be different.
  3. One of the locations needs to be offsite.
  4. Test your backups (a backup is only good if you can restore it). (from @Daniel15)
  5. If the backup is encrypted, make sure you have the decryption keys. (from @Daniel15)

Backing up the official way (with iTunes)

Apple would prefer you just pay them to extend your free 5GB of iCloud storage and not worry yourself with backups or manual steps.

My wife’s iPhone is always popping up messages saying that her iCloud storage is nearly full and she should upgrade the free space limit.

Screenshot of the iCloud settings on an iPhone saying iCloud is nearly full (5GB of 5GB used)

I know she has 100GB+ of images and movies on her iPhone, but iCloud has only backed up 4GB of photos and videos. Do not trust any backup status unless you can verify all files from a restore.

For the record I use an Android phone (Android backup guide coming soon).

iCloud is a good idea for automatic backups, but I prefer to see my photos and back them up myself. Also, Apple iCloud is not free from troubles. If you want Apple to handle backups then I suggest you upgrade your iCloud storage from the free 5GB to a more sensible 200GB or more.

Screenshot of https://support.apple.com/en-au/HT201318

Here in Australia Apple charge the following for extra storage.

  • 50GB = $1.49 a month
  • 200GB = $4.49 a month
  • 2TB (2,000GB) = $14.99 a month

A rough estimate: 50GB is enough to store about 12,000 files (9,000 photos + 3,000 videos) *

  • * = based on stats from the backup below (and depending on the size and length of your videos).
Screenshot of Apple iCloud pricing: 50GB = $1.49 a month, 200GB = $4.49 a month, 2TB (2,000GB) = $14.99 a month

You can upgrade your paid Apple iCloud backup limit from your iCloud Storage Settings from your iPhone’s settings screen.

Screenshot of iPhone iCloud upgrade button.

IMHO: Apple does make it clear what you are getting before you purchase (but they do not push it enough and people assume their data is safe).

iPhone picture of iCloud pricing: 50GB = $1.49 a month, 200GB = $4.49 a month, 2TB (2,000GB) = $14.99 a month

TIP: Please review your existing iPhone data usage and iCloud usage before it’s too late

I don’t want to pay Apple for more iCloud storage

Like you, I am tight too, but I do pay Google $2.49 a month for 100GB of Google Drive storage to back up my Android phone (Google storage is a bit cheaper than Apple’s).

Screenshot of Google Drive https://www.google.com/drive/

It is nice knowing your phone is auto backed up.

Install iTunes on Windows (10)

On your Windows 10 computer click the Start button. If you don’t have a Microsoft account, go here and create one. Click the Start button and type “itunes” (you don’t need to type into a box; just start typing after clicking the Start button).

Then click “iTunes, install app”

Screenshot of a Windows start menu and Get iTunes icon

If you are logged into your Windows 10 store with your Microsoft ID click “Get” (if not you will need to login to the store).

Screenshot of me logged into the Windows Microsoft Store.

After you click ‘Get’, iTunes will start downloading.

Screenshot of the Windows Store downloading iTunes

When iTunes is downloaded Windows will install it. When it’s installed click ‘Launch’

iTunes is installed, Click launch screenshot

You will need to agree to Apple’s terms of service. Click ‘Agree’

Apple terms of service screenshot.

When iTunes opens click “Agree”

Screenshot of Apple iTunes asking if we can agree to share analytics data

Now log in to iTunes with your Apple ID (if you don’t have one, create one here).

Screenshot of iTunes open with an arrow pointing to the Account menu

Click the ‘Account’ then ‘Sign In’ menu.

Screenshot of the account then sign in menu

Login to iTunes with your Apple ID

Screenshot of the apple sign in box.

Optional: If you have Two Factor Authorisation (2FA) turned on for your Apple ID (you should), you will need to enter a 6-digit code.

Screenshot of an apple 6 digit 2FA code

Apple Two Factor Authorisation (2FA) will send a login code to one of your other devices; you will need it to log in (confirming you own the account).

Screenshot of apples 2FA web page at https://support.apple.com/en-au/HT204915

TIP: Check out https://twofactorauth.org/ to see what other sites use Two Factor Authorisation (2FA). I use hardware Yubico YubiKeys to protect logins to WordPress, Linux and websites.

Now back to the article, iTunes should be ready to allow us to backup our iPhone.

Screenshot of me logged into iTunes

Before I continue, I will click the ‘Edit‘ then ‘Preferences‘ menu to view where iTunes will download media to (different to backup data); not important, but I just want to see that it is not pointed at my smaller C: drive before I back up my phone.

I changed the location to the larger S:\ drive.

Screenshot of Apple setting screen (tab Advanced)

I was greeted with a message asking me to confirm that I wanted to sync the iPhone I just plugged in, called ‘EllieRose’. I clicked ‘Continue’.

Confirm access to the iPhone screenshot

On the iPhone I also clicked ‘Trust’ to allow iTunes to talk to it.

iPhone screenshot to allow iTunes to talk to it.

I was then prompted to download a software update for the iPhone; it appears this phone is not running the latest software.

Screenshot of a question to update to iOS v 12.3.1

I was prompted to sync the iPhone to the computer

screenshot of a prompt to sync purchases from the iPhone to the computer

A backup of the phone was underway.

Screenshot of iTunes backup progress with a progress bar

If the screen above does not appear click the icon below to view the backup and restore iPhone menu.

Screenshot showing the iPhone Device button in iTunes

The iPhone was backing up to this folder on my computer:

C:\Users\Simon Fearby\Apple\MobileSync\Backup

I was not prompted for a backup location, so I will move this backup folder after the backup completes (so my C: drive does not fill up).

My drives.

  • C:\ = 500GB drive (faster SSD)
  • S:\ = 2,000GB Drive (slower Magnetic)
Screenshot showing  iTunes backup and  the C:\Users\Simon Fearby\Apple\MobileSync\Backup folder

The iTunes iPhone backup is now complete (it took about 2 hours).

Screenshot of the complete backup

Now that the iPhone was backed up in iTunes, the operating system update started to run.

Screenshot of iTunes verifying the iPhone update

Update verified.

Screenshot of iTunes updating the iPhone OS.

During the update the iPhone was unavailable.

Screenshot of iPhone installing firmware

Done; the iPhone had updated its operating system in about 30 minutes.

Screenshot of iTunes showing the newer OS version.

Now let’s see how much space I have available on my C: drive.

Screenshot of disk usage on my c and S drive.

It looks like the iPhone backup iTunes made was 60GB in size.

Screenshot of Windows reporting the backup was 60GB in size.

Using Windows Explorer I moved the backup from..

C:\Users\Simon Fearby\Apple\MobileSync\Backup

to..

S:\Backup\AlisonsiPhone\iTunes\31May2019

(a right-click drag-and-drop action in Explorer)
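If you prefer a command line over drag-and-drop, robocopy can do the same move (a sketch using the paths from the text; /E copies all subfolders and /MOVE deletes the source after a successful copy):

```
robocopy "C:\Users\Simon Fearby\Apple\MobileSync\Backup" "S:\Backup\AlisonsiPhone\iTunes\31May2019" /E /MOVE
```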

Screenshot of right click drag and drop move folder.

It took me about 20 minutes to move the 60GB folder.

Screenshot of the Windows copy dialog progress.

TIP: You can (should) also copy this backup folder to..

  • A removable hard drive
  • Removable USB Flash Drive
  • A NAS or SAN Drive
  • etc

Now this backup is available for me to restore in the future if I need it.

In the case of a restore, I just need to move the backup into the iTunes expected location.

Copy from..

S:\Backup\AlisonsiPhone\iTunes\31May2019

..to..

C:\Users\Simon Fearby\Apple\MobileSync\Backup

The Catch

The catch with backups made with iTunes is that they are useless if you want to restore individual files (say, to find a photograph or video). iTunes backups are usually tens of thousands of files with random folders and filenames.

Screenshot of the messy iTunes backup; all files are obfuscated to random GUID filenames

If you want to JUST backup photos and videos read on.

How to back up just photos and videos from your iPhone.

This part of the guide needs the paid version of the iOS app Photo Transfer App. In Australia the app is free, but the full version (needed to restore everything, unlocked via an in-app purchase) costs $10.99.

This app is well worth $10.99 AUD (it may be cheaper in other countries) to have it push photos and videos from your iPhone to a free companion Windows program (a Mac version exists too).

Screenshot of http://phototransferapp.com/

Buy the iOS version and install it on your iPhone.

Then download the free Windows version: http://phototransferapp.com/win/

Extract the files from the zip file.

I extracted PhotoTransferapp.exe from PhotoTransferapp.zip

When I ran PhotoTransferApp.exe I got an error saying I needed to install the Adobe AIR runtime.

Screenshot of me installing Adobe Air run times from https://get.adobe.com/air/

Go to https://get.adobe.com/air/ and download and install the Air run time.

Screenshot of https://get.adobe.com/air/

Install the Adobe Air run time.

Install Adobe Air From https://get.adobe.com/air/

I installed Adobe AIR, reopened PhotoTransferApp.exe and was prompted to allow access through the Windows Firewall.

I was on a home network so I enabled firewall access.

Screenshot of Photo Transfer App asking fore firewall access.

The Photo Transfer App prompted me to ‘Discover Devices‘ and to make sure the Transfer app is running on the iPhone.


Before I clicked ‘Discover Devices” on the Photo Transfer App on Windows I opened the Transfer app (mentioned above) on the iPhone.

Photo of the Transfer app on the iPhone

On the iPhone I clicked ‘Send‘

Click Send

The transfer app will ask for access to your photos, you will need to press OK.

Screenshot of the transfer app asking for permissions to photos

On the iPhone again I clicked ‘Windows‘.


The iOS Transfer app now said I should run the ‘Photo Transfer App‘ on Windows.


TIP: You can transfer over WiFi (if your iPhone and Windows device is on the same WiFi Network) or you can transfer over a USB cable.

Screenshot: transfer via WiFi or USB?

Before I started the ‘Detect Device‘ or ‘transfer‘, I set the Backup path location to my S: drive by clicking settings in the bottom right and choosing a folder (as my C Drive is a bit small).

Screenshot of Photo Transfer App choosing a folder to backup to.

I noticed the app uses port 57777. I temporarily disabled the whole Windows Firewall just in case it blocked the photo backup.

In hindsight, it was not a good idea to disable the whole firewall, but because I was at home on a safe network I felt safe to do so. Next time I will not disable the firewall and see if this still works.

If you are on an internet cafe, school or university network do not disable your firewall.

I clicked ‘Start‘, typed ‘firewall‘ and clicked ‘Windows Defender Firewall‘.

Screenshot Windows Defender firewall icon

I turned off my firewall.

Screenshot Firewall disabled.

I clicked ‘Detect Devices‘ and the iPhone ‘EllieRose‘ appeared on the left. I double-clicked the iPhone name and was prompted with an Authorization required message.

Screenshot: Authorisation required message.

I looked at the ‘Transfer‘ app on iOS and clicked ‘Yes, always‘ to allow access.

Screenshot: Authorisation yes/no on the iPhone

After 20 seconds I could see the photos from my iPhone on Windows.

Screenshot, all of my iPhone images were appearing on Window.

I selected all camera albums to back up and clicked Backup. A prompt to upgrade to the full version will appear; you will need to buy the upgrade.

Full backup happens only if you upgrade to a full version via an in app purchase

Once the full version is purchased, the backup will be allowed.

You can upgrade from the free to paid full version from the main screen of the Transfer app.

Screenshot get full access by upgrading the free app to a paid version

The upgrade In App Purchase is $10.99 AUD

Screenshot of a $10.99 In App Purchase on the iPhone

Once you have made the in-app purchase you can run the backup again.

Screenshot of all camera categories on the iPhone on windows

Now I could see photos being copied from the iPhone to my defined backup folder.

Screenshot underway.

I went to bed as I knew there were about 100GB of files on the iPhone and this was going to take a while.

In the morning the backup was done.

Screenshot backup done.

26,000 files were backed up (over 100GB).

I now had a full iPhone backup made by iTunes and a copy of all photos and videos.

Screenshot showing the 2 backups (a) iTunes made and b) Photo Transfer App made)

I turned on the Firewall again.

Screenshot: I re enabled the firewall.

I now had 160GB of backed up photos, videos and phone backup.

160GB of files  from both backups.

Backup your iPhone backups to the cloud

The steps to do this two ways will be added soon.

I will add a section on how you can back up the iTunes backup and the manually synced photo and video backups to the cloud automatically, plus a more complex but cheaper way for $0.005 per GB.

Watch this post.

How to back up an Android Phone

Article coming soon.

Other Links

Read the official iTunes/iCloud backup guide from Apple here: https://support.apple.com/en-au/HT203977

Backing up your computer automatically with BackBlaze software (no data limit)

Versions

v1.3 Added Backblaze article link

v1.2 Added more images (from a phone that does not have the iOS app already)

v1.1 Added an Android heading.

v1.0 Initial Post

Filed Under: Apple, Backup, Cloud, Google Tagged With: Apple, Backup, iCloud, iPhone

Connecting to a server via SSH with Putty

April 7, 2019 by Simon

This post aims to show how you can connect to a remote VM server over SSH (Secure Shell) with a free program called Putty on Windows. This is not an advanced guide; I hope you find it useful.

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.

You will learn how to connect (from Windows) to a remote Linux computer using SSH (Secure Shell). Once you log in you can remotely edit web pages, learn to code, install programs or do just about anything.

Common Terms (Glossary)

  • Putty: Putty is a free program that allows you to connect to a server over SSH. Putty can be downloaded from here.
  • Port: A port is a number given to a virtual lane on the internet (a port is similar to a frequency in radio waves, but all ports share the same transport layer on the internet). Unencrypted web pages work on port 80, older mail on port 25 and encrypted web pages on port 443. SSH uses port 22. Read about port numbers here.
  • SSH: SSH (Secure Shell) is a standard that allows you to connect securely to a server over an encrypted channel (it replaces the older, unencrypted Telnet). Read more here.
  • Shell: Shell or Unix Shell is the name given to the interactive command-line interface to Linux. Read more about the shell here.
  • Telnet: Telnet is a standard in the TCP/IP suite that allows two-way communication between computers (all communication is sent as characters, not graphics). Read more on Telnet here and read about the TCP protocols here and here.
  • VM: VM stands for Virtual Machine, the name given to a server you can rent (the hardware is owned and run by someone else). Read more here.

Read about other common glossary terms used on the Internet here:
https://en.wikipedia.org/wiki/Glossary_of_Internet-related_terms

Background

If you want a webpage on the internet (or just a server to learn to program on), it’s easier to rent a VM for a few dollars a month and manage it yourself (over SSH) than to buy a $5,000 server, place it in a data centre, pay for electricity and drive in every few days to update it. Remote management of VM servers via SSH is the way to go for small to medium solutions.

  • A simple web hosting site may cost < $5 a month but is very limited.
  • A self-managed VM costs about $5 a month
  • A website service like Wix, Squarespace, Shopify or WordPress will cost about $30–$99 a month.
  • A self-owned server will cost hundreds to thousands upfront.

There are pros and cons to all the solutions above (e.g. cost, security, scalability, performance, risk) but these are outside this post’s topic. I have deployed VMs on providers like AWS, Digital Ocean, Vultr and UpCloud for years. If you need to buy a VM you can use this link and get $25 free credit.

I used to use the OSX operating system on Apple computers and the VSSH program to connect to servers deployed on UpCloud (using this method). With the demise of my old Apple MacBook (due to heat) I have moved back to Windows (I am never using Apple hardware again until they solve the heat issues).

Also, I prefer to use Linux servers in the cloud (over say Windows) because I believe they are cheaper, faster and more secure.

Enough talking, let’s configure a connection.

Public and Private Keys?

Whenever you want to connect to a remote server via SSH you will need a public and private key pair to secure communications between you and the remote server.

The public key is configured on your server (on Linux you add the public key to this file ~/.ssh/authorized_keys).

The private key is used by programs (usually on your local computer) to connect to the remote server.
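For comparison, the OpenSSH command-line client reads these details from a config file. Here is a sketch of an ~/.ssh/config entry (the host alias, IP address and key path below are hypothetical):

```
Host myserver
    HostName 203.0.113.10
    Port 22
    User username
    IdentityFile ~/.ssh/server
```

With that in place, `ssh myserver` connects using the named private key. Putty stores the equivalent settings in its own saved sessions instead.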


How to create a Public and Private Key on Linux

I usually run this command on Ubuntu or Debian Linux to generate a public and private SSH key.

sudo ssh-keygen -t rsa -b 4096

The key below was generated for this post and is not used online. Keys are like physical keys: anyone who has them and knows where they fit can use them.

Output:

Generating public/private rsa key pair.
Enter file in which to save the key (/username/.ssh/id_rsa): ./server
Enter passphrase (empty for no passphrase): ********
Enter same passphrase again: ********
Your identification has been saved in ./server.
Your public key has been saved in ./server.pub.
The key fingerprint is:
SHA256:sxfcyn4oHQ1ugAdIEGwetd5YhxB8wsVFxANRaBUpJF4 [email protected]
The key's randomart image is:
+---[RSA 4096]----+
| .oB**[email protected]       |
|  +.==B.+        |
| o .o+o+..       |
|  .. +..o...     |
|    o ..Sooo.    |
|         ++o.    |
|        .o+o     |
|        .oo .    |
|         ...     |
+----[SHA256]-----+

Two files were created:

server
server.pub
  • “server” is the private key
  • “server.pub” is the public key
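If you want to script key creation, ssh-keygen can also run non-interactively. A minimal sketch (the empty passphrase set by -N "" is for this demo only; use a real passphrase for production keys):

```shell
# Generate a 4096-bit RSA key pair without any prompts
# -f sets the output file names ("server" and "server.pub")
# -N "" sets an empty passphrase (demo only)
# -C sets the comment embedded in the public key
ssh-keygen -t rsa -b 4096 -N "" -f ./server -C "demo@example.com"

# Both files now exist in the current directory
ls -l server server.pub
```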

Public/Private Key Contents

Public Key Contents (“server.pub”)

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC7Xo9bOCXJ7gVjP8tKOxHVId3KTo5I0VRU/kSRK3+mGd5VbDbQABo3tdWzYhzkjODzRS9TeL2dcLAQNNQKshi9IW5IGDS1NocSCLFQId5BFr9s3E79fkWqcZkKmwocepXOOZ91EDKgIFxviOzZKe99sdxxMoZzi1nxgVyXl4TnaelyiQxeKYniVs1iqDfYWQCxkKsmYit8TvGtOwrhLvKNh9362/y5ebpXVdFlRuB83eF7k8RHNYCQyOJJVx4cw3TIsAN0GMOwjuaOZbp7rR1d6k7RZmaApNRTbaWOXy32UiBST5TV/jXF2UL/4IBnn+yvCrM0v79e/3omgjlVVKfWByFzMv/YlBKCAX3xxtJQ9RkzTqseKupXmmJU0rik6Xuz31N2oyw4M7yJofSUGVCN0pnpKEvnKxqolfD9egdQy2XDaNioY7cvOO1qRegCKE0sDh1m5MzJWMhbDs7macSMyd6+0O5qWc/ByHy0G/mVbd8kO4jIuEzEs4IFkPCToEZp7KfkY7KRkOhccLbQ4ApCesUfBtGGAN1f33NnXCHae3Cx46nSd23fvgDZUVnjI47tNJH5Z8FNVlW/fp5Rgeu/aPUephnDX2IBxwIKQOmSTDY+nxU4V+c93H1gSOJfvqYbVKIAXKyN9Yh6LC44ZvLrL4q0TC0QlH2+kxMLnuj7zw== [email protected]

Private Key Contents (“server”). Always keep the private key safe and never publish it.

-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,D34670C40CE3778974BEF97094010597

b4oecyqLsWt9n+G12ldVNlaQxSKF1wSrlBPg6FGiHRauTCyreUwoI2dMOAkwnGmN
8fcy51fH7D3Kg0G9fWWNPd+oUDwZmrpB8Mv6Ndk4bLYZEbkNOFgvPwNre7edTBOD
JGZRdWqb+yrywgvz3iTXPNjNK5REU3u3JmD69jInFNo92j765QQKA4sFgEyD/8g+
zg8yefIQAhEsVELC5LXPPyuTfA+x0Q+040PqCJ+FCISJI1CeZjLwk7Fbe453Vj81
zaDsurl5X5gaRUlVjB2asr6etWdMLWcalX4Nbyj2A10L3J4ONjKq3Wc2muJ0Q6ES
oNqBaU2iHPlK8yK0TGj/ERfjaG1qdlhBcow0pSapRqGopXBuVBLVuyc2NHe5CCTk
Ezq+LZGsVYmiOIIY4QRJdEN/DVLFHRGK/xA9A7unm484zXIEO6wznE0DuCTtyZs0
luJ3bKLRcack3K1Dphq0LjSG4YxQlkHewa9k9AKpDPTqeeKKckySakiDCGPT6htk
VqaCKrApAt6GQ2hLVXZ0BFVN5A3WUJ5s+HpFvTUzHTNZcdsVS4PgxhuCtnSO/BdS
/G+ODc4aZJNYQD9QQfWUnxkgnQJCWJ+aBZtKF7eDPRYY7qD9jWxubDzrFplBkmAi
O+aX5N8dpU3lEty4INjyh5LpgZW3swjUhEKWi/c1k+Qd1gCWzYzwAq2BfpWcF8Z+
c+y9lQUKbq2yDlxReCIsfb/hda5k1HjgaUlhKbjWIITSlGqf/NE9i+vj0rQEMQXQ
mxBoilfLUPd5A1ttG5XvqC2ex5HBmjzCazZ13Z/2c/PkwicHBmrf5bKYHZp49niV
44n8tZRamCUv6HaJUaKR22MigOG/qGppGPodGeLNj1DFLYAEQ78SYcVhEqIICBo1
t1yaIemUq8MWXSZz1K3cP4FEXQcEziQxFLU/0DCE0P0mIU3MExUmjB/nVE8vxb5l
p3ej3yrRGe+P2neco2gttgaTEi6l/S+0TIiZNstnVPG48BPW71mwVg9XR1d+avO7
OpXt0UgocX0xp7zBgK2up8Ai6v66WwjoNgyvFe02aK4/+fSC+aJ5D6N7JVNxd/bn
Py4W8oLKnrE1PKtIfBw/aE+rgudaMIyuxCaLllRKyDxVPPiJFp2iFcH/Y+k+0vDa
xE9Jpdd0zOWkZyebAxrS8zAUUNNaTQ+rWkj/zORjE4ptHpdwdazzHoQwIs+1kjsv
e/+JEmoskH7XozLnxClVhhWMXWfgQsPWBqPnGzieW0tv9SeIAU/BLJCHJRhBMAT1
ugBtcda1VMlAPVroYtVyUdCxkYZqGfIDbKqtOvvuBgUIUe/HnC3ExQQycC9F05BH
RJibaM/11MLTcZSO7KOK65Dg2v3VBhe6rfDl4tTR0yOySPXCacb9aMt2pMPTEe0/
wU49wCefchfD2bsR3kXPpUqm+HbkHORpIwsMZfQO/8dooXYdiYUdzV9roXG6OGVQ
SsV/xR2lE3XrR71TBegfRnQirI8tj4psSor+yCj3qV936Oh31D96Z6P4glshibsG
ffWAO/TSdu5ZV+UVahh6bTozs+g+odUu/S48TeI1fk7lPlqwZdjoSHXUI2v1FAQ2
jSSywuZQxHlGhg6OeI052cxx3zcVyVVLFHhIrfvufNc3c3+KYhtyiSzBNYN1BrJi
xNXwlDS1jYWgRHkf9zbNBU0MLTYHjZZvO9Jpl/UhKKBdIvJFwmGmXS2lgU6slunJ
Ojp4tY1tbI520KOskV/OoqEfmhXh5fTlI3onzoK1aLqxk1d0d65ONcxqVbAG79RN
b0Q5PgewSOgFlcZ7tEIZKAWsWVhjlFTSGRujdZVM1vZB9fCJesemai7HU0e4J+Do
tqvss8I2n6TPxlTYFzQ4w12pIiOzx/8cFLX78NLN8wQFElhhczeuW5HDAnmPxYhQ
eLY0HgDCFSvVAvGXo0j1gcBUcOr/LzZSsJhxsB7FKyrUjlmD/7Y45WoKJj41bKL+
y4+iDhXyLBiqVClRijsguwiCkmPFiR7Bng2pglS0oIWPWu1UbTJWVJPfuUTOBC+M
4/2fBtgFjUz8iUISs9ncEKkERlxodBIu+ekgLJZAigSMvUKfGE1YB1AA9x96VLjd
VJSjjWvnhMEoSwNzlNQ9+dhoD5Cg9zicgIIKnHnovYGOu8g9ZWfvhJFrKZgkfLRv
r2KgkWiHWpf0swiyGUOlGJDe39nMMkoxib7XE/J3VI3na1ZUOIf8kl9kdHXJ0R3C
2IjdbfiFHEDOrakp5oeVf8BbLK7RB8OlxgJAS47Byh8j97U7f13A5ZYlK3bkZ7E4
h7mCJQozgWP81ut0d9WUlcKp5M8yg2ctZ7h4oeG4Js4ceHqd19Z4P+1xWKwXcdmV
+uhiTftevTu3/UhYQVV4ck98C9pursJJYL5hTnIIpTSWIR+jSahhtzUy/upjugPp
cKi6eGlOkcHdKNRtiu7/IZqni85fC8PAwPZ93SICdiq6BpGaGWFh046weIJuflSK
Pd76+M70YRd+pkaRjJyFJ3hLyg7W5mlOb1+yBIlXKzpbch9B5E4dRHCcOsg4+v/9
exRgAnvUIhR/GpSySDDwgKHg8rAyjjoGeZFH3TJIemAAimyaR608a9tCn7SxVobs
UQlZ9WwC0dQIEv7mSvSige3imbybPtCoBHJAqsJqKCFJEDWbIF5l2VYZcfJUYaEI
oZAJHYGnZm33yQ6eSOusXJ2SnnGZ+ZsGO4bDVSwN20FkSt11gN8Wjrki9CxeVQp7
dWbKX1r/lZw74yUB4cYN23hgLJsdqvM7THzwlBkVtgV74RGY0qv59ecBUSQedlSK
dkOnkmoCiGRSNyf+ebijQaygnfK0ArG5wiRF/RQWiPFj7S6DHRxIOrXqcmvhJ7Ly
NApn9pPYyoZEAbk82MAXkapZ5+YLIKLjdNsYuKq5xVty+mc+FfxLWmZGX+QQinra
Z9DfY9KQw4rxJ/ju4ILnDrygm/QBsNFXBojOuzOIULt7c26s3d/47T+IXA4SIX4v
cPqYa6S3PU/Yoe5/Ya3tFxXmBXgEgVLZuujMs7dyCOAqLEyBEHYqIclp+TElWQLR
V660fczVXeedfd2tNBy1IBj1vhGa9j5mZLbFwTczykwCFfihLIrxSEc1MQA4CaSX
-----END RSA PRIVATE KEY-----

The public and private keys are used to secure all SSH connections and traffic to your server. Keep the private key safe and secret.

fyi: Putty can create SSH Keys too

If you do not have a Linux computer or Linux server to generate keys, the Puttygen generator can create keys too.

Puttygen generating a key based on the randomness of mouse movements.

I did not know Putty could create keys.

Do save the public and private key(s) that were generated in Puttygen (tip: the PPK file is what we are after, along with the public key, later in this post).

Public keys are added to your server when you deploy it. On Linux, you can add new public keys after deployment by appending them to the file “~/.ssh/authorized_keys” to allow people to log in.
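As a sketch, installing a public key by hand looks like this (the demo writes to a scratch directory standing in for the remote user’s home, and the key content is a placeholder, not a real key):

```shell
# Scratch directory standing in for the remote user's home directory;
# on a real server you would target ~/.ssh directly
REMOTE_HOME=$(mktemp -d)

# Placeholder public key (a real one comes from ssh-keygen or Puttygen)
echo "ssh-rsa AAAAB3NzaDEMOKEY user@example.com" > server.pub

# sshd is strict about permissions: 700 on .ssh, 600 on authorized_keys
mkdir -p "$REMOTE_HOME/.ssh" && chmod 700 "$REMOTE_HOME/.ssh"
cat server.pub >> "$REMOTE_HOME/.ssh/authorized_keys"
chmod 600 "$REMOTE_HOME/.ssh/authorized_keys"
```

If the permissions are looser than this, sshd will usually refuse key login, which is a common gotcha.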

Puttygen formats keys differently from how Ubuntu generates them. Read more here. I’ll keep generating keys on Linux rather than in Puttygen.

Output of the public and PPK files from Puttygen

Putty SSH Client on Windows

Putty is a free Windows program that you can use to connect to servers via SSH. Download and install the Putty program.

Open Putty

Putty Icon

Default Putty User Interface.

Screenshot of the Putty Program

To create a connection, add an existing IP address (or server name) and SSH port (22) to Putty.

Screenshot of an IP and port entered into putty

In Putty (note the tree view to the left of the image), you can set the auto-login name used to log into the remote server under Connection then Data in the tree view.

Screenshot showing the SSH username being added to Putty under the Connection then Data menu.

You can also set the username under the Connection then Rlogin section of Putty.

Set the username under the Rlogin area of Putty

OK, let’s add the private SSH key to Putty.

Putty screenshot showing no support for standard SSH keys (only PPK files)

It looks like Putty only supports PPK private key files, not OpenSSH keys generated on Linux. I used to be able to add the Linux-generated private key straight into the VSSH program on OSX to connect to the server over SSH. Putty does not allow you to use Linux-generated private keys directly.

Convert your (Linux generated) private key to (Putty) PPK format with Puttygen

Putty comes with a key generator/converter; you can open your existing RSA private key and convert it (or generate a new one).

TIP: If you generate a key in Puttygen, don’t forget to add it to the authorized_keys file on your remote server.

Open Puttygen

Puttygen icon

Click Conversions then Import Key and choose the private key you generated in Linux.

Screenshot showing import RSA key to convert

The private key will be opened

Screenshot of imported RSA key

You can then save the private key as a PPK file.

Save the private key as a PPK file
“server.ppk” Key contents (sample key)
PuTTY-User-Key-File-2: ssh-rsa
Encryption: aes256-cbc
Comment: imported-openssh-key
Public-Lines: 12
AAAAB3NzaC1yc2EAAAADAQABAAACAQC7Xo9bOCXJ7gVjP8tKOxHVId3KTo5I0VRU
/kSRK3+mGd5VbDbQABo3tdWzYhzkjODzRS9TeL2dcLAQNNQKshi9IW5IGDS1NocS
CLFQId5BFr9s3E79fkWqcZkKmwocepXOOZ91EDKgIFxviOzZKe99sdxxMoZzi1nx
gVyXl4TnaelyiQxeKYniVs1iqDfYWQCxkKsmYit8TvGtOwrhLvKNh9362/y5ebpX
VdFlRuB83eF7k8RHNYCQyOJJVx4cwnTIsAN0GMOwjuaOZbp7rR1d6k7RZmaApNRT
baWOXy32UiBST5TV/jXF2UL/4IBnn+yvCrM0v79e/3omgjlVVKfWByFzMv/YlBKC
AX3xxtJQ9RkzTqseKupXmmJU0rik6Xuz31N2oyw4M7yJofSUGVCN0pnpKEvnKxqo
lfD9egdQy2XDaNioY7cvOO1qRegCKE0sDh1m5MzJWMhbDs7macSMyd6+0O5qWc/B
yHy0G/mVbd8kO4jIuEzEs4IFkPCToEZp7KfkY7KRkOhccLbQ4ApCesUfBtGGAN1f
33NnXCHae3Cx46nSd23fvgDZUVnjI47tNJH5Z8FNVlW/fp5Rgeu/aPUephnDX2IB
xwIKQOmSTDY+nxU4V+c93H1gSOJfvqYbVKIAXKyN9Yh6LC44ZvLrL4q0TC0QlH2+
kxMLnuj7zw==
Private-Lines: 28
DkpbM78GgGBSgfs9MsmZwDJj6HFXdoe+fCP1rLnwbE99mvU6Fbs23hXd+FsVdQbb
VR5tKTocV7tEwGjtLCHSTSF6gap0l4ww0Ecuvr/Dra2CJ2BsntyssBrWnlUT7OlA
M9zKQAzywAy4AHkph0YvH4l7BcJ5V1pUltm2JDTU6+iFqXDsstUUEDcQ4u0EalWU
EEsW+quNSwO0HBHvWY6N7tbiuEN9L+cFYIdsJEDfqM4hNi+7Ym+SQq5FOPyA6gXa
vhujsjPQAWI3TFxh7EIvsPDMCXxWHL6qaDvOMmPPTZDbEvm4nQ5Kax9jWacPILn7
ezc7ZAiZdDiFbkF3TLyuHx71mjChZgLoZLWYfXR3MBEEYnkNO/7oSMRUwDzEyWKW
ZgqdUtGg0cR+qWvaxQTDQsN/DjB7jGgnlreF92S8xSsbk5GgpZnTQ1V0cm3oecB+
+JP90K4Fi979gPWnwTfg6ZvmLUiVz3uBbvegkT9CVZhhZXSKq53H+SjTZfKBPrM9
NHGLkYr1WjToGR39LMrh4X3KChGewMFyuxtpkEQV60eCnHZBHgTco2A0yriRprOP
Ks4qJXOtZnsMYMesUDX9W5wLc4HcRvRh2UBPw/8bPz6mNrBk8j5SPIwBrPMIBejd
4IPoYezaEFKPg2bP7dn+Nftz5CGagcV2g+zhE615dsWzX1P7yu/1dTmz9LXaMmN6
d+zJE8TtjeaoW5NE1H3Flj9rknzJW7xQfokhS5hMkOg6J0AA6Pk13tupu8WHMkVB
x7nVu876f8tIbT8GzXCGgSl+zS7IJO3pt9T9QHIYa+T3oTIUqfBfK1WffUZwHMRn
Xn/VKUtIIIPiVfCtQuxSrTiJzQcoJ/yvfv62YAGv2LsDlBoHfXRdf6h3TCCCOVxT
WE45sbj3gJ1Cgjt1SEd/8A3hkstn2U2NKBI9gkB9H5BbDJoAXq6/4CkwaQvSEzs7
LK5btRlWop+7gqkyMPpgxv9li9IEDJ999ufMqxkFgOBkmkR5Si71elXRnwiKrjfU
Ce14iy7Dd7lb7IU9OEBjWFZlSigVEnc8klhGHDuxnojiW1ld7pUDIkAAbdTMOFON
abcpfNwcg5Y3l+1KwIQHuewAUuA9472jV4V9EAn7pJ7wgmYHbzMzg9Z9dM8h/3UI
axBzAW+cJM80gN+nZMbmDC9FkXV16GSuqC2iQUVGb2TIheAS7oCR+JFZFQNv0ytF
rGQ9K1wIGbMI4oDPcAid7DzrEXVl3d2x8MtwF/WzfHehVJD1h1uNwezLf1gBKyas
9GBfDOYwd8zgaL2H99GYD1Ba7TePJY81mx7m10eYdwDj1vCpboKE3cE6AyL7ki+4
Ix7GSzQs9NBckF9+8eVXe2T4Cc450hIoN0BWcxVUUdGCA1skZ1PczPs1z/ae4lxd
l5WmPy8Gyh7cnZpyqzvwAPSFDadkNP60eekfkRHyo4QyLhj7QZtO0kOgWhT3CHma
FjZ5jJu59U/4gc0TpQ8ra3vgKQKudloExsg027+34nR98dN+zzUj4S2C/J34W98C
DEEu/SO7nfW/a2UARXBKWCbS+3j24zHc9dbgX2tZoAoInUvRGiSOsLVsMhDiBoyb
wWoNxrKPR3Fi5zZ+GfDUgUGpZoW/b54KnFouIHBYbI41Gkh4vj6lxOGh/sb3SPHd
Wg6EN/0z/mer3bG0a2/ZHKYA5KGWRXWYvYLz4Je8fb/egBrSU6BztwSNeilzA9lI
J4BO7pzXECnWYutB14UxHw==
Private-MAC: 12298fa865ac574da81898252e83b812200cba59

Now the PPK key can be added to Putty for any server connection that uses the matching public key. Use the right key for the right server though.

Add the private key to a Putty server by clicking the Connection, SSH, Auth section and browsing to the PPK file.

Screenshot showing the PPK key file added to Putty

Now we need to save the connection. Click back on the Session node at the top of the tree view, type a server name and click Save.

Save Putty connection.

Connecting to your server via SSH with Putty.

Once you have added a server name, port, username and private key to Putty you can double-click the server list item to connect to your server.

You will see a message about accepting the public key from the server. Click Yes. This fingerprint should be the same fingerprint that was shown when you generated the keys (if not, someone may be performing a man-in-the-middle attack between your local computer and the server).

Putty message box asking whether to remember the public key

Hopefully, you will now have full access to your server with the account you logged in with.

Screenshot of an Ubuntu screen after login

Happy Coding.

Alternatives to self-managed VMs

I will always run a self-managed server (and configure it myself) as it’s the most economical way to build a fast and secure server, in my humble opinion.

I have blogged about alternatives, but those solutions always sacrifice something: costs are usually higher and performance can be slower.

I am also lucky enough to be able to do this as a hobby; it’s not my day job. When you self-manage a VM you will have endless tasks securing and tweaking your server, but it’s fun.

More Reading

Read some useful Linux commands here and read my past guides here. If you want to buy a domain name click here.

If you are bored and want to learn more about SSH Secure shell read this.

Related Blog Posts

  • Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc
  • Useful Linux Terminal Commands
  • Setup two factor authenticator protection at login (SSH) on Ubuntu or Debian
  • etc

Version: 1.1 Added MobaXterm link

Filed Under: 2FA, Authorization, AWS, Cloud, Digital Ocean, Linux, Putty, Secure Shell, Security, Server, SSH, Ubuntu, UpCloud, VM, Vultr Tagged With: Connecting, Putty, secure, server, Shell, ssh

How to code PHP on your localhost and deploy to the cloud via SFTP with PHPStorm by Jet Brains

March 31, 2019 by Simon

This is a quick guide that will show you how to connect to a cloud server via SFTP with the PHPStorm IDE from Jet Brains and deploy files from your localhost to the cloud. This is my opinion; I am not paid to promote PHPStorm or UpCloud.

Pre-Requisites/Assumptions

This guide will assume you already have (or know how to)..

  • Buy a domain name and point its DNS to a server (I use Namecheap.com for buying domains)
  • Buy and deploy a server in the cloud. I have used AWS, Digital Ocean and Vultr but now use UpCloud for deploying fast self-managed servers. Read this guide to see how I create a server from scratch on UpCloud.
  • Setup SSH access to your server and configure a firewall.
  • You have or know how to set up PHP and Web Servers and configure them on your localhost and remote server (guides here, here, here, here, here and here).
  • etc ( check out all my guides here https://fearby.com/all )

I am using Windows 10 Home (with the IIS web server, document root redirected to S:\Code\) and a pre-built Ubuntu cloud server.

IIS pointing to S:\Code

Why no FTP? I do not create FTP servers on my servers, to increase security. I only access servers via SSH from white-listed IPs and then authenticate with hardware 2FA keys from YubiCo (read my 2FA guide here and also how to secure *nix servers and WordPress with 2FA).

Background

2 years ago I used to use the Cloud 9 IDE to connect to, and code files on, cloud servers and life was good. I could configure and connect to servers, drag and drop files, run bash scripts from a web page, close the Cloud 9 browser tabs, travel hundreds of kilometres, log back into C9 and all code and bash scripts would reappear.

Here is the Cloud 9 IDE showing code on the left and a Browser on the right.

C9.IDE showing code on the left and web page on the right

With Cloud 9 code could be easily accessed, edited and run.

See screenshot of code running from a Cloud 9 hosted server with properties windows on the right.

C9 IDE showing a bash terminal window and code

Regrettably, I cancelled my $9/m subscription to Cloud 9 after a minor stroke, and for the last 2 years I have been back on a terminal screen to code and transfer files.

Screenshot of the Putty program connected to an Ubuntu box editing a file with nano

Uploading and downloading files en masse is painful via pure SSH.

AWS has since purchased Cloud 9 and I am not sure whether it will ditch support for non-AWS servers in the future. AWS is good, but its servers are very expensive for what you get, IMHO. I have found disk IO on UpCloud is awesome (and UpCloud support is great; I am not paid to say that).

PHPStorm IDE?

A quick Google for IDEs like Cloud 9 mentioned PHPStorm from Jet Brains. I have used IntelliJ IDEA from Jet Brains before, and PHPStorm seems to be very popular.

Go to
https://www.jetbrains.com/phpstorm/ and see the features of PHPStorm

Watch PHPStorm in action

What’s new in PHPStorm 2019.1

Install PHPStorm

Visit https://www.jetbrains.com/phpstorm/download/ and download and install PHPStorm (free trial 30 days)

System requirements

  • Microsoft Windows 10/8/7/Vista/2003/XP (incl. 64-bit)
  • 2 GB RAM minimum
  • 4 GB RAM recommended
  • 1024×768 minimum screen resolution

Pricing

PHPStorm is $8.90/m for individual use (or $19.90/m commercial). The first 12 months of uninterrupted subscription payments qualify you for a perpetual fallback license (20% discount for an uninterrupted subscription in the 2nd year, 40% discount for the 3rd year onwards).

PHPStorm works on Windows, OSX or Linux. This is great as I use Windows locally and Linux remotely, but I’m keen to use Linux locally to match my local and remote dev environments.

Official PHPStorm Pricing Page:
https://www.jetbrains.com/phpstorm/buy/#edition=commercial

fyi: Jet Brains has free licencing for individual use by students and faculty members.

Creating a Project

Open PHPStorm and select “Create New Project”.

Create New Project screen

Choose a project type on the left (e.g. “PHP Empty Project”) and choose a location to save to (I chose “S:\code\php001” on my local machine).

New Project screenshot asking for a name and save location

Choose a folder to save to.

Choose a project and location to save to locally

Click “OK” to create the project.

PHPStorm will have created a project for you. You will notice a “.idea” folder under the location you saved to, containing these files:

  • misc.xml
  • modules.xml
  • php001.xml
  • workspace.xml

Do not delete these files.

Creating your first PHP file

You can right-click on the project root and select New then PHP File.

Right click on the root in the tree view then new PHP File

Or click the File then New menu and choose PHP File.

File new PHP File dialog

Name the file (e.g. index.php)

Naming a file index.php

The file has been created and it’s available in my localhost web server.

Screenshot showing PHPStorm with index.php, S:\Code showing index.php and http://localhost/php001/index.php loading

Creating a Deploy Target

Now we need to specify a deploy target in PHPStorm to push file changes to the cloud. Back up your server first (yes, back up your server, just in case).

Open your PHPStorm project and click Tools, Deployment and then Configuration.

Click Tools, Deployment and then Configuration.

Click the plus icon near the top left and choose SFTP

Screenshot showing add new deployment server (SFTP)

Name the deployment target (e.g “server (project)”)

Screenshot of an input box showing a server name
  • Enter your “server name” or IP and port
  • Enter your “ssh username” (ensure the SSH user has write access to the wwwroot folder and the web server can read the files written by this user)
  • Under password I chose “Key pair OpenSSH or Putty” (as I already had SSH details set up in Putty)
  • You can add your PPK private key from Putty (use the Puttygen program to convert SSH public and private keys to PPK format)
  • If you have a passphrase on your SSH key add it now
  • Enter your web servers remote path (for the project)
  • Enter your web server URL
Screenshot showing a server name, port, username, password, ssh file passphrase, root path and web server url.

I SSH’d to my remote server and created the destination folder. This ensures I can deploy code there (PHPStorm does not create the remote path for you).

mkdir /wwwroot/php001
chown -R www-data:www-data /wwwroot/php001/

Click Test Connection

Test Successful screenshot

Now we need to click the Mappings tab and add a mapping.

  • Local path is your local path
  • Deployment path is / (the web root path is carried forward from the previous tab)
  • Web path is the web path that is entered in the browser
Screenshot showing a manual file mapping of local and remote file locations

Click Add New Mapping. Now we are ready to deploy.

Deploying code to the cloud

I right-clicked on the root node in PHPStorm and created an index.php file.

Creating an index.php file by file new

I edited the index.php on my local machine, then clicked Tools then Deployment and chose the “Upload to fearby.com (php001)” menu item.

Manual upload available in Tools menu then Deployment menu

The File Transfer output window showed the transfer progress.

Screenshot showing the file transfer window output saying the file uploaded.

I loaded https://fearby.com/php001/index.php in Google Chrome. It worked.

Screenshot showing https://fearby.com/php001/index.php loaded in a browser

Don’t forget to turn on Automatic Uploads under the Tools, Deployment menu.

Screenshot showing Automatic Uploads turned on

Now when I create new files or change existing files they will auto upload.

Source Control

I will add this soon.

Shell Command

You can also open an SSH console to the server and run commands

e.g zip files

zip -r backup.zip .

I can also open a folder window in PHPStorm and show all remote files by clicking Tools then Deployment then Show Remote Files. Zip files can be easily downloaded and other files uploaded. Nice.

Screenshot showing remote files

Linux Client

I will review the Linux PHPStorm client soon.

Troubleshooting

Watch the Official guide on Deployment and Remote Hosts in PhpStorm – PhpStorm Video Tutorial

Good Luck. I hope this guide helps someone.

Version

1.2 Removed advertisements

1.1 Minor Updates

1.0 Initial Version

Filed Under: 2FA, Backup, Cloud, Code, Git, GUI, IDE, Linux, SSH

Replacing Google Analytics with Piwik/Matomo for a locally hosted privacy focused open source analytics solution

November 18, 2018 by Simon

This is how I replaced Google Analytics with Piwik/Matomo for a locally hosted, privacy-focused, open-source analytics solution.

Aside

I have a number of guides on moving away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, and installing and managing WordPress from the command line. PHP is my programming language of choice.

Now on with the post

Google Analytics

I will fully admit Google Analytics is good. I posted this a while ago on how you can set up Google Analytics on your site.

Google Analytics has some great charts and graphs. Simple to set up and easy to use.

Analytics Home

My site traffic is growing and I would prefer to keep my own analytics and user data. Matomo is an analytics solution that stays on my server and not in the hands of Google.

Blog Growth

Google Analytics can be Slow

Sometimes the Google Analytics servers are slow (affecting the load speed of my site). I blogged recently about speeding up a WordPress site here, and Google’s servers were not adding expiry headers on assets.

I did log a ticket with Google to fix this and the experience was terrible.

Support for Google Analytics is terrible

Google Analytics support is terrible

GT Metrix scores show poor delivery of tracking assets.

Google Slow Assets

Privacy

After the Cambridge Analytica fiasco (which made me decide to delete Facebook), sending analytics to Google is not a good idea.

  • Google Removes ‘Don’t Be Evil’ Clause From Its Code Of Conduct
  • FUTURE SOCIETY Three Signs Google Is Turning to the Dark Side
  • Top 10 Ways Google Does Evil

I am not saying Google is evil but I want my site’s visitors tracking data to remain local.

Website Speed Benchmark before installing Matomo

I can load my site in 1.3 seconds at best, 1.5 seconds on average and 2.0 seconds at worst. My site is loading 11 assets.

GTmetrix 1.3 second page load time

Page Speed Scores

GTMetrix page speed load times

YSlow scores; Google assets are reporting no expiry headers (lowering scores)

GTMetrix yslow load times

Google Analytics tracking assets are slow.

GTMetrix waterfall list

Optimizations to be made

Browser caching is not possible with Google Analytics.

Google assets lacking browser caching

Missing expiry headers (I can see a Google Tag Manager server is slowing down my server’s benchmark score)

Google lacking Expiry Headers

Why Matomo (instead of Google Analytics)

I came across this tweet:

Someone pointed out that @haveibeenpwned got a bunch of traction on Reddit today. With pretty much everything now either cached by @Cloudflare or served by @AzureFunctions, the first I know of a 28x traffic increase is no longer when something scales it’s when someone tells me 😎 pic.twitter.com/ifj7nQg3n4

— Troy Hunt (@troyhunt) November 5, 2018

Matomo was mentioned:

It’s an Open Source, self hostable, privacy friendly alternative to Google Analytics:https://t.co/NiK7A7uQAE

— Lukas Winkler (@lw1_at) November 5, 2018

I visited https://matomo.org/

Matomo webpage

Snip

> Take care of running Matomo yourself by installing it on your own server. There is no cost for Matomo itself but you need a server and update Matomo & your server regularly to keep it fast and secure. Need help? The Matomo team provides free help resources and paid support.

Matomo Setup Instruction Guide

Source Code

Source code is available.

> Matomo is the leading open alternative to Google Analytics that gives you full control over your data. Matomo lets you easily collect data from websites, apps & the IoT and visualise this data and extract insights. Privacy is built-in. We love Pull Requests! https://matomo.org/

https://github.com/matomo-org/matomo

Installation Guide

I read the installation guide here https://matomo.org/docs/installation/

You can view the changelog here https://matomo.org/changelog/

Downloading Matomo

I logged into my server via SSH and downloaded the 18MB zip to the desired folder.

cd /www-root/matomo-folder/
wget https://builds.matomo.org/matomo.zip

I unzipped the zip file

unzip matomo.zip

I loaded the URL where Matomo was installed (e.g. “https://fearby.com/folder/subfolder/matomo/”).

I received this well-crafted error.

Matomo File Permission Error

Raw Output

An error occurred
Matomo couldn't write to some directories (running as user 'www-usr').

Try to Execute the following commands on your server, to allow Write access on these directories:

chown -R www-usr:www-usr /www-root/folder/subfolder/matomo
chmod -R 0755 /www-root/folder/subfolder/matomo/tmp
chmod -R 0755 /www-root/folder/subfolder/matomo/tmp/assets/
chmod -R 0755 /www-root/folder/subfolder/matomo/tmp/cache/
chmod -R 0755 /www-root/folder/subfolder/matomo/tmp/logs/
chmod -R 0755 /www-root/folder/subfolder/matomo/tmp/tcpdf/
chmod -R 0755 /www-root/folder/subfolder/matomo/tmp/templates_c/

If this doesn't work, you can try to create the directories with your FTP software, and set the CHMOD to 0755 (or 0777 if 0755 is not enough). To do so with your FTP software, right click on the directories then click permissions.

After applying the modifications, you can refresh the page.

I refreshed the page after running the commands above on my site (via SSH)

Matomo Setup Step 1

A system check was performed. I installed this when PHP 7.2.11 was the latest; PHP 7.2.12 or higher might be available now. Follow my guide to update PHP on Ubuntu.

System Check

I had one issue: FreeType was not installed.

Install Freetype

I solved this error by installing FreeType

sudo apt-get install freetype*

Output

Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'freetype-tools' for glob 'freetype*'
Note, selecting 'freetype2-demos' for glob 'freetype*'
The following NEW packages will be installed:
  freetype2-demos
0 upgraded, 1 newly installed, 0 to remove and 66 not upgraded.
Need to get 123 kB of archives.
After this operation, 728 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 freetype2-demos amd64 2.8.1-2ubuntu2 [123 kB]
Fetched 123 kB in 0s (965 kB/s)
Selecting previously unselected package freetype2-demos.
(Reading database ... 122574 files and directories currently installed.)
Preparing to unpack .../freetype2-demos_2.8.1-2ubuntu2_amd64.deb ...
Unpacking freetype2-demos (2.8.1-2ubuntu2) ...
Processing triggers for man-db (2.8.3-2) ...
Setting up freetype2-demos (2.8.1-2ubuntu2) ...

Then I installed “php-gd”

sudo apt-get install php-gd

Output:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libapache2-mod-php7.2 php7.2-cli php7.2-common php7.2-curl php7.2-dev php7.2-fpm php7.2-gd php7.2-json php7.2-mbstring php7.2-mysql php7.2-opcache php7.2-readline php7.2-xml php7.2-zip
Recommended packages:
apache2
The following NEW packages will be installed:
php-gd php7.2-gd
The following packages will be upgraded:
libapache2-mod-php7.2 php7.2-cli php7.2-common php7.2-curl php7.2-dev php7.2-fpm php7.2-json php7.2-mbstring php7.2-mysql php7.2-opcache php7.2-readline php7.2-xml php7.2-zip
13 upgraded, 2 newly installed, 0 to remove and 53 not upgraded.
Need to get 33.2 kB/6621 kB of archives.
After this operation, 150 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://ppa.launchpad.net/ondrej/php/ubuntu bionic/main amd64 php7.2-gd amd64 7.2.11-4+ubuntu18.04.1+deb.sury.org+1 [27.1 kB]
Get:2 http://ppa.launchpad.net/ondrej/php/ubuntu bionic/main amd64 php-gd all 2:7.2+68+ubuntu18.04.1+deb.sury.org+1 [6036 B]
Fetched 33.2 kB in 0s (75.9 kB/s)
Reading changelogs... Done
(Reading database ... 122597 files and directories currently installed.)
Preparing to unpack .../00-php7.2-mysql_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-mysql (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../01-php7.2-opcache_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-opcache (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../02-php7.2-json_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-json (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../03-php7.2-readline_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-readline (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../04-php7.2-mbstring_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-mbstring (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../05-php7.2-curl_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-curl (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../06-php7.2-zip_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-zip (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../07-php7.2-fpm_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-fpm (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../08-php7.2-xml_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-xml (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../09-php7.2-dev_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-dev (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../10-libapache2-mod-php7.2_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking libapache2-mod-php7.2 (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../11-php7.2-cli_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-cli (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Preparing to unpack .../12-php7.2-common_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-common (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) over (7.2.11-2+ubuntu18.04.1+deb.sury.org+1) ...
Selecting previously unselected package php7.2-gd.
Preparing to unpack .../13-php7.2-gd_7.2.11-4+ubuntu18.04.1+deb.sury.org+1_amd64.deb ...
Unpacking php7.2-gd (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Selecting previously unselected package php-gd.
Preparing to unpack .../14-php-gd_2%3a7.2+68+ubuntu18.04.1+deb.sury.org+1_all.deb ...
Unpacking php-gd (2:7.2+68+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-common (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Processing triggers for ureadahead (0.100.0-20) ...
Setting up php7.2-curl (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-mbstring (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-readline (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Processing triggers for systemd (237-3ubuntu10.4) ...
Processing triggers for man-db (2.8.3-2) ...
Setting up php7.2-json (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-opcache (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-mysql (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-gd (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...

Creating config file /etc/php/7.2/mods-available/gd.ini with new version
Setting up php7.2-xml (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-zip (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-cli (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php-gd (2:7.2+68+ubuntu18.04.1+deb.sury.org+1) ...
Setting up libapache2-mod-php7.2 (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Warning: Could not load Apache 2.4 maintainer script helper.
Setting up php7.2-dev (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...
Setting up php7.2-fpm (7.2.11-4+ubuntu18.04.1+deb.sury.org+1) ...

I refreshed the Matomo setup wizard page; FreeType is now installed 🙂

FreeType is installed

Database Settings

For the life of me, I could not get Matomo to talk to a database on another server, so I set it up on localhost.

I used this guide to help create the database and users in the MySQL CLI.

Enter Matomo Database settings

The following MySQL commands create a database and a user, then grant the user access to the database. If you are not comfortable with the MySQL CLI you can use the Adminer GUI instead.

CREATE DATABASE tbdatabasename;
CREATE USER 'databaseuser'@'localhost' IDENTIFIED BY '#####################################';
GRANT ALL PRIVILEGES ON tbdatabasename.* TO 'databaseuser'@'localhost';
FLUSH PRIVILEGES;

I used this PHP code to test the connection (first against the dedicated database server, then against localhost):

<?php
$servername = "localhost";
$username = "databaseuser";
$password = "#################";
$dbname = "tbdatabasename";

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);

// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
} else {
    echo "Connection Success";
}

$conn->close();
?>

Database created ok

Database OK

I created a Matomo user, then grabbed the JavaScript tracking code so I could paste it into WordPress.

Matomo Tracking ID

I opened my WordPress theme settings and deleted the Google tracking tags and added the Matomo tracking code.

Delete Google Tracking Tags

I added the Matomo tracking JavaScript in the head section.

The dashboard is up and collecting data.

Matomo Dashboard

Some reports are missing data so I will come back later.

After one week I could see data.

Matomo is now collecting data

Securing Matomo

I read this guide to secure Matomo.

Opt Out Tracking

I enabled Opt Out Tracking in the Matomo settings and added the generated opt-out code to my front page and to the bottom of all existing articles.

I had to allow iframes on my site by adding this header in NGINX (previously I blocked iframes):

add_header X-Frame-Options SAMEORIGIN;
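For context, the directive goes inside the site's server block; a minimal sketch (the listen line and server_name are illustrative, the rest of your existing config stays as-is):

```nginx
server {
    listen 443 ssl;
    server_name fearby.com;

    # Allow pages on this site (e.g. the Matomo opt-out iframe) to frame
    # each other, while still blocking framing by third-party sites
    add_header X-Frame-Options SAMEORIGIN;

    # ... rest of the existing site config ...
}
```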

Add Opt Out Tracking Code to WordPress.

Matomo Opt Out Added to WordPress widgets

I updated my privacy page and my GDPR notification bar. Now visitors will see an opt-out-of-tracking option on the front page and on all article pages.

Opt out of tracking enabled

SMTP Settings

I added my GSuite mail server settings to enable sending of reports via email. I loaded my old guide here to get the GSuite SMTP settings.

GSuite SMTP Settings Added

I enabled forced HTTPS in the Matomo application (by editing the config/config.ini.php file):

[General]
...
force_ssl = 1

Matomo Plugins (Marketplace)

I opened the System, then Plugins, section of Matomo to open the Marketplace.

Plugins

I installed these plugins

  • Force SSL
  • HidePasswordReset
  • Google Authenticator
  • Device Pixel Ratio
  • Bandwidth
  • Js Tracker Force Async
  • Treemap Visualization
  • Security Info
  • Custom Alerts
  • IP Reports
  • Live Tab
  • etc

Updating PHP

The Matomo Admin panel (Security/Diagnostics section) will report if your PHP version gets out of date.

Matomo warning of PHP being out of date

Hardening Advice

I enabled 2FA authorisation at login (via the Google Authenticator plugin).

Matomo 2fa Login screenshot

Read my guide on hardware 2FA with YubiCo YubiKeys here.

php.ini hardening changes

Matomo also recommended some php.ini file changes.

> open_basedir – open_basedir is disabled. When this is enabled, only files that are in the given directory/directories and their subdirectories can be read by PHP scripts. You should consider turning this on. Keep in mind that other web applications not written in PHP will not be restricted by this setting.

> upload_tmp_dir – upload_tmp_dir is disabled, or is set to a common world-writable directory. This typically allows other users on this server to access temporary copies of files uploaded via your PHP scripts. You should set upload_tmp_dir to a non-world-readable directory

These changes may break your WordPress site, so enable them at your own risk. I might move Matomo to a dedicated “analytics” subdomain, then enable these options.

Troubleshooting

I had to run this command when installing the Device Pixel Ratio, Device Network Information and Bandwidth plugins:

php /www-root/path/matomo/console core:update

Output:

    *** Update ***

    Database Upgrade Required

    Your Matomo database is out-of-date, and must be upgraded before you can continue.

    The following dimensions will be updated: log_visit.device_pixel_ratio.

    *** Note: this is a Dry Run ***

    ALTER TABLE `matomo_log_visit` ADD COLUMN `device_pixel_ratio` DECIMAL(5,2) DEFAULT NULL;

    *** End of Dry Run ***

A database upgrade is required. Execute update? (y/N) y

Starting the database upgrade process now. This may take a while, so please be patient.

    *** Update ***

    Database Upgrade Required

    Your Matomo database is out-of-date, and must be upgraded before you can continue.

    The following dimensions will be updated: log_visit.device_pixel_ratio.

    The database upgrade process may take a while, so please be patient.

  Executing ALTER TABLE `matomo_log_visit` ADD COLUMN `device_pixel_ratio` DECIMAL(5,2) DEFAULT NULL;... Done. [1 / 1]

Matomo has been successfully updated!

GTMetrix (After)

GTMetrix reports that my site is no slower (still 1.5 seconds).

GTMetrix After Pagespeed

I can see that some JavaScript is not being picked up by the CDN.

GTMetrix After YSlow

Also, two more files load (when compared to Google Analytics).

2 More Files

Time to add the Matomo files to my CDN.

Adding Matomo Resources to a CDN

I read this Matomo forum post.

I copied these 2 assets to my WordPress wp-content folder (my WordPress CDN ewww.io will then upload them to the CDN).

cd /www-root/wp-content/
cp /www-root/utils/matomo/piwik.js ./piwik.js
cp /www-root/utils/matomo/plugins/CoreAdminHome/javascripts/optOut.js ./optOut.js
chown www-data:www-data *.js

I have “cache everything” enabled in ewww.io, and this will copy the JavaScript assets to my CDN. I will need to manually update these JS files each time a Matomo update is installed.
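Since these copies have to be repeated after every Matomo update, they can be scripted. A minimal sketch, assuming the paths from this guide; sync_js is my own helper name, and it only copies when the file content actually changed (so the CDN only re-fetches updated assets):

```shell
#!/bin/sh
# sync_js SRC DST: copy SRC over DST only when the contents differ,
# so this is safe to re-run after every Matomo update.
# Prints "copied" or "unchanged".
sync_js() {
    if cmp -s "$1" "$2"; then
        echo "unchanged"
    else
        cp "$1" "$2" && echo "copied"
    fi
}
```

With the paths above it would be run as, e.g., `sync_js /www-root/utils/matomo/piwik.js /www-root/wp-content/piwik.js`, followed by the chown shown earlier.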

I changed my Matomo tracker code to include the new CDN location:

<!-- Matomo -->
<script type="text/javascript">
  var _paq = _paq || [];
  /* tracker methods like "setCustomDimension" should be called before "trackPageView" */
  _paq.push(['trackPageView']);
  _paq.push(['enableLinkTracking']);
  (function() {
    var u="//fearby.com/utils/matomo/";
    _paq.push(['setTrackerUrl', u+'piwik.php']);
    _paq.push(['setSiteId', '1']);
    var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0];
    g.type='text/javascript'; g.async=true; g.defer=true; g.src='https://fearby-com.exactdn.com/wp-content/piwik.js'; s.parentNode.insertBefore(g,s);
  })();
</script>
<!-- End Matomo Code -->

I could not find out how to change the location of optOut.js (now CDN-cached at https://fearby-com.exactdn.com/wp-content/optOut.js), so I temporarily disabled the opt-out form on my front page.

todo: Find out how to change the CDN location of optOut.js and re-enable the form.

All assets are loading from CDN.

GT Metrix shows my site loads in 1.4 seconds

Analytics Reporting

Graphs are not as pretty as Google Analytics but they are working.

Matomo is not collecting daya

Mobile Reporting

Mobile reporting is good too.

Screenshot of the Matomo Mobile app

Updating Matomo Plugins

Don’t forget to update your plugins from the Matomo dashboard.

Updating Matomo (Core)

Matomo has an official guide on how to update Matomo here.

I do not have FTP, so I will perform the three-step manual update.

But before I do that I will manually back up my web server and database server, just in case.

I backed up my Matomo config (I SSH’d into the server):

$ cd /www-root/matomo-root/

$ cp ./config.ini.php ./config.ini.3.x.x.php
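The single cp above works, but a timestamped copy never clobbers the previous backup on the next upgrade; a small sketch (backup_config is my own helper name, not a Matomo tool):

```shell
#!/bin/sh
# backup_config FILE: copy FILE to FILE.YYYYmmdd-HHMMSS.bak and print
# the backup path, so repeated upgrades never overwrite an older backup.
backup_config() {
    backup="$1.$(date +%Y%m%d-%H%M%S).bak"
    cp "$1" "$backup" && echo "$backup"
}
```

Usage would be `backup_config /www-root/matomo-root/config.ini.php`.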

I navigated two folders up from my Matomo folder

$ cd ..

$ cd ..

I downloaded Matomo

$ wget https://builds.matomo.org/matomo.zip

I unzipped the zip file

$ unzip -o matomo.zip

I removed the matomo.zip file

$ rm matomo.zip

I loaded the Matomo Login page again and was prompted to update the database.

Matomo Database Update Required

Matomo reported it was updated successfully.

Matomo was updated message

Oops, an “error in config” error appeared when I tried to log in.

Matomo Error in config

Oh, do I need to replace the config file with my backed-up config file?

(edit: yes, Matomo says to do this, my bad)

Ten seconds later I accidentally deleted all my config files (I had zero backups); the next two minutes were spent shutting down my servers (web and DB) and restoring them from backup. Thank goodness UpCloud are awesome hosts.

I now had to restore my servers and repeat the steps but this time restore my config file before logging back in.

I did this but hit the same error:

> An error occurred
> Authentication object cannot be found in the container. Maybe the Login plugin is not activated?
> You can activate the plugin by adding:
> Plugins[] = Login under the [Plugins] section in your config/config.ini.php

I checked my replaced config.ini.php and it did have

> [PluginsInstalled]
> PluginsInstalled[] = "Login"

I googled and found this page that said to reset your password (this was not an option, as Matomo was not loading).

I logged into mysql with my Matomo user

> mysql -u matomodbusername -p
> Enter password:
> Welcome to the MySQL monitor. Commands end with ; or \g.
> Server version: 5.7.xxxx

> Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

> Oracle is a registered trademark of Oracle Corporation and/or its
> affiliates. Other names may be trademarks of their respective
> owners.

> Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

> mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| matomodb           |
+--------------------+
> 2 rows in set (0.00 sec)

The account and database seem ok.

I tried “FLUSH PRIVILEGES;” with no luck.

I tried to stop MySQL but it was locked.

It was late, so I rebooted my server (it did not come back up after a few minutes, so I forced a reboot).

I still had an “Authentication object cannot be found in the container.” error when trying to log in to Matomo.

I re-checked the “config.ini.php” file after reading threads on the Matomo forums.

$ sudo nano /www-root/matomo-root/config.ini.php

The line Plugins[] = “Login” was missing from the “[Plugins]” section of the file! I added it, saved the change and was able to reload the Matomo GUI.
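A quick way to verify this on future upgrades; has_login_plugin is my own sketch, and it simply checks that Plugins[] = "Login" appears inside the [Plugins] section (and not just under [PluginsInstalled]):

```shell
#!/bin/sh
# has_login_plugin FILE: succeed only if Plugins[] = "Login" appears
# inside the [Plugins] section of a config.ini.php-style file.
has_login_plugin() {
    awk '
        /^\[Plugins\]/  { in_section = 1; next }
        /^\[/           { in_section = 0 }
        in_section && /Plugins\[\][ \t]*=[ \t]*"Login"/ { found = 1 }
        END             { exit !found }
    ' "$1"
}
```

Usage: `has_login_plugin /www-root/matomo-root/config/config.ini.php && echo "Login plugin listed"`.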

I checked some key reports.

Visitors over time:

Visitors over time report

Visitor Location Map

Visitor Location Map

Visitor Overview

Visitor Overview

Out links Clicked

Out links Clicked

Nice

I subscribed to the Matomo newsletter here to keep up to date with Matomo update releases: https://matomo.org/newsletter/

Good luck and I hope this guide helps someone

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.2 Hardening info

v1.1 Updating Matomo

v1.0 Initial post

Filed Under: Analytics, Cloud, Free, Privacy Tagged With: a, analytics, focused, for, google, hosted, locally, Matomo, Open, Piwik, privacy, Replacing, solution, source, with

Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc

October 3, 2018 by Simon

This is a draft post showing how you can monitor the performance of a server (or servers) with NixStats and receive alerts by SMS, Push, Email, Telegram etc

fyi: This is not a paid post, this is just me using the NixStats software to monitor my servers and send alerts.

Finding a good host

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Monitoring Servers

The post below will show you how you can monitor servers online with https://nixstats.com/ and send alerts when resources reach limits or servers fail.

https://nixstats.com/

I signed up and started a Nixstats 14-day free trial.

Start Nixstats Trial

After I created an account, Nixstats emailed me one-line agent install instructions for Linux. I was also advised to add contacts and to set up alerts.

I logged into the Nixstats settings and set up…

  • My Timezone
  • Default reporting period
  • First name and Surname
  • Reporting emails
  • etc

Nixstats Subscription Upgrade

Subscription options

  • Free (5 monitors, 1 server, 24-hour data retention etc)
  • Founder (25 monitors, 10 servers, 30-day data retention etc)
  • Business (100 monitors, 15 servers, 60-day data retention etc)

Subscription Options

I enabled the limited Founder subscription so I can monitor 10 servers (this deal is too good to miss). I tried creating a status page myself last year and it was terribly hard.

Subscriptions page

I am now out of the free trial period 🙂 Let’s start monitoring many servers.

Subscription Activated

I enabled two-factor auth for Nixstats logins.

Nixstats Two Factor Auth

I created a Nixstats API key for future use (watch this space)

Create API Key

I installed the Nixstats agent; the dashboard gives a one-line command you can run as root to install the agent on Linux.

Install Nixstats Agent

FYI: Command (######################## is a number linked to your account)

wget --no-check-certificate -N https://www.nixstats.com/nixstatsagent.sh && bash nixstatsagent.sh ########################

Output

wget --no-check-certificate -N https://www.nixstats.com/nixstatsagent.sh && bash nixstatsagent.sh ########################
--2018-10-02 09:53:56--  https://www.nixstats.com/nixstatsagent.sh
Resolving www.nixstats.com (www.nixstats.com)... 2400:cb00:2048:1::6819:8013, 2400:cb00:2048:1::6819:8113, 104.25.128.19, ...
Connecting to www.nixstats.com (www.nixstats.com)|2400:cb00:2048:1::6819:8013|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 38708 (38K) [application/octet-stream]
Saving to: 'nixstatsagent.sh'

nixstatsagent.sh 100%[====================================================>] 37.80K --.-KB/s in 0.1s

2018-10-02 09:53:56 (338 KB/s) - 'nixstatsagent.sh' saved [38708/38708]

Found Ubuntu …
Installing …
Installing Python2-PIP …
Installing nixstatsagent …
Generation a server id …
Got server_id: ######################
Creating and starting service
Created symlink /etc/systemd/system/multi-user.target.wants/nixstatsagent.service -> /etc/systemd/system/nixstatsagent.service.
Created the nixstatsagent service

Server Dashboard
Below is a summary of all connected servers ( https://nixstats.com/dashboard/servers ).

Server Summary

Monitor Setup

I set up a number of monitors to check ping replies and HTTPS traffic.

Monitors

Advanced Monitoring

I can also set the monitor credentials, timeouts, retries, auth methods, max redirects and frequency. If your server blocks login or resource GET attempts you may need to whitelist IPs. The IPs of the monitoring servers are listed here: https://nixstats.com/whitelist.php

Monitoring advanced options

Monitor Summary

The default dashboard is very informative. Feel free to create your own dashboards that focus on your own infrastructure or apps.

Monitor Summary

Individual Server Reports

You can click on a server and monitor it in detail.

Nixstats Graphs

Server Memory Graphs

Long-term memory graphs.

Memory Graph

Install Optional Nixstats Plugins

Nixstats offers many plugins to monitor software that is installed on your server (e.g. NGINX, MySQL, PHP).

1) NGINX Monitoring (Plugin)

To enable NGINX monitoring I read https://help.nixstats.com/en/article/monitoring-nginx-50nu7f/

I edited my NGINX sites-enabled config.

sudo nano /etc/nginx/sites-enabled/default

I added the following

server {
    listen 127.0.0.1:8080;
    server_name localhost;
    location /status_nginx {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }
}

I tested, reloaded and restarted NGINX

nginx -t
nginx -s reload
/etc/init.d/nginx restart

The status page will only be available on the local machine; I tested it there:

wget -qO- http://127.0.0.1:8080/status_nginx
Active connections: 3
server accepts handled requests
 15 15 31
Reading: 0 Writing: 1 Waiting: 2

It’s Working.
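The stub_status text is easy to scrape yourself, too; a small sketch (active_connections is my own helper name) that pulls the connection count out of that exact format:

```shell
#!/bin/sh
# active_connections: read nginx stub_status text on stdin and print
# the "Active connections" number.
active_connections() {
    awk '/^Active connections:/ { print $3 }'
}
```

Usage: `wget -qO- http://127.0.0.1:8080/status_nginx | active_connections`.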

I edited /etc/nixstats.ini

sudo nano /etc/nixstats.ini

I removed the comments before these lines to enable the plugin:

[nginx]
enabled = yes
status_page_url = http://127.0.0.1:8080/status_nginx

I ran the following command to see if NGINX monitoring is possible

nixstatsagent --test nginx

Output

nginx:
{
    "accepts": 39,
    "accepts_per_second": 0.0,
    "active_connections": 6,
    "handled": 39,
    "handled_per_second": 0.0,
    "reading": 0,
    "requests": 119,
    "requests_per_second": 0.0,
    "waiting": 5,
    "writing": 1
}

It’s Working

I restarted the nixstatsagent

service nixstatsagent restart

I can now view NGINX properties like active_connections in my dashboard. 🙂

2) Enable PHP-FPM Monitoring (Plugin)

It looks like PHP-FPM monitoring was recently added; let’s set that up too. Read my guide on setting up PHP child workers here.

We’ve added a premade dashboard for PHP-FPM. If you’re not yet monitoring PHP-FPM take a look at the integration guide https://t.co/X4ywRHw9hX pic.twitter.com/aag1fTsr3R

— Nixstats (@nixstats) September 6, 2018

To enable PHP-FPM monitoring I read https://help.nixstats.com/en/article/monitoring-php-fpm-1tlyur6/

I edited my PHP-FPM pool config file

sudo nano /etc/php/7.2/fpm/pool.d/www.conf

I added the following line

pm.status_path = /status_phpfpm

I restarted PHP-FPM:

sudo service php7.2-fpm restart

I added the following to the localhost server block in /etc/nginx/sites-enabled/default (added above). Note: I use PHP 7.2 below (read more here).

server {
    listen 127.0.0.1:8080;
    server_name localhost;

    location /status_phpfpm {
        access_log off;
        allow 127.0.0.1;
        deny all;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        log_not_found off;
    }
}

I tested, reloaded and restarted NGINX

nginx -t
nginx -s reload
/etc/init.d/nginx restart

I restarted PHP-FPM

sudo systemctl restart php7.2-fpm

I enabled the plugin in /etc/nixstats.ini

[phpfpm]
enabled = yes
status_page_url = http://127.0.0.1:8080/status_phpfpm?json

I tested the status page

wget -qO- http://127.0.0.1:8080/status_phpfpm?json

Output:

{
	"pool":"www",
	"process manager":"static",
	"start time":1538654543,
	"start since":178,
	"accepted conn":28,
	"listen queue":0,
	"max listen queue":0,
	"listen queue len":0,
	"idle processes":49,
	"active processes":1,
	"total processes":50,
	"max active processes":2,
	"max children reached":0,
	"slow requests":0
}
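The status JSON above is flat, so in a pinch you can pull a single number out of it without a JSON parser; a grep/cut sketch (fpm_field is my own helper name; the quoting matters because the keys contain spaces):

```shell
#!/bin/sh
# fpm_field KEY: read PHP-FPM's flat JSON status on stdin and print the
# numeric value for KEY (e.g. "active processes"). This only works
# because the status JSON is flat and unnested.
fpm_field() {
    grep -o "\"$1\":[0-9]*" | cut -d: -f2
}
```

Usage: `wget -qO- 'http://127.0.0.1:8080/status_phpfpm?json' | fpm_field "active processes"`.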

I tested the agent

nixstatsagent --test phpfpm

Output:

phpfpm:
{
    "accepted_conn": 51,
    "accepted_conn_per_second": 0.0,
    "active_processes": 1,
    "idle_processes": 49,
    "listen_queue": 0,
    "listen_queue_len": 0,
    "max_active_processes": 2,
    "max_children_reached": 0,
    "max_listen_queue": 0,
    "pool": "www",
    "process_manager": "static",
    "slow_requests": 0,
    "start_since": 318,
    "start_time": 1538654543,
    "total_processes": 50
}

I can now query PHP-FPM status values in Nixstats 🙂

Query PHP-FPM

Enable MySQL Monitoring (Plugin)

To enable MySQL monitoring I read https://help.nixstats.com/en/article/monitoring-mysql-1frskd8/

I edited the nixstats.ini

sudo nano /etc/nixstats.ini

I enabled the [mysql] section in nixstats.ini and added my MySQL credentials:

[mysql]
enabled=yes
username=mysqluser
password=#######################
host=127.0.0.1
database=mysql
port=3306
socket=null

I ran this command to test MySQL querying

nixstatsagent --test mysql
mysql:
Load error: No module named MySQLdb

I had an error.

A quick Google search revealed I had to install the Python MySQL module:

sudo apt-get install python-mysqldb

Allow localhost to connect to MySQL

I edited /etc/mysql.cnf to allow both localhost and external connections (I could not bind to localhost and an external IP at the same time):

bind-address    = 0.0.0.0

TIP: Ensure access to your MySQL server is firewalled; never open it up without protection.
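To double-check what address MySQL is configured to bind, a small sketch (get_bind_address is my own helper name) that reads the setting back out of a my.cnf-style file:

```shell
#!/bin/sh
# get_bind_address FILE: print the bind-address value from a
# my.cnf-style config file, ignoring surrounding whitespace.
get_bind_address() {
    awk -F= '/^[ \t]*bind-address[ \t]*=/ { gsub(/[ \t]/, "", $2); print $2 }' "$1"
}
```

Usage: `get_bind_address /etc/mysql.cnf` should print 0.0.0.0 after the change above.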

Let’s try again

nixstatsagent --test mysql

Output

mysql:
{
    "aborted_clients": 0,
    "aborted_connects": 0,
    "binlog_cache_disk_use": 0,
    "binlog_cache_use": 0,
    "bytes_received": 0,
    "bytes_sent": 0,
    "com_delete": 0,
    "com_delete_multi": 0,
    "com_insert": 0,
    "com_insert_select": 0,
    "com_load": 0,
    "com_replace": 0,
    "com_replace_select": 0,
    "com_select": 0,
    "com_update": 0,
    "com_update_multi": 0,
    "connections": 0,
    "created_tmp_disk_tables": 0,
    "created_tmp_files": 0,
    "created_tmp_tables": 0,
    "key_read_requests": 0,
    "key_reads": 0,
    "key_write_requests": 0,
    "key_writes": 0,
    "max_used_connections": 3.0,
    "open_files": 14.0,
    "open_tables": 316.0,
    "opened_tables": 0,
    "qcache_free_blocks": 1.0,
    "qcache_free_memory": 16760152.0,
    "qcache_hits": 0,
    "qcache_inserts": 0,
    "qcache_lowmem_prunes": 0,
    "qcache_not_cached": 0,
    "qcache_queries_in_cache": 0,
    "qcache_total_blocks": 1.0,
    "questions": 0,
    "select_full_join": 0,
    "select_full_range_join": 0,
    "select_range": 0,
    "select_range_check": 0,
    "select_scan": 0,
    "slave_open_temp_tables": 0.0,
    "slow_launch_threads": 0,
    "slow_queries": 0,
    "sort_range": 0,
    "sort_rows": 0,
    "sort_scan": 0,
    "table_locks_immediate": 0,
    "table_locks_waited": 0,
    "threads_cached": 2.0,
    "threads_connected": 1.0,
    "threads_created": 0,
    "threads_running": 1.0,
    "uptime": 35837.0
}

Nice.

I restarted MySQL

sudo systemctl restart mysql

I restarted my Nixstats service

service nixstatsagent restart

Now let’s monitor MySQL in Nixstats.

I can now view MySQL metrics.

MySQL Metrics

Status Page

Nixstats allows you to create a status page ( https://nixstats.com/pages/overview ) where you can add any servers or monitors. This status page feature is truly awesome; it builds a live status page based on data coming from your installed agents.

You can even set up a custom subdomain that points to a Nixstats hosted status page too (e.g https://status.yourdomain.com).

FYI: An SSL certificate on your status page may take a few hours to set up. Don’t panic if it is not instantly available.

Custom Status Page

Nice.

This saves doing it yourself. The status page will look like it is running on your own server.

Status Page

You can create a status page that automatically aggregates collected data from monitors and displays them in a nice layout.

Status Page

This is great, I used to do my own status pages but not anymore.

Alerts

I added a contact so I could receive alerts. I could then add my mobile number, email, Pushover key (to receive push notifications) and Telegram bot API token.

Contact

Test Alerts

I sent a test alert to each service against the contact.

Test Alerts

I activated a Pushover licence on my Android device for about $7.49 AUD (one-off) to ensure I keep getting push notifications.

Bought licence for PushOver

Nixstats has links that show you how to create a Telegram bot and a Pushover.net account.

Pushover costs about $5 USD one-off per device (see the FAQ).

I created the following alerts

Alert: Disk Usage higher than 90%

Alert Disk Usage higher than 90%

Alert: Load greater than 90 percent for 1 minute

Alert load greater than 90 percent for 1 minute

Alert: Less than 5 percent memory free.

Alert less than 5 percent memory free

Summary of alerts.

Alert Summary

I also added an alert for CPU above 95% for 5 minutes (not pictured above).

I forgot to specify alert recipients and methods for each alert so I edited each alert and added the contact.

Selected Alert Recipients and methods

Now it’s time to test the alerts.

I shut down a server to test alerts

shutdown -h now

Alerts to my defined Email, SMS, Push and Telegram are working a treat 🙂

Alerts Working

After I rebooted the server I also received alerts about the server being back up.

The status page showed the server that was offline too.

Server Offline

Nice

Troubleshooting

I had an issue installing the agent on Debian.

I ran the following command

wget --no-check-certificate -N https://www.nixstats.com/nixstatsagent.sh && bash nixstatsagent.sh #######################
--2018-10-02 00:41:38--  https://www.nixstats.com/nixstatsagent.sh
Resolving www.nixstats.com (www.nixstats.com)... 2400:cb00:2048:1::6819:8113, 2400:cb00:2048:1::6819:8013, 104.25.129.19, ...
Connecting to www.nixstats.com (www.nixstats.com)|2400:cb00:2048:1::6819:8113|:443... connected.
HTTP request sent, awaiting response... 304 Not Modified
File 'nixstatsagent.sh' not modified on server. Omitting download.

nixstatsagent.sh: line 508: [: Installer exited with error code 0. See nixstatsagent.log for details.: integer expression expected

An error occurred, please check the install log file (nixstatsagent.log)!

Contents of nixstatsagent.log

cat nixstatsagent.log
Ign:1 http://deb.debian.org/debian stretch InRelease
Hit:2 http://deb.debian.org/debian-security stretch/updates InRelease
Hit:3 http://deb.debian.org/debian stretch-updates InRelease
Hit:4 http://deb.debian.org/debian stretch Release
Ign:5 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic InRelease
Ign:7 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic Release
Ign:8 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main all Packages
Hit:9 https://packages.sury.org/php stretch InRelease
Ign:10 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main amd64 Packages
Ign:11 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main Translation-en
Ign:8 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main all Packages
Ign:10 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main amd64 Packages
Ign:11 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main Translation-en
Ign:8 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main all Packages
Ign:10 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main amd64 Packages
Ign:11 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main Translation-en
Ign:8 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main all Packages
Ign:10 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main amd64 Packages
Ign:11 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main Translation-en
Ign:8 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main all Packages
Ign:10 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main amd64 Packages
Ign:11 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main Translation-en
Ign:8 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main all Packages
Err:10 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main amd64 Packages
  404  Not Found
Ign:11 http://ppa.launchpad.net/ondrej/php/ubuntu cosmic/main Translation-en
Reading package lists...
W: The repository 'http://ppa.launchpad.net/ondrej/php/ubuntu cosmic Release' does not have a Release file.
E: Failed to fetch http://ppa.launchpad.net/ondrej/php/ubuntu/dists/cosmic/main/binary-amd64/Packages  404  Not Found
E: Some index files failed to download. They have been ignored, or old ones used instead.
nixstatsagent.sh: line 118: apt-get upgrade returned error code 100. Please see nixstatsagent.log for details.: command not found

I asked the Nixstats chat help and was advised I had a dead repository. I removed it (by editing the dead repo entry in the appropriate file under /etc/apt/) and all was OK.

I had trouble testing my Telegram alerts, but it was my fault: I forgot to follow the bot account I created. Telegram does not deliver messages from a bot unless you follow it.

A chat with the Nixstats staff sorted me out. Thanks, Nixstats chat team.

Nixstats chat

I had an issue with a missing python mysql package

Load error: No module named MySQLdb

I solved it by installing python-mysqldb:

sudo apt-get install python-mysqldb

Nixstats Help

Nixstats have a help subdomain: https://help.nixstats.com/en/

Nixstats Help

Error Logs Plugin

I asked Nixstats on Twitter and they said they are working on a logging plugin; I can’t wait for that.

We’re launching a closed beta for Logging at Nixstats. Contact us to get setup! You can search and tail log files across all your servers! pic.twitter.com/FIeip2SOUw

— Nixstats (@nixstats) October 4, 2018

I now have access to the beta log features and can see log tabs in Nixstats.

I had to check the version of my rsyslogd

rsyslogd -v
rsyslogd 8.32.0, compiled with:
        PLATFORM:                               x86_64-pc-linux-gnu
        PLATFORM (lsb_release -d):
        FEATURE_REGEXP:                         Yes
        GSSAPI Kerberos 5 support:              Yes
        FEATURE_DEBUG (debug build, slow code): No
        32bit Atomic operations supported:      Yes
        64bit Atomic operations supported:      Yes
        memory allocator:                       system default
        Runtime Instrumentation (slow code):    No
        uuid support:                           Yes
        systemd support:                        Yes
        Number of Bits in RainerScript integers: 64

I edited: /etc/rsyslog.d/31-nixstats.conf

I pasted the following:

##########################################################
### Rsyslog Template for Nixstats ###
##########################################################

$WorkDirectory /var/spool/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down

template(name="NixFormat" type="string"
string="<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [[email protected] tag=\"rsyslog\"] %msg%\n"
)

action(type="omfwd" protocol="tcp" target="log.nixstats.com" port="514" template="NixFormat")
#################END CONFIG FILE#########################

I restarted the rsyslog service

sudo service rsyslog restart

Live Log Output

I can see a live log from (unknown) logs.

I can see the firewall blocking access to certain ports.

Live Log

Search logs

Search

Blacklist Checking (Beta)

Nixstats tweeted “We just launched a new blacklist check feature. Monitor your IP and hostname reputation. Free during the Beta!”

I enabled it.

I received a Blacklist notification

I requested removal at junkmailfilter.com

Thanks, Nixstats

Conclusion

This is one of the best software packages I have seen in a while. I have developed status pages in the past (no more). Great work, Nixstats.

Please check out Nixstats today; they are awesome. Sign up for a free account and consider the limited-time founder plan (it’s a bargain).

Nixstats Live chat support is awesome

Server Plug

If you need to create a server online, consider using my referral code to get $25 of UpCloud VM credit.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id="30" title="Ask a Question"]

Revision History

v1.3 Blacklist beta

V1.2 Logs beta

v1.1 Logging Tweet

v1.0 Initial Post

Filed Under: Alert, Analytics, Cloud, Domain, Monitor Tagged With: alerts, and, by, email, etc, monitor, NixStats, Performance, push, receive, server, SMS, Telegram, with

Set up Feature-Policy, Referrer-Policy and Content Security Policy headers in Nginx

July 17, 2018 by Simon

This is a quick post that shows how I set up the “Feature-Policy”, “Referrer-Policy” and “Content Security Policy” headers in Nginx to tighten security and privacy.

Aside

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Now on with the post.

Add a Feature Policy Header

Upon visiting https://securityheaders.com/ I found references to a Feature-Policy header (a W3C specification) that allows you to define which browser features your webpage can use, along with other headers.

Google mentions the Feature-Policy header here.

Browser features that we can enable or block with Feature-Policy headers:

  • geolocation
  • midi
  • notifications
  • push
  • sync-xhr
  • microphone
  • camera
  • magnetometer
  • gyroscope
  • speaker
  • vibrate
  • fullscreen
  • payment

Feature Policy Values

  • * = The feature is allowed in documents in top-level browsing contexts by default, and when allowed, is allowed by default to documents in nested browsing contexts.
  • self = The feature is allowed in documents in top-level browsing contexts by default, and when allowed, is allowed by default to same-origin domain documents in nested browsing contexts, but is disallowed by default in cross-origin documents in nested browsing contexts.
  • none = The feature is disallowed in documents in top-level browsing contexts by default and is also disallowed by default to documents in nested browsing contexts.

My Final Feature Policy Header

I added this header to Nginx

sudo nano /etc/nginx/sites-available/default

This essentially disables all browser features when visitors access my site

add_header Feature-Policy "geolocation 'none';midi 'none';notifications 'none';push 'none';sync-xhr 'none';microphone 'none';camera 'none';magnetometer 'none';gyroscope 'none';speaker 'self';vibrate 'none';fullscreen 'self';payment 'none';";

I reloaded the Nginx config and restarted Nginx

nginx -t
nginx -s reload
/etc/init.d/nginx restart

Feature-Policy Results

I verified my feature-policy header with https://securityheaders.com/

Feature Policy score from https://securityheaders.com/?q=fearby.com&followRedirects=on

Nice, Feature-Policy is now enabled.

Now I need to enable the following headers

  • Content-Security-Policy (read more here)
  • Referrer-Policy (read more here)

Add a Referrer-Policy Header

I added this header configuration in Nginx to prevent referrers being leaked over insecure protocols.

add_header Referrer-Policy "no-referrer-when-downgrade";

Referrer-Policy Results

Again, I verified my referrer policy header with https://securityheaders.com/

Referrer Policy results from https://securityheaders.com/?q=fearby.com&followRedirects=on

Done, now I just need to set up the Content Security Policy.

Add a Content Security Policy header

I read my old guide on Beyond SSL with Content Security Policy, Public Key Pinning etc before setting up a Content Security Policy again (I had disabled it a while ago). Setting up a fully working CSP is very complex, and if you don’t want to review CSP errors and modify the CSP over time, this may not be for you.

Read more about Content Security Policy here: https://content-security-policy.com/

I added my old CSP to Nginx

> add_header Content-Security-Policy "default-src 'self'; frame-ancestors 'self'; script-src 'self' 'unsafe-inline' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; style-src 'self' 'unsafe-inline' https://fearby.com:* https://fearby-com.exactdn.com:* https://fonts.googleapis.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:*; img-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://secure.gravatar.com:* https://www.google-analytics.com:*; font-src 'self' data: https://fearby.com:* https://fearby-com.exactdn.com:* https://fonts.googleapis.com:* https://fonts.gstatic.com:* https://cdn.joinhoney.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:*; connect-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; media-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://secure.gravatar.com:* https://www.google-analytics.com:*; child-src 'self' https://player.vimeo.com https://fearby-com.exactdn.com:* https://www.youtube.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; form-action 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://fearby-com.exactdn.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:*; " always;

I then imported the CSP into https://report-uri.com/home/generate and enabled more recent CSP values.

add_header Content-Security-Policy "default-src 'self' ; script-src * 'self' data: 'unsafe-inline' 'unsafe-eval' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://www.google-analytics.com:* https://pagead2.googlesyndication.com:* https://www.youtube.com:* https://adservice.google.com.au:* https://s.ytimg.com:* about; style-src 'self' data: 'unsafe-inline' https://fearby.com:* https://fearby-com.exactdn.com:* https://fonts.googleapis.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:*; img-src 'self' data: https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://secure.gravatar.com:* https://www.google-analytics.com:* https://a.impactradius-go.com:* https://www.paypalobjects.com:* https://namecheap.pxf.io:* https://www.paypalobjects.com:* https://stats.g.doubleclick.net:* https://*.doubleclick.net:* https://stats.g.doubleclick.net:* https://www.ojrq.net:* https://ak1s.abmr.net:* https://*.abmr.net:*; font-src 'self' data: https://fearby.com:* https://fearby-com.exactdn.com:* https://fonts.googleapis.com:* https://fonts.gstatic.com:* https://cdn.joinhoney.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:* https://googleads.g.doubleclick.net:*; connect-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://www.google-analytics.com:*; media-src 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://*.google-analytics.com https://*.google.com https://www.googletagmanager.com:* https://secure.gravatar.com:* https://www.google-analytics.com:*; object-src 'self' ; child-src 'self' https://player.vimeo.com https://fearby-com.exactdn.com:* https://www.youtube.com https://www.googletagmanager.com:* 
https://www.google-analytics.com:*; frame-src 'self' https://www.youtube.com:* https://googleads.g.doubleclick.net:* https://*doubleclick.net; worker-src 'self' ; frame-ancestors 'self' ; form-action 'self' https://fearby.com:* https://fearby-com.exactdn.com:* https://fearby-com.exactdn.com:* https://www.googletagmanager.com:* https://www.google-analytics.com:* https://www.google-analytics.com:*; upgrade-insecure-requests; block-all-mixed-content; disown-opener; reflected-xss block; base-uri https://fearby.com:*; manifest-src 'self' 'self' 'self'; referrer no-referrer-when-downgrade; report-uri https://fearby.report-uri.com/r/d/csp/enforce;" always;

I restarted Nginx

nginx -t
nginx -s reload
/etc/init.d/nginx restart

I loaded the Google Developer Console to see any CSP errors when loading my site.

CSP Errors

I enabled reporting of CSP errors to https://fearby.report-uri.com/r/d/csp/enforce

FYI: Content Security Policy OWASP Cheat Sheet.

You can validate CSP with https://cspvalidator.org

Now I won’t have to check my Chrome Developer Console; visitors to my site will report errors instead. I can see my site visitors’ CSP errors at https://report-uri.com/

report-uri.com Report

Content Security Policy Results

I reviewed the reported errors and made some more CSP changes. I will continue to lock down my CSP and make more changes before making this CSP policy live.

I verified my header with https://securityheaders.com/

Security Headers report from https://securityheaders.com/?q=https%3A%2F%2Ffearby.com&followRedirects=on

Testing Policies

TIP: Use the header name “Content-Security-Policy-Report-Only” instead of “Content-Security-Policy” to report errors without enforcing the policy before making CSP changes live.
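For example, a minimal report-only variant of the Nginx directive (a sketch; swap in your full policy and your own report-uri.com reporting address so violations get sent somewhere):

```nginx
add_header Content-Security-Policy-Report-Only "default-src 'self';" always;
```

Browsers will log (and, with a report-uri set, report) violations without blocking anything, so you can tune the policy safely.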

I did not want to go live too soon; I had issues with some WordPress plugins not working in the WordPress admin screens.

Reviewing Errors

Do check your reported errors and update your CSP often; I had a post with a load of Twitter-related errors.

Do check report-uri errors.

I hope this guide helps someone.

Please consider using my referral code to get $25 of UpCloud VM credit if you need to create a server online.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id="30" title="Ask a Question"]

Revision History

V1.3 https://cspvalidator.org

v1.2 OWASP Cheat Sheet.

v1.1 added info on WordPress errors.

v1.0 Initial Post

Filed Under: Audit, Cloud, Content Security Policy, Development, Feature-Policy, HTTPS, NGINX, Referrer-Policy, Security, Ubuntu Tagged With: Content Security Policy, CSP, Feature-Policy, nginx, Referrer-Policy, security

Upgrading an Ubuntu server on UpCloud to add more CPU, Memory and Disk Space

June 25, 2018 by Simon


If you have not read my previous posts I have now moved my blog from Vultr to the awesome UpCloud host (signup using this link to get $25 free credit).

Recently I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

Spoiler: UpCloud performance is great.

Upcloud Site Speed in GTMetrix

Why Upgrade

I have 1 CPU, 1GB of memory and 50GB of storage, and it is running well. I have PHP child workers set up and have configured the OS to prefer memory over swap file usage.

View of htop querying processes on Ubuntu

Before UpCloud, when I had 512MB of RAM on Vultr, I had multiple NGINX crashes a day, so I used a bash script scheduled via a cron job to clear the memory cache when free memory fell below 100MB (view the script here). To further increase the speed of WordPress I have configured the OS to use memory over the disk. About once a day free memory falls below 100MB (this is not a problem as my script clears cached items automatically).
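The logic of that script can be sketched like this (my assumption of its approach, not the original script; actually dropping caches requires root, so this version only prints the decision):

```shell
# Sketch: clear caches when free memory drops below a threshold.
threshold_mb=100
free_mb=$(free -m | awk '/^Mem:/ {print $4}')   # column 4 = "free" in MB
if [ "$free_mb" -lt "$threshold_mb" ]; then
  # The real script would run (as root): sync; echo 3 > /proc/sys/vm/drop_caches
  echo "low memory: ${free_mb}MB free"
else
  echo "memory OK: ${free_mb}MB free"
fi
```

Scheduled from cron every few minutes, this keeps cache pressure from starving NGINX and PHP of memory.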

Graph of memory falling below 100MB every day

I’d like to add more memory as I am working on some things (watch this space) and I will use the extra memory. I’d prefer the server is set up now for the expected workload.

How to Upgrade

This is how I upgraded from a 1 CPU/2GB Memory/50GB Storage/2TB Transfer server to a 2 CPU/4GB Memory/80GB Storage/4TB Transfer server.

UpCloud Pricing: https://www.upcloud.com/pricing/

Pricing table from https://www.upcloud.com/pricing/

Upgrade an UpCloud VM

I shut down my existing VM. Read this guide to set up a VM.

shutdown -P

Log in to the UpCloud dashboard, select your server (confirm the server has shut down), click General Settings, choose the upgrade and click Update.

Upgrade the server: shut the server down and choose the upgrade

I confirmed the upgrade options (2x CPU, 4096 MB Memory).

Confirm Upgrade Options

Click Update

Upgrade Applied

After 10 seconds you can start your server from the UpCloud dashboard.

I confirmed the CPU upgrade was visible in the VM

cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2687W v3 @ 3.10GHz
stepping        : 2
microcode       : 0x1
cpu MHz         : 3099.978
cache size      : 16384 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass
bogomips        : 6199.95
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2687W v3 @ 3.10GHz
stepping        : 2
microcode       : 0x1
cpu MHz         : 3099.978
cache size      : 16384 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass
bogomips        : 6199.95
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
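A quicker sanity check than reading /proc/cpuinfo in full is to count the logical CPUs directly (both counts should report 2 after this upgrade):

```shell
# Count logical CPUs two ways: /proc/cpuinfo entries and coreutils nproc.
cpus_proc=$(grep -c ^processor /proc/cpuinfo)
cpus_nproc=$(nproc)
echo "$cpus_proc $cpus_nproc"
```

Note that nproc respects CPU affinity limits, so inside a restricted container the two numbers can differ.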

Software Tweaks Post Upgrade

I added these settings to the top of /etc/nginx/nginx.conf to ensure the extra CPU was used.

worker_processes auto;
worker_cpu_affinity auto;

I increased the PHP-FPM limits to allow more memory usage and child workers: memory_limit is set in /etc/php/7.2/fpm/php.ini, while the pm.* child worker settings live in the pool config (/etc/php/7.2/fpm/pool.d/www.conf). I doubled the child workers and the max memory limit.

memory_limit = 3072M
pm.max_children = 80
pm.start_servers = 30
pm.min_spare_servers = 10
pm.max_spare_servers = 30
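One way to sanity check pm.max_children (my rule of thumb, not an official formula) is to divide the memory budget by a typical PHP-FPM worker size. Both numbers below are assumptions: a 3072MB budget and workers averaging around 38MB each.

```shell
# Rough sizing: how many average-sized PHP-FPM workers fit in the RAM budget?
ram_budget_mb=3072   # memory set aside for PHP-FPM workers (assumption)
avg_worker_mb=38     # rough per-worker RSS (assumption; measure with ps/top)
echo $((ram_budget_mb / avg_worker_mb))
```

That yields 80, matching the pm.max_children value above; measure your real per-worker memory with ps or top before relying on it.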

I restarted NGINX and PHP

nginx -t
nginx -s reload
/etc/init.d/nginx restart
service php7.2-fpm restart

I tweaked the WordPress max memory limits in wp-config.php

define( 'WP_MEMORY_LIMIT','3072M');
define( 'WP_MAX_MEMORY_LIMIT','3072M');

MySQL Tweaks: I logged into MySQL

mysql -u root -p

I ran “SHOW GLOBAL STATUS” to view stats

mysql> SHOW GLOBAL STATUS;
+-----------------------------------------------+--------------------------------------------------+
| Variable_name                                 | Value                                            |
+-----------------------------------------------+--------------------------------------------------+
| Aborted_clients                               | 0                                                |
| Aborted_connects                              | 0                                                |
| Binlog_cache_disk_use                         | 0                                                |
| Binlog_cache_use                              | 0                                                |
| Binlog_stmt_cache_disk_use                    | 0                                                |
| Binlog_stmt_cache_use                         | 0                                                |
| Bytes_received                                | 3179986                                          |
| Bytes_sent                                    | 223872114                                        |
| Com_admin_commands                            | 0                                                |
| Com_assign_to_keycache                        | 0                                                |
| Com_alter_db                                  | 0                                                |
| Com_alter_db_upgrade                          | 0                                                |
| Com_alter_event                               | 0                                                |
| Com_alter_function                            | 0                                                |
| Com_alter_instance                            | 0                                                |
| Com_alter_procedure                           | 0                                                |
| Com_alter_server                              | 0                                                |
| Com_alter_table                               | 0                                                |
| Com_alter_tablespace                          | 0                                                |
| Com_alter_user                                | 0                                                |
| Com_analyze                                   | 0                                                |
| Com_begin                                     | 0                                                |
| Com_binlog                                    | 0                                                |
| Com_call_procedure                            | 0                                                |
| Com_change_db                                 | 284                                              |
| Com_change_master                             | 0                                                |
| Com_change_repl_filter                        | 0                                                |
| Com_check                                     | 0                                                |
| Com_checksum                                  | 0                                                |
| Com_commit                                    | 0                                                |
| Com_create_db                                 | 0                                                |
| Com_create_event                              | 0                                                |
| Com_create_function                           | 0                                                |
| Com_create_index                              | 0                                                |
| Com_create_procedure                          | 0                                                |
| Com_create_server                             | 0                                                |
| Com_create_table                              | 0                                                |
| Com_create_trigger                            | 0                                                |
| Com_create_udf                                | 0                                                |
| Com_create_user                               | 0                                                |
| Com_create_view                               | 0                                                |
| Com_dealloc_sql                               | 0                                                |
| Com_delete                                    | 18                                               |
| Com_delete_multi                              | 0                                                |
| Com_do                                        | 0                                                |
| Com_drop_db                                   | 0                                                |
| Com_drop_event                                | 0                                                |
| Com_drop_function                             | 0                                                |
| Com_drop_index                                | 0                                                |
| Com_drop_procedure                            | 0                                                |
| Com_drop_server                               | 0                                                |
| Com_drop_table                                | 0                                                |
| Com_drop_trigger                              | 0                                                |
| Com_drop_user                                 | 0                                                |
| Com_drop_view                                 | 0                                                |
| Com_empty_query                               | 0                                                |
| Com_execute_sql                               | 0                                                |
| Com_explain_other                             | 0                                                |
| Com_flush                                     | 0                                                |
| Com_get_diagnostics                           | 0                                                |
| Com_grant                                     | 0                                                |
| Com_ha_close                                  | 0                                                |
| Com_ha_open                                   | 0                                                |
| Com_ha_read                                   | 0                                                |
| Com_help                                      | 0                                                |
| Com_insert                                    | 342                                              |
| Com_insert_select                             | 0                                                |
| Com_install_plugin                            | 0                                                |
| Com_kill                                      | 0                                                |
| Com_load                                      | 0                                                |
| Com_lock_tables                               | 0                                                |
| Com_optimize                                  | 0                                                |
| Com_preload_keys                              | 0                                                |
| Com_prepare_sql                               | 0                                                |
| Com_purge                                     | 0                                                |
| Com_purge_before_date                         | 0                                                |
| Com_release_savepoint                         | 0                                                |
| Com_rename_table                              | 0                                                |
| Com_rename_user                               | 0                                                |
| Com_repair                                    | 0                                                |
| Com_replace                                   | 0                                                |
| Com_replace_select                            | 0                                                |
| Com_reset                                     | 0                                                |
| Com_resignal                                  | 0                                                |
| Com_revoke                                    | 0                                                |
| Com_revoke_all                                | 0                                                |
| Com_rollback                                  | 0                                                |
| Com_rollback_to_savepoint                     | 0                                                |
| Com_savepoint                                 | 0                                                |
| Com_select                                    | 16358                                            |
| Com_set_option                                | 849                                              |
| Com_signal                                    | 0                                                |
| Com_show_binlog_events                        | 0                                                |
| Com_show_binlogs                              | 0                                                |
| Com_show_charsets                             | 0                                                |
| Com_show_collations                           | 0                                                |
| Com_show_create_db                            | 0                                                |
| Com_show_create_event                         | 0                                                |
| Com_show_create_func                          | 0                                                |
| Com_show_create_proc                          | 0                                                |
| Com_show_create_table                         | 0                                                |
| Com_show_create_trigger                       | 0                                                |
| Com_show_databases                            | 3                                                |
| Com_show_engine_logs                          | 0                                                |
| Com_show_engine_mutex                         | 0                                                |
| Com_show_engine_status                        | 0                                                |
| Com_show_events                               | 0                                                |
| Com_show_errors                               | 0                                                |
| Com_show_fields                               | 921                                              |
| Com_show_function_code                        | 0                                                |
| Com_show_function_status                      | 0                                                |
| Com_show_grants                               | 0                                                |
| Com_show_keys                                 | 1                                                |
| Com_show_master_status                        | 0                                                |
| Com_show_open_tables                          | 0                                                |
| Com_show_plugins                              | 0                                                |
| Com_show_privileges                           | 0                                                |
| Com_show_procedure_code                       | 0                                                |
| Com_show_procedure_status                     | 0                                                |
| Com_show_processlist                          | 0                                                |
| Com_show_profile                              | 0                                                |
| Com_show_profiles                             | 0                                                |
| Com_show_relaylog_events                      | 0                                                |
| Com_show_slave_hosts                          | 0                                                |
| Com_show_slave_status                         | 0                                                |
| Com_show_status                               | 6                                                |
| Com_show_storage_engines                      | 0                                                |
| Com_show_table_status                         | 0                                                |
| Com_show_tables                               | 2                                                |
| Com_show_triggers                             | 0                                                |
| Com_show_variables                            | 6                                                |
| Com_show_warnings                             | 1                                                |
| Com_show_create_user                          | 0                                                |
| Com_shutdown                                  | 0                                                |
| Com_slave_start                               | 0                                                |
| Com_slave_stop                                | 0                                                |
| Com_group_replication_start                   | 0                                                |
| Com_group_replication_stop                    | 0                                                |
| Com_stmt_execute                              | 4                                                |
| Com_stmt_close                                | 4                                                |
| Com_stmt_fetch                                | 0                                                |
| Com_stmt_prepare                              | 4                                                |
| Com_stmt_reset                                | 0                                                |
| Com_stmt_send_long_data                       | 4                                                |
| Com_truncate                                  | 2                                                |
| Com_uninstall_plugin                          | 0                                                |
| Com_unlock_tables                             | 0                                                |
| Com_update                                    | 70                                               |
| Com_update_multi                              | 0                                                |
| Com_xa_commit                                 | 0                                                |
| Com_xa_end                                    | 0                                                |
| Com_xa_prepare                                | 0                                                |
| Com_xa_recover                                | 0                                                |
| Com_xa_rollback                               | 0                                                |
| Com_xa_start                                  | 0                                                |
| Com_stmt_reprepare                            | 0                                                |
| Connection_errors_accept                      | 0                                                |
| Connection_errors_internal                    | 0                                                |
| Connection_errors_max_connections             | 0                                                |
| Connection_errors_peer_address                | 0                                                |
| Connection_errors_select                      | 0                                                |
| Connection_errors_tcpwrap                     | 0                                                |
| Connections                                   | 292                                              |
| Created_tmp_disk_tables                       | 1124                                             |
| Created_tmp_files                             | 30                                               |
| Created_tmp_tables                            | 1369                                             |
| Delayed_errors                                | 0                                                |
| Delayed_insert_threads                        | 0                                                |
| Delayed_writes                                | 0                                                |
| Flush_commands                                | 1                                                |
| Handler_commit                                | 6094                                             |
| Handler_delete                                | 33                                               |
| Handler_discover                              | 0                                                |
| Handler_external_lock                         | 38571                                            |
| Handler_mrr_init                              | 0                                                |
| Handler_prepare                               | 0                                                |
| Handler_read_first                            | 2299                                             |
| Handler_read_key                              | 134761                                           |
| Handler_read_last                             | 237                                              |
| Handler_read_next                             | 310119                                           |
| Handler_read_prev                             | 2733                                             |
| Handler_read_rnd                              | 222350                                           |
| Handler_read_rnd_next                         | 472820                                           |
| Handler_rollback                              | 0                                                |
| Handler_savepoint                             | 0                                                |
| Handler_savepoint_rollback                    | 0                                                |
| Handler_update                                | 15605                                            |
| Handler_write                                 | 17310                                            |
| Innodb_buffer_pool_dump_status                | Dumping of buffer pool not started               |
| Innodb_buffer_pool_load_status                | Buffer pool(s) load completed at 180624 23:38:01 |
| Innodb_buffer_pool_resize_status              |                                                  |
| Innodb_buffer_pool_pages_data                 | 1035                                             |
| Innodb_buffer_pool_bytes_data                 | 16957440                                         |
| Innodb_buffer_pool_pages_dirty                | 0                                                |
| Innodb_buffer_pool_bytes_dirty                | 0                                                |
| Innodb_buffer_pool_pages_flushed              | 1936                                             |
| Innodb_buffer_pool_pages_free                 | 7144                                             |
| Innodb_buffer_pool_pages_misc                 | 13                                               |
| Innodb_buffer_pool_pages_total                | 8192                                             |
| Innodb_buffer_pool_read_ahead_rnd             | 0                                                |
| Innodb_buffer_pool_read_ahead                 | 0                                                |
| Innodb_buffer_pool_read_ahead_evicted         | 0                                                |
| Innodb_buffer_pool_read_requests              | 306665                                           |
| Innodb_buffer_pool_reads                      | 950                                              |
| Innodb_buffer_pool_wait_free                  | 0                                                |
| Innodb_buffer_pool_write_requests             | 26509                                            |
| Innodb_data_fsyncs                            | 1229                                             |
| Innodb_data_pending_fsyncs                    | 0                                                |
| Innodb_data_pending_reads                     | 0                                                |
| Innodb_data_pending_writes                    | 0                                                |
| Innodb_data_read                              | 16273920                                         |
| Innodb_data_reads                             | 1078                                             |
| Innodb_data_writes                            | 2857                                             |
| Innodb_data_written                           | 53379584                                         |
| Innodb_dblwr_pages_written                    | 1275                                             |
| Innodb_dblwr_writes                           | 109                                              |
| Innodb_log_waits                              | 0                                                |
| Innodb_log_write_requests                     | 450                                              |
| Innodb_log_writes                             | 585                                              |
| Innodb_os_log_fsyncs                          | 793                                              |
| Innodb_os_log_pending_fsyncs                  | 0                                                |
| Innodb_os_log_pending_writes                  | 0                                                |
| Innodb_os_log_written                         | 664064                                           |
| Innodb_page_size                              | 16384                                            |
| Innodb_pages_created                          | 56                                               |
| Innodb_pages_read                             | 988                                              |
| Innodb_pages_written                          | 1936                                             |
| Innodb_row_lock_current_waits                 | 0                                                |
| Innodb_row_lock_time                          | 0                                                |
| Innodb_row_lock_time_avg                      | 0                                                |
| Innodb_row_lock_time_max                      | 0                                                |
| Innodb_row_lock_waits                         | 0                                                |
| Innodb_rows_deleted                           | 2                                                |
| Innodb_rows_inserted                          | 19219                                            |
| Innodb_rows_read                              | 249102                                           |
| Innodb_rows_updated                           | 77                                               |
| Innodb_num_open_files                         | 81                                               |
| Innodb_truncated_status_writes                | 0                                                |
| Innodb_available_undo_logs                    | 128                                              |
| Key_blocks_not_flushed                        | 0                                                |
| Key_blocks_unused                             | 12751                                            |
| Key_blocks_used                               | 645                                              |
| Key_read_requests                             | 321877                                           |
| Key_reads                                     | 648                                              |
| Key_write_requests                            | 196                                              |
| Key_writes                                    | 150                                              |
| Locked_connects                               | 0                                                |
| Max_execution_time_exceeded                   | 0                                                |
| Max_execution_time_set                        | 0                                                |
| Max_execution_time_set_failed                 | 0                                                |
| Max_used_connections                          | 3                                                |
| Max_used_connections_time                     | 2018-06-24 23:43:48                              |
| Not_flushed_delayed_rows                      | 0                                                |
| Ongoing_anonymous_transaction_count           | 0                                                |
| Open_files                                    | 229                                              |
| Open_streams                                  | 0                                                |
| Open_table_definitions                        | 206                                              |
| Open_tables                                   | 786                                              |
| Opened_files                                  | 502                                              |
| Opened_table_definitions                      | 208                                              |
| Opened_tables                                 | 817                                              |
| Performance_schema_accounts_lost              | 0                                                |
| Performance_schema_cond_classes_lost          | 0                                                |
| Performance_schema_cond_instances_lost        | 0                                                |
| Performance_schema_digest_lost                | 0                                                |
| Performance_schema_file_classes_lost          | 0                                                |
| Performance_schema_file_handles_lost          | 0                                                |
| Performance_schema_file_instances_lost        | 0                                                |
| Performance_schema_hosts_lost                 | 0                                                |
| Performance_schema_index_stat_lost            | 0                                                |
| Performance_schema_locker_lost                | 0                                                |
| Performance_schema_memory_classes_lost        | 0                                                |
| Performance_schema_metadata_lock_lost         | 0                                                |
| Performance_schema_mutex_classes_lost         | 0                                                |
| Performance_schema_mutex_instances_lost       | 0                                                |
| Performance_schema_nested_statement_lost      | 0                                                |
| Performance_schema_prepared_statements_lost   | 0                                                |
| Performance_schema_program_lost               | 0                                                |
| Performance_schema_rwlock_classes_lost        | 0                                                |
| Performance_schema_rwlock_instances_lost      | 0                                                |
| Performance_schema_session_connect_attrs_lost | 0                                                |
| Performance_schema_socket_classes_lost        | 0                                                |
| Performance_schema_socket_instances_lost      | 0                                                |
| Performance_schema_stage_classes_lost         | 0                                                |
| Performance_schema_statement_classes_lost     | 0                                                |
| Performance_schema_table_handles_lost         | 0                                                |
| Performance_schema_table_instances_lost       | 0                                                |
| Performance_schema_table_lock_stat_lost       | 0                                                |
| Performance_schema_thread_classes_lost        | 0                                                |
| Performance_schema_thread_instances_lost      | 0                                                |
| Performance_schema_users_lost                 | 0                                                |
| Prepared_stmt_count                           | 0                                                |
| Qcache_free_blocks                            | 1                                                |
| Qcache_free_memory                            | 16760152                                         |
| Qcache_hits                                   | 0                                                |
| Qcache_inserts                                | 0                                                |
| Qcache_lowmem_prunes                          | 0                                                |
| Qcache_not_cached                             | 16355                                            |
| Qcache_queries_in_cache                       | 0                                                |
| Qcache_total_blocks                           | 1                                                |
| Queries                                       | 19164                                            |
| Questions                                     | 19155                                            |
| Select_full_join                              | 0                                                |
| Select_full_range_join                        | 0                                                |
| Select_range                                  | 2677                                             |
| Select_range_check                            | 0                                                |
| Select_scan                                   | 2098                                             |
| Slave_open_temp_tables                        | 0                                                |
| Slow_launch_threads                           | 0                                                |
| Slow_queries                                  | 0                                                |
| Sort_merge_passes                             | 12                                               |
| Sort_range                                    | 4859                                             |
| Sort_rows                                     | 244452                                           |
| Sort_scan                                     | 854                                              |
| Ssl_accept_renegotiates                       | 0                                                |
| Ssl_accepts                                   | 0                                                |
| Ssl_callback_cache_hits                       | 0                                                |
| Ssl_cipher                                    |                                                  |
| Ssl_cipher_list                               |                                                  |
| Ssl_client_connects                           | 0                                                |
| Ssl_connect_renegotiates                      | 0                                                |
| Ssl_ctx_verify_depth                          | 0                                                |
| Ssl_ctx_verify_mode                           | 0                                                |
| Ssl_default_timeout                           | 0                                                |
| Ssl_finished_accepts                          | 0                                                |
| Ssl_finished_connects                         | 0                                                |
| Ssl_server_not_after                          |                                                  |
| Ssl_server_not_before                         |                                                  |
| Ssl_session_cache_hits                        | 0                                                |
| Ssl_session_cache_misses                      | 0                                                |
| Ssl_session_cache_mode                        | NONE                                             |
| Ssl_session_cache_overflows                   | 0                                                |
| Ssl_session_cache_size                        | 0                                                |
| Ssl_session_cache_timeouts                    | 0                                                |
| Ssl_sessions_reused                           | 0                                                |
| Ssl_used_session_cache_entries                | 0                                                |
| Ssl_verify_depth                              | 0                                                |
| Ssl_verify_mode                               | 0                                                |
| Ssl_version                                   |                                                  |
| Table_locks_immediate                         | 11962                                            |
| Table_locks_waited                            | 0                                                |
| Table_open_cache_hits                         | 19395                                            |
| Table_open_cache_misses                       | 817                                              |
| Table_open_cache_overflows                    | 12                                               |
| Tc_log_max_pages_used                         | 0                                                |
| Tc_log_page_size                              | 0                                                |
| Tc_log_page_waits                             | 0                                                |
| Threads_cached                                | 2                                                |
| Threads_connected                             | 1                                                |
| Threads_created                               | 3                                                |
| Threads_running                               | 1                                                |
| Uptime                                        | 2944                                             |
| Uptime_since_flush_status                     | 2944                                             |
+-----------------------------------------------+--------------------------------------------------+
353 rows in set (0.00 sec)

Read more on SHOW GLOBAL STATUS here. Read more on the values here.

I can see NO major errors here (possibly due to UpCloud's awesome disk IO) so I won't be making memory tweaks in MySQL. Sign up using this link and get $25 credit free on UpCloud and see for yourself how fast they are.

Configure Ubuntu System Memory Usage

Edit /etc/sysctl.conf

Add the following to allow cached data to sit in RAM longer:

vm.vfs_cache_pressure=50
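The edit to /etc/sysctl.conf only takes effect after a reload or reboot. A minimal sketch of applying and verifying the value immediately (assumes root/sudo on Linux):

```shell
# Apply the new value to the running kernel without a reboot:
sudo sysctl -w vm.vfs_cache_pressure=50

# Or reload all settings from /etc/sysctl.conf:
sudo sysctl -p

# Confirm the value the kernel is actually using:
cat /proc/sys/vm/vfs_cache_pressure
```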

Snip from: https://www.kernel.org/doc/Documentation/sysctl/vm.txt

This percentage value controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure and this can easily lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have negative performance impact. Reclaim code needs to take various locks to find freeable directory and inode objects. With vfs_cache_pressure=1000, it will look for ten times more freeable objects than there are.

Read these pages here and here regarding setting MySQL memory.

Reboot

shutdown -r now

Resize the disk

The 2x CPU, 4GB memory plan comes with an 80GB storage allowance. My disk is currently 50GB, and I will resize it soon following this guide.

Upgrade disk from 50GB to 80GB soon

Quick Benchmark

I used loader.io to load 500 users to access my site in 1 minute.

HTOP showing 2x busy CPUs running at 60%

The benchmark worked with no errors.

Loader.io Success with 500 concurrent users

This benchmark was performed with no Cloudflare caching. I should get Cloudflare caching working again to lower the average response time. I loaded my website manually in Google Chrome while loader.io threw 500 users at my site and it loaded very fast.

Conclusion

After a few days, I checked my memory logs and there were no low memory triggers (just normal internal memory management triggers). Ubuntu was happier.

No low memory triggers

This graph was taken before I set “vm.vfs_cache_pressure” so I will update this graph in a few days.

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

1.0 Initial Post


How to use the UpCloud API to manage your UpCloud servers

June 17, 2018 by Simon

How to use the UpCloud API to manage your UpCloud servers.

If you have not read my previous posts: I have now moved my blog to the awesome UpCloud host. Sign up using this link to get $25 free credit.

I recently compared Digital Ocean, Vultr and UpCloud disk IO, and UpCloud came out on top by a long way (read the blog post here).

Here is my blog post on moving from Vultr to UpCloud.

Spoiler: UpCloud performance is great.

UpCloud Site Speed in GTmetrix

I have never had an UpCloud page load take longer than 2 seconds since moving.

UpCloud API

UpCloud has an API that we can opt in to using to manage our servers. Read the official UpCloud API documentation here.

The API allows you to control:

  • Accounts
  • Pricing
  • Zones
  • Timezones
  • Plans
  • Servers
  • Storages
  • IP-Addresses
  • Firewall
  • Tags
  • etc

Create a sub-account to query the API

You should create a new user account (in the UpCloud dashboard) just for API access. I created two accounts, one for use on my server and one for my home laptop (and set the limiting IP(s) that can access each).

Create a Sub Account for API Access

Login to your UpCloud account (create an account here and get $25 free credit),

  1. Click My Accounts,
  2. Click User Accounts,
  3. Click Change on your user and enable API connections.
  4. TIP: Set up an IP rule to limit access to your API for security (I set up a VPN to get a static IP on my dynamic IP Internet connection at home).
  5. Save the changes

Enable API Connections

TIP: Lockdown the account to have the minimum permissions required.

e.g.

  • Disable access to the control panel (Untick).
  • Allow API Connections (Tick) and specify an IP
  • Disable access to billing contact (Untick).
  • Disable access to billing section in the control panel (Untick).
  • Disable allowing of emails to billing contact (Untick).
  • Allow or Remove access to all servers (or manually add access to desired servers)
  • Allow or Remove access to modify storage (or manually allow or remove access to desired storage)
  • etc

Lock down the account to the minimum needed

Save the account.

Now let’s make our first API call

I use OSX and the awesome Paw API testing tool from https://paw.cloud (this is not a plug; they are awesome). Postman is another popular API testing tool. Any good programming language or CLI will let you send API requests.

First, let’s prepare the authorization string (a Base64-encoded combination of your username and password); read more here.

  1. Head over to https://www.base64encode.org/
  2. Click the Encode tab
  3. Add your “username:password” (without the quotes).
  4. Click Encode

A Base64 string will be output 🙂

e.g > eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk

FYI: you can also encode and decode Base64 from the Ubuntu command line.

Encode Base64 from the CLI Sample

echo -n 'yourapiusername:yoursupersecurepassword' | base64
eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk

Decode Base64 from the CLI Sample

echo `echo eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk | base64 --decode`
yourapiusername:yoursupersecurepassword

Now we can add an “Authorization Basic” token to the API request in Paw.

Authorization Header added with my base64 token.

A quick test of the UpCloud Prices API endpoint https://api.upcloud.com/1.2/price reveals the API is working.

Add Authorization Token

I can now see a full breakdown of my service prices in JSON 🙂
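If you prefer the command line over Paw, the same pricing request can be sketched with curl; its -u flag takes username:password and builds the Base64 "Authorization: Basic" header for you (placeholder credentials from earlier shown):

```shell
# Fetch the UpCloud price list (replace the placeholder credentials with
# your API sub-account's username and password).
# curl -u Base64-encodes "username:password" into the Authorization header.
curl -s -u 'yourapiusername:yoursupersecurepassword' \
  https://api.upcloud.com/1.2/price
```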

Query My Account

OK, let’s see how much credit I have left by querying https://api.upcloud.com/1.2/account. I duplicated the request in Paw and changed the URL, but no data was returned.

I had to enable “Access to Billing section in Control Panel” for the user before this data was returned from the API (which makes sense).

> HTTP/1.1 200 OK

Query (GET)

GET /1.2/account HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic *******************************************

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:23:32 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 91
Server: Apache

{
   "account" : {
      "credits" : 2500.00,
      "username" : "yourapiusername"
   }
}

“2500.00” = credits in cents ($25.00)
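The cents-to-dollars conversion can be checked quickly from the shell; this awk one-liner uses the credits value from the response above:

```shell
# Convert the API's "credits" field (reported in cents) to dollars.
echo 2500.00 | awk '{printf "$%.2f\n", $1/100}'
# prints $25.00
```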

Query All of Your Servers

OK, let’s get server information by querying https://api.upcloud.com/1.2/server

Query (GET)

GET /1.2/server HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:32:22 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 1154
Server: Apache

{
   "servers" : {
      "server" : [
         {
            "core_number" : "1",
            "hostname" : "server1nameredacted.com",
            "license" : 0,
            "memory_amount" : "2048",
            "plan" : "1xCPU-2GB",
            "plan_ipv4_bytes" : "3472464313",
            "plan_ipv6_bytes" : "166293599",
            "state" : "started",
            "tags" : {
               "tag" : [
                  "tag1"
               ]
            },
            "title" : "server1nameredacted.com",
            "uuid" : "########-####-####-####-############",
            "zone" : "us-chi1"
         },
         {
            "core_number" : "1",
            "hostname" : "server2nameredacted.com",
            "license" : 0,
            "memory_amount" : "1024",
            "plan" : "1xCPU-1GB",
            "plan_ipv4_bytes" : "198911",
            "plan_ipv6_bytes" : "19742",
            "state" : "started",
            "tags" : {
               "tag" : [
                  "tag2"
               ]
            },
            "title" : "server1nameredacted.com",
            "uuid" : "########-####-####-####-############",
            "zone" : "us-chi1"
         }
      ]
   }
}
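The same server listing can be fetched with curl; the Base64 token below is the placeholder one encoded earlier, and python3's stdlib json.tool simply pretty-prints the response:

```shell
# List all servers the API account can see (swap in your real Base64 token).
curl -s -H 'Authorization: Basic eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk' \
  https://api.upcloud.com/1.2/server | python3 -m json.tool
```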

Query Server Information

I have redacted the UUIDs for my servers, but once you know a server’s UUID you can query it directly by hitting https://api.upcloud.com/1.2/server/########-####-####-####-############

Query (GET)

GET /1.2/server/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:45:14 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 1656
Server: Apache

{
   "server" : {
      "boot_order" : "cdrom,disk",
      "core_number" : "1",
      "firewall" : "on",
      "host" : redacted,
      "hostname" : "server1nameredacted.com",
      "ip_addresses" : {
         "ip_address" : [
            {
               "access" : "private",
               "address" : "##.#.#.###",
               "family" : "IPv4"
            },
            {
               "access" : "public",
               "address" : "###.###.###.###",
               "family" : "IPv4",
               "part_of_plan" : "yes"
            },
            {
               "access" : "public",
               "address" : "####:####:####:####:####:####:########",
               "family" : "IPv6"
            }
         ]
      },
      "license" : 0,
      "memory_amount" : "2048",
      "nic_model" : "virtio",
      "plan" : "1xCPU-2GB",
      "plan_ipv4_bytes" : "3519033266",
      "plan_ipv6_bytes" : "168200052",
      "state" : "started",
      "storage_devices" : {
         "storage_device" : [
            {
               "address" : "virtio:0",
               "boot_disk" : "0",
               "part_of_plan" : "yes",
               "storage" : "########-####-####-####-############",
               "storage_size" : 50,
               "storage_title" : "system",
               "type" : "disk"
            }
         ]
      },
      "tags" : {
         "tag" : [
            "fearby"
         ]
      },
      "timezone" : "Australia/Sydney",
      "title" : "server1nameredacted.com",
      "uuid" : "########-####-####-####-############",
      "video_model" : "cirrus",
      "vnc" : "off",
      "vnc_password" : "#########################",
      "zone" : "us-chi1"
   }
}

The server's name, IPv4 and IPv6 addresses, CPU count, memory, disk size and UUIDs are all visible 🙂

Surprisingly, the VNC password is visible (it enables access to the root console).

TIP: Ensure your API account is safe and secure.

Query Storage Information

Now let's query the storage with the UUID from above by querying https://api.upcloud.com/1.2/storage/########-####-####-####-############

Query (GET)

GET /1.2/storage/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:53:36 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 559
Server: Apache

{
   "storage" : {
      "access" : "private",
      "backup_rule" : {},
      "backups" : {
         "backup" : [
            "########-####-####-####-############"
         ]
      },
      "license" : 0,
      "part_of_plan" : "yes",
      "servers" : {
         "server" : [
            "########-####-####-####-############"
         ]
      },
      "size" : 50,
      "state" : "online",
      "tier" : "maxiops",
      "title" : "system",
      "type" : "normal",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

I can see information about the storage’s assigned server and backups 🙂

Query Backup Information

Backup storage can be queried with the same storage API endpoint https://api.upcloud.com/1.2/storage/########-####-####-####-############

Query (GET)

GET /1.2/storage/014fd483-ea90-4055-b445-bf2011951999 HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 05:01:11 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 412
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-16T04:47:56Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "online",
      "title" : "On-Demand Backup",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Rename Backup

One thing I would like to be able to do is rename on-demand backups in the UpCloud dashboard (this is not a feature yet), but I can rename manual backups via the API 🙂

Boring “On-Demand Backup” label.

Renaming backups is not possible in the GUI

I tried sending JSON to https://api.upcloud.com/1.2/storage/########-####-####-####-############ to rename a backup but kept getting an error.

JSON

{
   "storage" : {
      "title" : "Latest manual backup , Working NGINX, PHP, MySQL w Tweaks",
      "size" : "50"
   }
}

Result

"error_code" : "CONTENT_TYPE_INVALID",
"error_message" : "The Content-Type header has an invalid value."

I googled and found an old manual for UpCloud's API (official support here).

I added these missing Content-Type and Content-Length headers (108 was the length in characters of the payload):

Content-Type: application/json; Charset=UTF-8
Content-Length: 108

Still no go?

I think the content-length value is wrong, more here.

I fixed it: it turned out the semicolon and charset parameter in my Content-Type value were the problem. The JSON RFC assumes JSON is always UTF-8 encoded, so no charset parameter is defined for application/json (more here).

This Fails

Content-Type: application/json; charset=utf-8

This Works

Content-Type: application/json

Now I can rename my backup (storage). I manually calculated the byte length of the JSON payload and added a matching “Content-Length” header.
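For a raw request the header has to match the exact bytes sent (most clients, including Paw and curl, will normally set Content-Length for you). A quick shell sketch of the calculation; note the compact single-line payload below is 93 bytes, so the value will differ if the payload is pretty-printed:

```shell
# Compute the exact byte length of the rename payload for the Content-Length header.
PAYLOAD='{"storage":{"size":"50","title":"Latest manual backup , Working NGINX, PHP, MySQL w Tweaks"}}'
LEN=$(printf '%s' "$PAYLOAD" | wc -c | tr -d ' ')   # tr strips padding some wc builds add
echo "Content-Type: application/json"
echo "Content-Length: $LEN"
```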

Query (PUT)

PUT /1.2/storage/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Content-Type: application/json
Content-Length: 113
Authorization: Basic ##############base64hash##############

{"storage":{"size":"50","title":"Latest manual backup , Working NGINX, PHP, MySQL w Tweaks"}}

Output

HTTP/1.1 202 ACCEPTED
Date: Sun, 17 Jun 2018 05:47:02 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 453
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-16T04:47:56Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "online",
      "title" : "Latest manual backup , Working NGINX, PHP, MySQL w Tweaks",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Success 🙂

Backup Renamed

Create a Backup

Backups can be created by appending “/backup” to the end of the storage endpoint URL and sending a POST.
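With curl the Content-Length bookkeeping disappears, since curl computes it from the -d body; only Content-Type still has to be set by hand. A sketch with placeholder credentials and UUID:

```shell
AUTH=$(printf '%s' 'apiuser:apipassword' | base64)   # hypothetical credentials
UUID='00000000-0000-0000-0000-000000000000'          # placeholder storage UUID
PAYLOAD='{"storage":{"title":"Sunday 17th Latest backup , Working NGINX, PHP, MySQL w Tweaks"}}'

# Needs network access and real credentials, so left commented out:
# curl -X POST -H "Authorization: Basic $AUTH" -H 'Content-Type: application/json' \
#      -d "$PAYLOAD" "https://api.upcloud.com/1.2/storage/$UUID/backup"
printf '%s' "$PAYLOAD" | wc -c   # the byte count curl would send as Content-Length
```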

Query (POST)

POST /1.2/storage/########-####-####-####-############/backup HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Content-Type: application/json
Content-Length: 100
Authorization: Basic ##############base64hash##############

{
  "storage": {
    "title": "Sunday 17th Latest backup , Working NGINX, PHP, MySQL w Tweaks"
  }
}

Output

HTTP/1.1 201 CREATED
Date: Sun, 17 Jun 2018 06:17:35 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 487
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-17T06:17:35Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "progress" : "0",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "maintenance",
      "title" : "Sunday 17th Latest backup , Working NGINX, PHP, MySQL w Tweaks",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Success (UpCloud GUI)

Conclusion

UpCloud does have great API docs.

I can easily integrate this into bash scripts to manage my servers via the API, and later into a Java app for managing servers.

Paw can export cURL commands, letting me copy working API calls for use in bash scripts 🙂
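As a starting point for those bash scripts, a tiny wrapper can keep the credentials in environment variables. upcloud_get is a hypothetical helper name, not part of any UpCloud tooling; it prints the URL it would hit and only calls curl when DRY_RUN=0:

```shell
# Hypothetical helper; reads UPCLOUD_USER / UPCLOUD_PASS from the environment.
upcloud_get() {
  url="https://api.upcloud.com$1"
  echo "$url"
  # Only perform the real request when explicitly asked to:
  [ "${DRY_RUN:-1}" = "0" ] && curl -s -u "$UPCLOUD_USER:$UPCLOUD_PASS" "$url"
  return 0
}

upcloud_get /1.2/server   # prints https://api.upcloud.com/1.2/server
```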

More to come

  1. BASH Script to Deploy and configure a server on UpCloud via Initialization scripts (or manual) (1 week)
  2. JAVA App to manage your server (3 months)

If you are signing up for UpCloud please consider using my referral code and get $25 credit for free.

Read my setup guide here.

https://www.upcloud.com/register/?promo=D84793

I hope this guide helps someone.

Ask a question or recommend an article


Revision History

V1.1 updated typo

v1.0 Initial Post.

Filed Under: API, Backup, Cloud, Linux, Networking, Restore, UpCloud, VM Tagged With: api, How, Manage, servers, the, to, UpCloud, use, your
