



HomePi – Raspberry PI powered touch screen showing information from house-wide sensors

March 14, 2022 by Simon

This post is a work in progress (14/3/2022, v0.9.63 – PCBs v0.2 designed and ordered).

Summary

After watching this video from Jeff Geerling (demonstrating how to build an Air Quality Sensor) I decided to make two. But why not build something bigger?

I want to make a Raspberry Pi server with a touch screen to receive data from a dozen or so WeMos sensors that I will build.

The Plan

Below is a rough plan of what I am building

In a nutshell, it will be 20x WeMos sensors (plus a weather station and CO2 sensors) recording data and talking to an API that saves to MySQL, with MySQL then being read by a webpage and the touch screen panel.

Diagram of the planned system.

I ordered all the parts from Amazon, BangGood, AliExpress, eBay, Core Electronics and Kogan.

Fresh Bullseye Install (Buster upgrade failed)

On 21/11/2021 I tried to manually update Buster to Bullseye (without taking a backup first (bad idea)). I followed this guide to reinstall Raspbian from scratch (this time with Bullseye).

Storage Type

Before I begin I need to decide what storage media to use on the Raspberry Pi. I hate how unreliable and slow MicroSD cards are. I tried using an old 128GB SATA SSD, a 1TB magnetic hard drive, a SATA M.2 SSD and an NVMe M.2 SSD in a USB caddy.

I decided to use a spare 250GB SATA M.2 solid-state drive from my son’s PC in a Geekworm X862 SATA M.2 expansion board.

With this board I can bolt the M.2 solid-state drive into an expansion board under the Pi and power it from a Raspberry Pi USB port.

Nice and tidy

I zip-tied a fan to the side of the boards to add a little extra airflow over the solid state drive

32Bit, 64Bit, Linux or Windows

Before I begin I set up Raspbian on an empty MicroSD card (just to boot it up and flash the firmware to the latest version). This is very easy and documented elsewhere. I needed the latest firmware to ensure booting from the USB drive (rather than the MicroSD card) was working.

I ran rpi-update and flashed the latest firmware onto my Raspberry Pi. There is a good write-up here.

When my Raspberry Pi had the latest firmware I used the Raspberry Pi Imager to install the 32 Bit Raspberry Pi OS.

I do have an 8GB Raspberry Pi 4 B, and 64-bit operating systems do exist, but I stuck with 32-bit for compatibility.

Ubuntu 64bit for Raspberry Pi Links

  • Install Ubuntu on a Raspberry Pi | Ubuntu
    • Server Setup Links
      • How to install Ubuntu Server on your Raspberry Pi | Ubuntu
    • Desktop Setup Links
      • How to install Ubuntu Desktop on Raspberry Pi 4 | Ubuntu

Windows 10 for Raspberry Pi Links
https://docs.microsoft.com/en-us/windows/iot-core/tutorials/quickstarter/prototypeboards
https://docs.microsoft.com/en-us/answers/questions/492917/how-to-install-windows-10-iot-core-on-raspberry-pi.html
https://docs.microsoft.com/en-us/windows/iot/iot-enterprise/getting_started

Windows 11 for Raspberry Pi Links
https://www.youtube.com/user/leepspvideo
https://www.youtube.com/watch?v=WqFr56oohCE
https://www.worproject.ml

Setting up the Raspberry Pi Server

These are the steps I used to set up my Pi.

Dedicated IP

Before I began I ran ifconfig on my Pi to obtain my Raspberry Pi’s wireless card MAC address. I logged into my router and set up a dedicated IP (192.168.0.50); this way I have an IP address that remains the same.

Hostname

I set my hostname by editing these files

sudo nano /etc/hosts
sudo nano /etc/hostname
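
As an example (assuming the Pi is to be called pihome, a placeholder name used here for illustration), the two files would end up containing something like this:

# /etc/hostname
pihome

# /etc/hosts (the 127.0.1.1 entry should match the new hostname)
127.0.0.1       localhost
127.0.1.1       pihome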

I verified my hostname with this command

hostname

I verified my IP address with this command

hostname -I

Samba Share

I setup the Samba service to allow me to copy files to and from the Pi

sudo apt-get install samba samba-common-bin
sudo apt-get update

I made a folder to share files

 mkdir ~/share

I edited the Samba config file

sudo nano /etc/samba/smb.conf

In the config file I set my workgroup settings


workgroup = Hyrule
wins support = yes

I defined a share at the bottom of the config file (and saved)

[PiShare]
comment=Raspberry Pi Share
path=/home/pi/share
browseable=Yes
writeable=Yes
only guest=no
create mask=0777
directory mask=0777
public=no

I set a smb password

sudo smbpasswd -a pi
New SMB password: ********
Retype new SMB password: ********
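
If the share does not show up straight away, restarting the Samba services so the new config and password take effect usually helps (smbd and nmbd are the service names on Raspbian):

sudo systemctl restart smbd nmbd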

I tested the share from a Windows PC

And the share is accessible on the Raspberry Pi

Great, now I can share files with drag and drop (instead of via SCP)

Mono

I know how to code C# Windows executables; I have 25 years of experience. I do not want to learn Java or Python to code a GUI application for a touch screen if possible.

I set up Mono from Home | Mono (mono-project.com) to be able to run Windows C# EXEs on Raspbian

sudo apt install apt-transport-https dirmngr gnupg ca-certificates

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb https://download.mono-project.com/repo/debian stable-raspbianbuster main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list

sudo apt update

sudo apt install mono-devel

I copied an EXE I wrote in C# on Windows and ran it with Mono

sudo mono ~/HelloWorld.exe
Exe Test OK

This worked.

Nginx Web Server

I installed Nginx and configured it

sudo apt-get install nginx

I created a /www folder for nginx

sudo mkdir /www

I created a place-holder file in the www root

sudo nano /www/index.html

I set permissions to allow Nginx to access /www

sudo chown -R www-data:www-data /www

I edited the NginX config as required

sudo nano /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/nginx.conf 
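
As a reference point, a minimal server block that serves static files from /www looks roughly like this (a sketch only; my real config has more in it):

server {
    listen 80 default_server;
    server_name _;

    root /www;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}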

I tested and reloaded the nginx config


sudo nginx -t
sudo nginx -s reload

I started NginX

sudo systemctl start nginx

I tested nginx in a web browser

NodeJS/NPM

I installed NodeJS

sudo apt update
sudo apt install nodejs npm -y

I verified Node was installed

nodejs --version
> v12.22.5

PHP

I installed PHP

sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg

echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update

sudo apt install -y php8.0-common php8.0-cli php8.0-xml

I verified PHP with this command

php --version

> PHP 8.0.13 (cli) (built: Nov 19 2021 06:40:53) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.13, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.13, Copyright (c), by Zend Technologies

I installed PHP-FPM

sudo apt-get install php8.0-fpm

I verified the PHP FPM sock was available before adding it to the NGINX Config

sudo ls /var/run/php*/**.sock
> /var/run/php/php8.0-fpm.sock  /var/run/php/php-fpm.sock
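
A minimal Nginx location block that hands .php requests to that socket (added inside the server block) looks roughly like this; the snippets/fastcgi-php.conf include ships with Debian's Nginx package:

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
}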

I reviewed PHP Settings

sudo nano /etc/php/8.0/cli/php.ini 
sudo nano /etc/php/8.0/fpm/php.ini

I created a /www/ppp.php file with these contents

<?php
  phpinfo(); // all info
  // module info, phpinfo(8) identical
  phpinfo(INFO_MODULES);
?>

PHP is working

PHP Test OK

I changed these php.ini settings (fine for local development).

max_input_vars = 1000
memory_limit = 1024M
upload_max_filesize = 20M
post_max_size = 20M
display_errors = on

MySQL Database

I installed MariaDB

sudo apt install mariadb-server

I updated my pi password

passwd

I ran the Secure MariaDB Program

sudo mysql_secure_installation

After working through each setting, I ran mysql as root to test MySQL.
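
For example, a quick sanity check (nothing fancy):

sudo mysql -u root -p
MariaDB [(none)]> SHOW DATABASES;
MariaDB [(none)]> EXIT;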

PHPMyAdmin

I installed phpmyadmin to be able to edit mysql databases via the web

I followed this guide to setup phpmyadmin via lighthttp and then via nginx

I then logged into MySQL, set user permissions, created a test database and changed settings as required.

NginX to NodeJS API Proxy

I edited my Nginx config to create a Node API proxy (a sketch of what such a block can look like is below)
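
This is only a sketch of what such a proxy block can look like (the upstream port 3000 is an assumption for illustration; use whatever port the Node API actually listens on):

location /api/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}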

Test Webpage/API

Todo

I installed PM2, the NodeJS process manager

sudo npm install -g pm2 

Node apps can be started as a service from cli

pm2 start api_v1.js

PM2 status

pm2 status

You can delete node apps from PM2 (if desired)

pm2 delete api_v1.js
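
To have PM2 (and the apps it manages) come back after a reboot, the usual commands are these (pm2 startup prints a command that needs to be run with sudo):

pm2 startup
pm2 save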

Sending Email from CLI

I set up sendemail to allow emails to be sent from the CLI with these commands

 sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail  

I logged into my GSuite account and setup an alias and app password to use.

Now I can send emails from the CLI

sudo sendemail -f [email protected] -t [email protected] -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp **************

I added this to a Bash script (“/Scripts/Up.sh”) and added an event to send an email every 6 hours
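
Assuming cron is doing the scheduling, the entry for every 6 hours would look something like this (added via crontab -e):

0 */6 * * * /Scripts/Up.sh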

7 Inch Full View LCD IPS Touch Screen 1024*600 

I purchased a 7″ touch screen from Banggood. I got a heads-up from the following video.

I plugged the touch USB cable into my Pi’s USB 3 port. I plugged the HDMI adapter into the screen and the Pi (with the supplied mini plug).

I turned on the Pi and it works and looks amazing.

This is displaying a demo C# app I wrote. It’s running via mono.

I did have to add the following to config.txt to get the native resolution. The manual on the supplied CD was helpful (but I did not check it at first).

max_usb_current=1
hdmi_force_hotplug=1
config_hdmi_boost=7
hdmi_group=2
hdmi_mode=1
hdmi_mode=87 
hdmi_drive=1
display_rotate=0
hdmi_cvt 1024 600 60 6 0 0 0
framebuffer_width=1024
framebuffer_height=600

PiJuice UPS HAT

I purchased an external LiPo UPS to keep the Raspberry Pi fed with power (even when the power goes out)

The stock battery was not charged and was quite weak when I first installed it. Do fully charge the battery before testing.

PiJuice

Stock Battery = 3.7V @ 1820mAh

Below are screenshots of the PiJuice setup.

PiJuice HAT Settings

PiJuice General Settings

General Settings

There is an option to set events for every button

Extensive screen to set button events

LED Status color and function

Set LED status and color

IO for the PiJuice Input. I will sort this out later.

PiJuice IO settings

A new firmware was available. I had v1.4

Update firmware screen

I updated the firmware

Firmware update worked

Firmware flash success

Battery settings

Battery Settings

PiJuice Button Config

Button config

Wake Up Alarm time and RTC

Clock Settings

System Settings

System Settings

System Events

system settings page

User Scripts

Define user scripts

I ordered a bigger battery as my screen, M.2 drive, fan and UPS consume close to the maximum the stock battery can deliver.

10,000mAh battery

After talking with the seller of the battery, they advised I set up the 10,000mAh battery using the 1,000mAh battery profile in PiJuice but change the Capacity and Charge Current:

  • Capacity = 10,000mAh
  • Charge current = 850mA

And for battery longevity, set the

  • Cutoff voltage: 3,250mV

Final Battery Setup

Battery settings based on the 1,000mAh battery profile: Capacity 10,000mAh, Charge current 850mA and Cutoff voltage 3,250mV

WeMos Setup

I ordered 20x WeMos D1 Mini Pro (16Mbit) clones to run the sensors. I soldered the legs on in batches of 8.

WeMos installed on breadboards ready to solder pins

Soldering was not perfect but worked

20x soldered wemos

Soldering is not perfect but each joint was triple tested.

Close up of soldered joints

I only had one dead WeMos.

I will set up the final units on ProtoBoards.

Protoboard

20x WeMos ready for service with the external aerials glued down. The hot glue was a bad idea; I had to rotate a resistor under the hot glue.

20x wemos ready.

Revision

I ended up reordering the WeMos Minis and soldering on female headers so I can add OLED screens

air mon enclosure

I added female headers to allow an OLED screen

new wemos

I purchased a microscope to be able to see better.

microscope

Each sensor will have a mini OLED screen.

mini oled screen

0.66″ OLED Screens

oled screen

I designed a PCB in Photoshop and had it turned into a PCB via https://www.fiverr.com/syedzamin12. I ordered 30x boards from https://jlcpcb.com/

Custom PCB

The PCBs fit inside the new enclosure perfectly

I am waiting for smaller screws to arrive.

PCB v0.2

I decided to design a board with 2 switches (and a light sensor to turn the screen off at night)

Breadboard Prototype

Prototype

I spoke to https://www.fiverr.com/syedzamin12 and within 24 hours a PCB was designed

PCB layers

This time I will get a purple PCB from JLCPCB and add a dinosaur for my son

Top PCB View

TOP PCB View

Back PCB View

Back PCB View

3D PCB View

3D PCB view

JLCPCB made the board in 3 days

3 days

Now I need to wait a few weeks for the new PCB to arrive

Also, I finished the firmware for the v0.2 PCB

I ordered some switches

I also ordered some reset buttons

I might add a larger 0.96″ OLED screen

Wifi and Static IP Test

I uploaded a sketch to each WeMos and tested the WiFi and the static IP that was given.

Sketch

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>


#define SERVER_IP "192.168.0.50"

#ifndef STASSID
#define STASSID "wifi_ssid_name"
#define STAPSK  "************"
#endif

void setup() {

  Serial.begin(115200);

  Serial.println();
  Serial.println();
  Serial.println();

  WiFi.begin(STASSID, STAPSK);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected! IP address: ");
  Serial.println(WiFi.localIP());

}

void loop() {
  // wait for WiFi connection
  if ((WiFi.status() == WL_CONNECTED)) {

    WiFiClient client;
    HTTPClient http;

    Serial.print("[HTTP] begin...\n");
    // configure target server and URL
    http.begin(client, "http://" SERVER_IP "/api/v1/test"); //HTTP
    http.addHeader("Content-Type", "application/json");

    Serial.print("[HTTP] POST...\n");
    // start connection and send HTTP header and body
    int httpCode = http.POST("{\"hello\":\"world\"}");

    // httpCode will be negative on error
    if (httpCode > 0) {
      // HTTP header has been send and Server response header has been handled
      Serial.printf("[HTTP] POST... code: %d\n", httpCode);

      // file found at server
      if (httpCode == HTTP_CODE_OK) {
        const String& payload = http.getString();
        Serial.println("received payload:\n<<");
        Serial.println(payload);
        Serial.println(">>");
      }
    } else {
      Serial.printf("[HTTP] POST... failed, error: %s\n", http.errorToString(httpCode).c_str());
    }

    http.end();
  }

  delay(1000);
}

The WeMos booted, connected to WiFi, set an IP, and tried to POST a request to a URL.

........................................................
Connected! IP address: 192.168.0.51
[HTTP] begin...
[HTTP] POST...
[HTTP] POST... failed, error: connection failed

The POST failed because my Pi API server was off.

Touch Screen Enclosure

I constructed a basic enclosure and screwed the touch screen to it. I need to find a flexible black strip to put around the screen and cover up the gaps.

Wooden box with the screen in it

The touch screen has been screwed in.

Screen screwed in

Over the Air Updating

I followed this guide and now have the WeMos updatable over WiFi.

Basically, I installed the libraries “AsyncHTTPSRequest_Generic”, “AsyncElegantOTA”, “AsyncHTTPRequest_Generic”, “ESPAsyncTCP” and “ESPAsyncWebServer”.

Manage Libraries

A few libraries would not download, so I manually downloaded the code from the GitHub repositories and then extracted them to my Documents\Arduino\libraries folder.

I then opened the example project “AsyncElegantOTA\ESP8266_Async_Demo”

I reviewed the code

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>

const char* ssid = "........";
const char* password = "........";

AsyncWebServer server(80);


void setup(void) {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request) {
    request->send(200, "text/plain", "Hi! I am ESP8266.");
  });

  AsyncElegantOTA.begin(&server);    // Start ElegantOTA
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();
}

I added my WiFi SSID and password, saved the project, compiled the code and wrote it to my WeMos D1 Mini

I added LED Blink Code

void setup(void) {
  ...
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  ...
}
void loop(void) {
 ...
  delay(1000);                      // Wait for a second
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(1000);                      // Wait one second (the built-in LED is active low)
 ...
}

I compiled and tested the code

Now, to get new code changes onto the WeMos D1 Mini via a binary, I edited the code (changed the LED blink speed) and clicked “Export Compiled Binary”

Compile Binary

When the binary compiled I opened the Sketch Folder

Show Sketch folder

I could see a bin file.

Bin File

I loaded http://192.168.0.51/update and selected the bin file.

The new firmware applied.

Flashing

I navigated back to http://192.168.0.51

TIP: Ensure the sketch you flash includes your WiFi details.

Password Protection

I changed the code to add a basic password on access and on OTA updates

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>


//Saved Wifi Credentials (research encryption later, or store in a FRAM module?)
const char* ssid = "your-wifi-ssid";
const char* password = "********";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(80);

void setup(void) {
  Serial.begin(115200);       //Serial Mode (Debug)
    
  WiFi.mode(WIFI_STA);        //Client Mode
  WiFi.begin(ssid, password); //Connect to Wifi
 
  Serial.println("");

  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    request->send(200, "text/plain", "Login Success! ESP8266 #001.");
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);

  
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();

  digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (active low)
  delay(8000);                      // Wait 8 seconds
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(8000);                      // Wait 8 seconds

}

Password prompt for users accessing the device.

Login screen

Password prompt for admin users accessing the device.

admin password protect

Later I will research encrypting the password and storing it on SPIFFS partition or a FRAM memory module.

Adding the DHT22 Sensors

I received my pack of DHT22 sensors (AM2302).

Specifications

  • Operating Voltage: 3.5V to 5.5V
  • Operating current: 0.3mA (measuring) 60uA (standby)
  • Output: Serial data
  • Temperature Range: 0°C to 50°C
  • Humidity Range: 20% to 90%
  • Resolution: Temperature and Humidity both are 16-bit
  • Accuracy: ±1°C and ±1%

I wired it up based on this Adafruit post.

DHT22 Wired Up on a breadboard.

DHT22 and Basic API Working

I will not bore you with hours of coding and debugging, so here is my code that:

  • Allows the WeMos D1 Mini Pro (ESP8266) to connect to WiFi
  • Runs a web server (with stats)
  • Has an admin page for OTA updates
  • Password protects the main web folder and OTA admin page
  • Reads DHT sensor values
  • Has a debug-to-serial toggle
  • Has an LED activity toggle
  • Serializes readings to JSON
  • POSTs DHT22 data to an API on the Raspberry Pi
  • Has a placeholder for API return values
  • Automatically posts data to the API every 10 seconds
  • etc

Here is the work-in-progress ESP8266 code

#include <ESP8266WiFi.h>        // https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266WiFi/src/ESP8266WiFi.h
#include <ESPAsyncTCP.h>        // https://github.com/me-no-dev/ESPAsyncTCP
#include <ESPAsyncWebServer.h>  // https://github.com/me-no-dev/ESPAsyncWebServer
#include <AsyncElegantOTA.h>    // https://github.com/ayushsharma82/AsyncElegantOTA
#include <ArduinoJson.h>        // https://github.com/bblanchon/ArduinoJson
#include "DHT.h"                // https://github.com/adafruit/DHT-sensor-library
                                // Written by ladyada, public domain

//Todo: Add Authentication
//Fyi: https://api.gov.au/standards/national_api_standards/index.html

#include <ESP8266HTTPClient.h>  //POST Client

//Firmware Stats
bool bDEBUG = true;        //true = debug to Serial output
                           //false = no serial output
//Port Number for the Web Server
int WEB_PORT_NUMBER = 1337; 

//Post Sensor Data Delay
int POST_DATA_DELAY = 10000; 

bool bLEDS = true;         //true = Flash LED
                           //false =   NO LED's
//Device Variables
String sDeviceName = "ESP-002";
String sFirmwareVersion = "v0.1.0";
String sFirmwareDate = "27/10/2021 23:00";

String POST_SERVER_IP = "192.168.0.50";
String POST_SERVER_PORT = "";
String POST_ENDPOINT = "/api/v1/test";

//Saved Wifi Credentials (research encryption later, or store in a FRAM module?)
const char* ssid = "your_wifi_ssid";
const char* password = "***************";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(WEB_PORT_NUMBER);    //Feel free to change the port number

//DHT22 Temp Sensor
#define DHTTYPE DHT22   // DHT 22  (AM2302), AM2321
#define DHTPIN 5
DHT dht(DHTPIN, DHTTYPE);

//Common Variables
String thisBoard = ARDUINO_BOARD;
String sHumidity = "";
String sTempC = "";
String sTempF = "";
String sJSON = "{ }";

//DHT Variables
float h;
float t;
float f;
float hif;
float hic;


void setup(void) {

  //Turn On PIN
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  
  //Serial Mode (Debug)
  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);
    delay(100);                       // Wait 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100ms (the built-in LED is active low)
  }

  if (bDEBUG) Serial.begin(115200);
  if (bDEBUG) Serial.println("Serial Begin");

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);
    delay(100);                       // Wait 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100ms (the built-in LED is active low)
  }
  if (bDEBUG) Serial.println("Wifi Setup");
  if (bDEBUG) Serial.println(" - Client Mode");
  
  WiFi.mode(WIFI_STA);        //Client Mode
  
  if (bDEBUG) Serial.print(" - Connecting to Wifi: " + String(ssid));
  WiFi.begin(ssid, password); //Connect to Wifi
 
  if (bDEBUG) Serial.println("");
  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    if (bDEBUG) Serial.print(".");
  }
  if (bDEBUG) Serial.println("");
  if (bDEBUG) Serial.print("- Connected to ");
  if (bDEBUG) Serial.println(ssid);
  
  if (bDEBUG) Serial.print("IP address: ");
  if (bDEBUG) Serial.println(WiFi.localIP());

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);
    delay(100);                       // Wait 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100ms (the built-in LED is active low)
  }

  
  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    
        String sendHtml = "";
        sendHtml = sendHtml + "<html>\n";
        sendHtml = sendHtml + " <head>\n";
        sendHtml = sendHtml + " <title>ESP# 002</title>\n";
        sendHtml = sendHtml + " <meta http-equiv=\"refresh\" content=\"5\";>\n";
        sendHtml = sendHtml + " </head>\n";
        sendHtml = sendHtml + " <body>\n";
        sendHtml = sendHtml + " <h1>ESP# 002</h1>\n";
        sendHtml = sendHtml + " <u2>Debug</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Device Name: " + sDeviceName + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Version: " + sFirmwareVersion + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Date: " + sFirmwareDate + " </li>\n";
        sendHtml = sendHtml + " <li>Board: " + thisBoard + " </li>\n";
        sendHtml = sendHtml + " <li>Auto Refresh Root: On </li>\n";
        sendHtml = sendHtml + " <li>Web Port Number: " + String(WEB_PORT_NUMBER) +" </li>\n";
        sendHtml = sendHtml + " <li>Serial Debug: " + String(bDEBUG) +" </li>\n";
        sendHtml = sendHtml + " <li>Flash LED's Debug: " + String(bLEDS) +" </li>\n";
        sendHtml = sendHtml + " <li>SSID: " + String(ssid) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT TYPE: " + String(DHTTYPE) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT PIN: " + String(DHTPIN) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_DATA_DELAY: " + String(POST_DATA_DELAY) +" </li>\n";

        sendHtml = sendHtml + " <li>POST_SERVER_IP: " + String(POST_SERVER_IP) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_ENDPOINT: " + String(POST_ENDPOINT) +" </li>\n";
        
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <u2>Sensor</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Humidity: " + sHumidity + "% </li>\n";
        sendHtml = sendHtml + " <li>Temp: " + sTempC + "c, " + sTempF + "f. </li>\n";
        sendHtml = sendHtml + " <li>Heat Index: " + String(hic) + "c, " + String(hif) + "f.</li>\n";
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <u2>JSON</h2>";
        
        // Allocate the JSON document Object/Memory
        // Use https://arduinojson.org/v6/assistant to compute the capacity.
        StaticJsonDocument<250> doc;
        //JSON Values     
        doc["Name"] = sDeviceName;
        doc["humidity"] = sHumidity;
        doc["tempc"] = sTempC;
        doc["tempf"] = sTempF;
        doc["heatc"] = String(hic);
        doc["heatf"] = String(hif);
        
        sJSON = "";
        serializeJson(doc, sJSON);
        
        sendHtml = sendHtml + " <ul>" + sJSON + "</ul>\n";
        
        sendHtml = sendHtml + " <u2>Seed</h2>";
        long randNumber = random(100000, 1000000);
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <p>" + String(randNumber) + "</p>\n";
        sendHtml = sendHtml + " </ul>\n";
       
        sendHtml = sendHtml + " </body>\n";
        sendHtml = sendHtml + "</html>\n";
        //Send the HTML   
        request->send(200, "text/html", sendHtml);
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);
  
  server.begin();
  if (bDEBUG) Serial.println("HTTP server started");
 
  if (bDEBUG) Serial.println("Board: " + thisBoard);

  //Setup the DHT22 Object
  dht.begin();
  
}

void loop(void) {

  AsyncElegantOTA.loop();

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);
    delay(100);                       // Wait 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Wait 100ms (the built-in LED is active low)
  }


  //Display Temp and Humidity Data

  h = dht.readHumidity();
  t = dht.readTemperature();
  f = dht.readTemperature(true);

  // Check if any reads failed and exit early (to try again).
  if (isnan(h) || isnan(t) || isnan(f)) {
    if (bDEBUG) Serial.println(F("Failed to read from DHT sensor!"));
    return;
  }
  
  hif = dht.computeHeatIndex(f, h);         // Compute heat index in Fahrenheit (the default)
  hic = dht.computeHeatIndex(t, h, false);  // Compute heat index in Celsius (isFahreheit = false)

  if (bDEBUG) Serial.print(F("Humidity: "));
  if (bDEBUG) Serial.print(h);
  if (bDEBUG) Serial.print(F("%  Temperature: "));
  if (bDEBUG) Serial.print(t);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(f);
  if (bDEBUG) Serial.print(F("°F  Heat index: "));
  if (bDEBUG) Serial.print(hic);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(hif);
  if (bDEBUG) Serial.println(F("°F"));

  //Save for Page Load
  sHumidity = String(h,2);
  sTempC = String(t,2);
  sTempF = String(f,2);

  //Post to Pi API
    // Allocate the JSON document Object/Memory
    // Use https://arduinojson.org/v6/assistant to compute the capacity.
    StaticJsonDocument<250> doc;
    //JSON Values     
    doc["Name"] = sDeviceName;
    doc["humidity"] = sHumidity;
    doc["tempc"] = sTempC;
    doc["tempf"] = sTempF;
    doc["heatc"] = String(hic);
    doc["heatf"] = String(hif);
    
    sJSON = "";
    serializeJson(doc, sJSON);

    //Post to API
    if (bDEBUG) Serial.println(" -> POST TO API: " + sJSON);

   //Test POST
  
    if ((WiFi.status() == WL_CONNECTED)) {
  
      WiFiClient client;
      HTTPClient http;
  
    
      if (bDEBUG) Serial.println(" -> API Endpoint: http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT);
      http.begin(client, "http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT); //HTTP


      if (bDEBUG) Serial.println(" -> addHeader: \"Content-Type\", \"application/json\"");
      http.addHeader("Content-Type", "application/json");
  
      // start connection and send HTTP header and body
      int httpCode = http.POST(sJSON);
      if (bDEBUG) Serial.print("  -> Posted JSON: " + sJSON);
  
      // httpCode will be negative on error
      if (httpCode > 0) {
        // HTTP header has been send and Server response header has been handled

  
        //See https://api.gov.au/standards/national_api_standards/api-response.html 
        // Response from Server
        if (bDEBUG) Serial.println("  <- Return Code: " + httpCode);
                
        //Get the Payload
        const String& payload = http.getString();
          if (bDEBUG) Serial.println("   <- Received Payload:");
          if (bDEBUG) Serial.println(payload);
          if (bDEBUG) Serial.println("   <- Payload (httpcode: 201):");
          

         //Handle the HTTP Code
        if (httpCode == 200) {
          if (bDEBUG) Serial.println("  <- 200: Invalid API Call/Response Code");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 201) {
          if (bDEBUG) Serial.println("  <- 201: The resource was created. The Response Location HTTP header SHOULD be returned to indicate where the newly created resource is accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 202) {
          if (bDEBUG) Serial.println("  <- 202: Is used for asynchronous processing to indicate that the server has accepted the request but the result is not available yet. The Response Location HTTP header may be returned to indicate where the created resource will be accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 400) {
          if (bDEBUG) Serial.println("  <- 400: The server cannot process the request (such as malformed request syntax, size too large, invalid request message framing, or deceptive request routing, invalid values in the request) For example, the API requires a numerical identifier and the client sent a text value instead, the server will return this status code.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 401) {
          if (bDEBUG) Serial.println("  <- 401: The request could not be authenticated.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 403) {
          if (bDEBUG) Serial.println("  <- 403: The request was authenticated but is not authorised to access the resource.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 404) {
          if (bDEBUG) Serial.println("  <- 404: The resource was not found.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 415) {
          if (bDEBUG) Serial.println("  <- 415: This status code indicates that the server refuses to accept the request because the content type specified in the request is not supported by the server");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 422) {
          if (bDEBUG) Serial.println("  <- 422: This status code indicates that the server received the request but it did not fulfil the requirements of the back end. An example is a mandatory field was not provided in the payload.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 500) {
          if (bDEBUG) Serial.println("  <- 500: An internal server error. The response body may contain error messages.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }

        
      } else {
        if (bDEBUG) Serial.println("   <- Unknown Return Code (ERROR): " + httpCode);
        //if (bDEBUG) Serial.printf("    " + http.errorToString(httpCode).c_str());
        
      }

    }

    if (bDEBUG) Serial.print("\n\n");

    delay(POST_DATA_DELAY);
  }

Here is a screenshot of the Arduino IDE Serial Monitor debugging the code

Serial Monitor

Here is a screenshot of the NodeJS API on the Raspberry Pi accepting the POSTed data from the ESP8266

API receiving data

Here is a sneak peek of the code accepting the POSTed data

API Code

The final code will be open sourced.
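
Until the final code is published, here is a rough sketch (my illustration only, not the final code) of what a minimal Express endpoint accepting this JSON could look like; the port 3000 is an assumption that matches the proxy example earlier:

// api_v1.js - illustrative sketch only (not the final open-sourced code)
const express = require('express');
const app = express();

app.use(express.json()); // parse application/json request bodies

// Endpoint the ESP8266 POSTs its DHT22 readings to
app.post('/api/v1/test', (req, res) => {
  console.log('Received from', req.body.Name, req.body);
  // TODO: validate and save req.body to a database here
  res.status(201).json({ data: { received: true } });
});

app.listen(3000, () => console.log('API listening on port 3000'));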

API with 2x sensors (18x more soon)

I built 2 sensors (on Breadboards) to start hitting the API

2 sensors on a breadboard

18 more sensors are ready for action (after I get temporary USB power sorted)

18x Sensors

PiJuice and Battery save the Day

I accidentally ran my Pi for a few hours (to develop the API) before realising the power to the PiJuice was not connected.

The PiJuice worked a treat and supplied the Pi from battery

Battery power was disconnected

I plugged the power back in after 25% of the battery had drained.

Power restored.

Research and Setup TRIM/Defrag on the M.2 SSD

Todo: Research
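
(A likely starting point once I get to it: check whether the drive and USB bridge report discard support, then try a manual trim. These are standard commands; whether TRIM actually works over USB depends on the adapter.)

sudo lsblk --discard
sudo fstrim -v /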

Add a Buzzer to the Raspberry Pi and Connect it to the PiJuice No Power Event

Todo

Wire Up a Speaker to the PiJuice

Todo: Figure out custom scripts and add a piezo speaker to the PiJuice to alert me of issues in future.

Add buttons to the enclosure

Todo

Add email alerts from the system

I logged into Google G Suite (my domain’s email provider) and set up an email alias for my domain “[email protected]”. I added this alias to Gmail (logged in with my G Suite account).

I created an app-specific password in G Suite to allow my Pi to use a dedicated password to access my email.

I installed these packages on the Raspberry Pi

sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail    

I can run this command to send an email to my primary email

sudo sendemail -f [email protected] -t [email protected]_domain.com -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected]_domain.com -xp ********************

The email arrives from the Raspberry Pi

Test Email Screenshot

PiJuice Alerts (email)

I created some Python scripts and configured PiJuice to email me

user scripts

I assigned the scripts to Events

Added functions

Python Script (CronJob) to email the battery level every 6 hours

Todo

Building the Co2/PM2.5 Sensors

Todo: (Waiting for parts)

The AirGradient PCB’s have arrived

Air Gradient PCB's

NodeJS API writing to MySQL/Influx etc

Todo: Save Data to a Database

Set up 20x WeMos External Antennas (DONE, I ordered new factory-rotated resistors)

I assumed the WeMos D1 Mini Pros with external antennas were actually using the external antenna. Wrong.

I have to move 20x resistors (1 per WeMos) to switch over to the external antenna.

This will be fun as I added hot glue over the area where the resistor is to hold down the antenna.

Reading configuration files via SPIFFS

Todo

Power over Ethernet (PoE) (SKIP, will use plain old USB wall plugs)

Todo: Passive PoE

Building a C# GUI for the Touch Panel

Todo (Started coding this)

Todo (Passive POE, 5v, 3.3v)?

Building the enclosures for the sensors (PCB designed and ordered, firmware next)

Custom PCB?

Yes, See above

Backing up the Raspberry Pi M.2 Drive

This is quite easy as the M.2 drive is connected via USB. I shut down the Pi and plugged the M.2 board into my PC.

I then backed up the entire disk to my PC with Acronis software (review here)

I now have a complete backup of my Pi on a remote network share (and my primary pc).

Version History

v0.9.63 – PCB v0.2 designed and ordered.

v0.9.62 – 3/2/2022 Update

v0.9.61 – New Nginx, PHP, MySQL etc

v0.9.60 – Fresh Bullseye install (Buster upgrade failed)

v0.951 Email Code in PiJuice

v0.95 Added Email Code

v0.94 Added Todo Areas.

v0.93 2x Sensors hitting the API, 18x sensors ready, Air Gradient

v0.92 DHT22 and Basic API

v0.91 Password Protection

v0.9 Final Battery Setup

v0.8 OTA Updates

v0.7 Screen Enclosure

v0.6 Added Wifi Test Info

v0.5 Initial Post

Filed Under: Analytics, API, Arduino, Cloud, Code, GUI, IoT, Linux, MySQL, NGINX, NodeJS, OS Tagged With: api, ESP8266, MySQL, nginx, raspberry pi, WeMos

Connecting to a server via SSH with Putty

April 7, 2019 by Simon

This post aims to show how you can connect to a remote VM server using SSH (Secure Shell) with a free program called Putty on Windows. This is not an advanced guide; I hope you find it useful.

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.

You will learn how to connect (from Windows) to a remote computer (Linux) using SSH (Secure Shell). Once you log in you can remotely edit web pages, learn to code, install programs or do just about anything.

Common Terms (Glossary)

  • Putty: Putty is a free program that allows you to connect to a server via SSH (or Telnet). Putty can be downloaded from here.
  • Port: A port is a number given to a virtual lane on the internet (a port is similar to a frequency in radio waves, but all ports share the same transport layer on the internet). Older unencrypted web pages work on port 80, older mail worked on port 25, and encrypted web pages work on port 443. SSH uses port 22 (classic Telnet uses port 23). Read about port numbers here.
  • SSH: SSH (Secure Shell) is a standard that allows you to securely connect to a server over an encrypted channel. Read more here.
  • Shell: Shell or Unix Shell is the name given to the interactive command line interface to Linux. Read more about the shell here.
  • Telnet: Telnet is a standard on the TCP/IP protocol that allows two-way communication between computers (all communication is sent as characters and not graphics). Read more on telnet here and read about the TCP protocols here and here.
  • VM: VM stands for Virtual Machine and is the name given to a server you can rent (but it is owned by someone else). Read more here.

Read about other common glossary terms used on the Internet here:
https://en.wikipedia.org/wiki/Glossary_of_Internet-related_terms

Background

If you want a webpage on the internet (or just a server to learn how to program) it’s easier to rent a VM for a few dollars a month and manage it yourself (via SSH) than it is to buy a $5,000 server, place it in a data centre, pay for electricity and drive in every few days to update it. Remote management of VM servers via SSH is the way to go for small to medium solutions.

  • A simple web hosting site may cost < $5 a month but is very limited.
  • A self-managed VM costs about $5 a month
  • A website service like Wix, Squarespace, Shopify or WordPress will cost about $30~99 a month.
  • A self-owned server will cost hundreds to thousands upfront.

There are pros and cons to all the solutions above (e.g. cost, security, scalability, performance, risk) but these are outside this post’s topic. I have deployed VMs on providers like AWS, Digital Ocean, Vultr and UpCloud for years. If you need to buy a VM you can use this link and get $25 free credit.

I used to use the OSX operating system on Apple computers. I was used to using the VSSH program to connect to servers deployed on UpCloud (using this method). With the demise of my old Apple MacBook (due to heat) I have moved back to using Windows (I am never using Apple hardware again until they solve the heat issues).

Also, I prefer to use Linux servers in the cloud (over say Windows) because I believe they are cheaper, faster and more secure.

Enough talking, let's configure a connection.

Public and Private Keys?

Whenever you want to connect to a remote server via SSH (Secure Shell) you will need a public and private key to encrypt communications between you and the remote server.

The public key is configured on your server (on Linux you add the public key to this file ~/.ssh/authorized_keys).

The private key is used by programs (usually on your local computer) to connect to the remote server.


How to create a Public and Private Key on Linux

I usually run this command on Ubuntu or Debian Linux to generate a public and private SSH key.

sudo ssh-keygen -t rsa -b 4096

The key below was generated for this post and is not used online. Keys are like physical keys, people who have them and know where to use them can use them.

Output:

Generating public/private rsa key pair.
Enter file in which to save the key (/username/.ssh/id_rsa): ./server
Enter passphrase (empty for no passphrase): ********
Enter same passphrase again: ********
Your identification has been saved in ./server.
Your public key has been saved in ./server.pub.
The key fingerprint is:
SHA256:sxfcyn4oHQ1ugAdIEGwetd5YhxB8wsVFxANRaBUpJF4 [email protected]
The key's randomart image is:
+---[RSA 4096]----+
| .oB**[email protected]       |
|  +.==B.+        |
| o .o+o+..       |
|  .. +..o...     |
|    o ..Sooo.    |
|         ++o.    |
|        .o+o     |
|        .oo .    |
|         ...     |
+----[SHA256]-----+

The two files were created

server
server.pub
  • “server” is the private key
  • “server.pub” is the public key

Public/Private Key Contents

Public Key Contents (“server.pub”)

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC7Xo9bOCXJ7gVjP8tKOxHVId3KTo5I0VRU/kSRK3+mGd5VbDbQABo3tdWzYhzkjODzRS9TeL2dcLAQNNQKshi9IW5IGDS1NocSCLFQId5BFr9s3E79fkWqcZkKmwocepXOOZ91EDKgIFxviOzZKe99sdxxMoZzi1nxgVyXl4TnaelyiQxeKYniVs1iqDfYWQCxkKsmYit8TvGtOwrhLvKNh9362/y5ebpXVdFlRuB83eF7k8RHNYCQyOJJVx4cw3TIsAN0GMOwjuaOZbp7rR1d6k7RZmaApNRTbaWOXy32UiBST5TV/jXF2UL/4IBnn+yvCrM0v79e/3omgjlVVKfWByFzMv/YlBKCAX3xxtJQ9RkzTqseKupXmmJU0rik6Xuz31N2oyw4M7yJofSUGVCN0pnpKEvnKxqolfD9egdQy2XDaNioY7cvOO1qRegCKE0sDh1m5MzJWMhbDs7macSMyd6+0O5qWc/ByHy0G/mVbd8kO4jIuEzEs4IFkPCToEZp7KfkY7KRkOhccLbQ4ApCesUfBtGGAN1f33NnXCHae3Cx46nSd23fvgDZUVnjI47tNJH5Z8FNVlW/fp5Rgeu/aPUephnDX2IBxwIKQOmSTDY+nxU4V+c93H1gSOJfvqYbVKIAXKyN9Yh6LC44ZvLrL4q0TC0QlH2+kxMLnuj7zw== [email protected]

Private Key Contents (“server”), always keep the private key safe and never publish it.

-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,D34670C40CE3778974BEF97094010597

b4oecyqLsWt9n+G12ldVNlaQxSKF1wSrlBPg6FGiHRauTCyreUwoI2dMOAkwnGmN
8fcy51fH7D3Kg0G9fWWNPd+oUDwZmrpB8Mv6Ndk4bLYZEbkNOFgvPwNre7edTBOD
JGZRdWqb+yrywgvz3iTXPNjNK5REU3u3JmD69jInFNo92j765QQKA4sFgEyD/8g+
zg8yefIQAhEsVELC5LXPPyuTfA+x0Q+040PqCJ+FCISJI1CeZjLwk7Fbe453Vj81
zaDsurl5X5gaRUlVjB2asr6etWdMLWcalX4Nbyj2A10L3J4ONjKq3Wc2muJ0Q6ES
oNqBaU2iHPlK8yK0TGj/ERfjaG1qdlhBcow0pSapRqGopXBuVBLVuyc2NHe5CCTk
Ezq+LZGsVYmiOIIY4QRJdEN/DVLFHRGK/xA9A7unm484zXIEO6wznE0DuCTtyZs0
luJ3bKLRcack3K1Dphq0LjSG4YxQlkHewa9k9AKpDPTqeeKKckySakiDCGPT6htk
VqaCKrApAt6GQ2hLVXZ0BFVN5A3WUJ5s+HpFvTUzHTNZcdsVS4PgxhuCtnSO/BdS
/G+ODc4aZJNYQD9QQfWUnxkgnQJCWJ+aBZtKF7eDPRYY7qD9jWxubDzrFplBkmAi
O+aX5N8dpU3lEty4INjyh5LpgZW3swjUhEKWi/c1k+Qd1gCWzYzwAq2BfpWcF8Z+
c+y9lQUKbq2yDlxReCIsfb/hda5k1HjgaUlhKbjWIITSlGqf/NE9i+vj0rQEMQXQ
mxBoilfLUPd5A1ttG5XvqC2ex5HBmjzCazZ13Z/2c/PkwicHBmrf5bKYHZp49niV
44n8tZRamCUv6HaJUaKR22MigOG/qGppGPodGeLNj1DFLYAEQ78SYcVhEqIICBo1
t1yaIemUq8MWXSZz1K3cP4FEXQcEziQxFLU/0DCE0P0mIU3MExUmjB/nVE8vxb5l
p3ej3yrRGe+P2neco2gttgaTEi6l/S+0TIiZNstnVPG48BPW71mwVg9XR1d+avO7
OpXt0UgocX0xp7zBgK2up8Ai6v66WwjoNgyvFe02aK4/+fSC+aJ5D6N7JVNxd/bn
Py4W8oLKnrE1PKtIfBw/aE+rgudaMIyuxCaLllRKyDxVPPiJFp2iFcH/Y+k+0vDa
xE9Jpdd0zOWkZyebAxrS8zAUUNNaTQ+rWkj/zORjE4ptHpdwdazzHoQwIs+1kjsv
e/+JEmoskH7XozLnxClVhhWMXWfgQsPWBqPnGzieW0tv9SeIAU/BLJCHJRhBMAT1
ugBtcda1VMlAPVroYtVyUdCxkYZqGfIDbKqtOvvuBgUIUe/HnC3ExQQycC9F05BH
RJibaM/11MLTcZSO7KOK65Dg2v3VBhe6rfDl4tTR0yOySPXCacb9aMt2pMPTEe0/
wU49wCefchfD2bsR3kXPpUqm+HbkHORpIwsMZfQO/8dooXYdiYUdzV9roXG6OGVQ
SsV/xR2lE3XrR71TBegfRnQirI8tj4psSor+yCj3qV936Oh31D96Z6P4glshibsG
ffWAO/TSdu5ZV+UVahh6bTozs+g+odUu/S48TeI1fk7lPlqwZdjoSHXUI2v1FAQ2
jSSywuZQxHlGhg6OeI052cxx3zcVyVVLFHhIrfvufNc3c3+KYhtyiSzBNYN1BrJi
xNXwlDS1jYWgRHkf9zbNBU0MLTYHjZZvO9Jpl/UhKKBdIvJFwmGmXS2lgU6slunJ
Ojp4tY1tbI520KOskV/OoqEfmhXh5fTlI3onzoK1aLqxk1d0d65ONcxqVbAG79RN
b0Q5PgewSOgFlcZ7tEIZKAWsWVhjlFTSGRujdZVM1vZB9fCJesemai7HU0e4J+Do
tqvss8I2n6TPxlTYFzQ4w12pIiOzx/8cFLX78NLN8wQFElhhczeuW5HDAnmPxYhQ
eLY0HgDCFSvVAvGXo0j1gcBUcOr/LzZSsJhxsB7FKyrUjlmD/7Y45WoKJj41bKL+
y4+iDhXyLBiqVClRijsguwiCkmPFiR7Bng2pglS0oIWPWu1UbTJWVJPfuUTOBC+M
4/2fBtgFjUz8iUISs9ncEKkERlxodBIu+ekgLJZAigSMvUKfGE1YB1AA9x96VLjd
VJSjjWvnhMEoSwNzlNQ9+dhoD5Cg9zicgIIKnHnovYGOu8g9ZWfvhJFrKZgkfLRv
r2KgkWiHWpf0swiyGUOlGJDe39nMMkoxib7XE/J3VI3na1ZUOIf8kl9kdHXJ0R3C
2IjdbfiFHEDOrakp5oeVf8BbLK7RB8OlxgJAS47Byh8j97U7f13A5ZYlK3bkZ7E4
h7mCJQozgWP81ut0d9WUlcKp5M8yg2ctZ7h4oeG4Js4ceHqd19Z4P+1xWKwXcdmV
+uhiTftevTu3/UhYQVV4ck98C9pursJJYL5hTnIIpTSWIR+jSahhtzUy/upjugPp
cKi6eGlOkcHdKNRtiu7/IZqni85fC8PAwPZ93SICdiq6BpGaGWFh046weIJuflSK
Pd76+M70YRd+pkaRjJyFJ3hLyg7W5mlOb1+yBIlXKzpbch9B5E4dRHCcOsg4+v/9
exRgAnvUIhR/GpSySDDwgKHg8rAyjjoGeZFH3TJIemAAimyaR608a9tCn7SxVobs
UQlZ9WwC0dQIEv7mSvSige3imbybPtCoBHJAqsJqKCFJEDWbIF5l2VYZcfJUYaEI
oZAJHYGnZm33yQ6eSOusXJ2SnnGZ+ZsGO4bDVSwN20FkSt11gN8Wjrki9CxeVQp7
dWbKX1r/lZw74yUB4cYN23hgLJsdqvM7THzwlBkVtgV74RGY0qv59ecBUSQedlSK
dkOnkmoCiGRSNyf+ebijQaygnfK0ArG5wiRF/RQWiPFj7S6DHRxIOrXqcmvhJ7Ly
NApn9pPYyoZEAbk82MAXkapZ5+YLIKLjdNsYuKq5xVty+mc+FfxLWmZGX+QQinra
Z9DfY9KQw4rxJ/ju4ILnDrygm/QBsNFXBojOuzOIULt7c26s3d/47T+IXA4SIX4v
cPqYa6S3PU/Yoe5/Ya3tFxXmBXgEgVLZuujMs7dyCOAqLEyBEHYqIclp+TElWQLR
V660fczVXeedfd2tNBy1IBj1vhGa9j5mZLbFwTczykwCFfihLIrxSEc1MQA4CaSX
-----END RSA PRIVATE KEY-----

The public and private keys are used to encrypt all SSH connections and traffic to your server. Keep these keys private.

fyi: Putty can create SSH Keys too

If you do not have a Linux computer or Linux server to generate keys the Putty generator can create keys too.

Puttygen generating a key based on the randomness of mouse movements.

I did not know Putty can create keys.

Do save the public and private key(s) that were generated in Puttygen (tip: PPK files are what we are after along with the public key later in this post).

Public keys are added to your server when you deploy it. On Linux, you can add new public keys after deployment by adding them to the file “~/.ssh/authorized_keys” to allow people to log in.
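
For example, a key can be appended like this (run on the server; ssh-copy-id from your local machine achieves the same thing):

cat server.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys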

Puttygen formats the keys differently than Ubuntu generates them. Read more here. I’ll keep generating keys in Linux rather than Puttygen.

Output of the public and PPK files from Puttygen

Putty SSH Client on Windows

Putty is a free windows program that you can use to connect to serves via SSH. Download and install the Putty program.

Open Putty

Putty Icon

Default Putty User Interface.

Screenshot of the Putty Program

To create a connection, add an existing IP address (or server name) and SSH port (22) to Putty.

Screenshot of an IP and port entered into putty

In Putty (note the tree view to the left of the image), you can set the auto-login name used to log into the remote server under Connection, then Data, in the tree view.

Screenshot showing the SSH username being added to Putty under the Connection then Data menu.

You can also set the username under the Connection then Rlogin section of Putty.

Set the username under the Rlogin area of Putty

OK, let's add the private SSH key to Putty.

Putty screenshot showing no support for standard SSH keys (only PPK files)

It looks like Putty only supports PPK private key files, not ones generated by Linux. I used to be able to take the private key and add it to the VSSH program on OSX to connect to the server over SSH. Putty does not allow you to use Linux-generated private keys directly.

Convert your (Linux generated) private key to (Putty) PPK format with Puttygen

Putty comes with a Key Generator/Converter, you can open your existing RSA private key and convert it (or generate a new one).

TIP: If you generate a key in Puttygen, don't forget to add it to the authorized_keys file on your remote server.

Open Puttygen

Puttygen icon

Click Conversions then Import Key and choose the private key you generated in Linux

Screenshot showing import RSA key to convert

The private key will be opened

Screenshot of imported RSA key

You can then save the private key as a PPK file.

Save the private key as a PPK file
“server.ppk” Key contents (sample key)
PuTTY-User-Key-File-2: ssh-rsa
Encryption: aes256-cbc
Comment: imported-openssh-key
Public-Lines: 12
AAAAB3NzaC1yc2EAAAADAQABAAACAQC7Xo9bOCXJ7gVjP8tKOxHVId3KTo5I0VRU
/kSRK3+mGd5VbDbQABo3tdWzYhzkjODzRS9TeL2dcLAQNNQKshi9IW5IGDS1NocS
CLFQId5BFr9s3E79fkWqcZkKmwocepXOOZ91EDKgIFxviOzZKe99sdxxMoZzi1nx
gVyXl4TnaelyiQxeKYniVs1iqDfYWQCxkKsmYit8TvGtOwrhLvKNh9362/y5ebpX
VdFlRuB83eF7k8RHNYCQyOJJVx4cwnTIsAN0GMOwjuaOZbp7rR1d6k7RZmaApNRT
baWOXy32UiBST5TV/jXF2UL/4IBnn+yvCrM0v79e/3omgjlVVKfWByFzMv/YlBKC
AX3xxtJQ9RkzTqseKupXmmJU0rik6Xuz31N2oyw4M7yJofSUGVCN0pnpKEvnKxqo
lfD9egdQy2XDaNioY7cvOO1qRegCKE0sDh1m5MzJWMhbDs7macSMyd6+0O5qWc/B
yHy0G/mVbd8kO4jIuEzEs4IFkPCToEZp7KfkY7KRkOhccLbQ4ApCesUfBtGGAN1f
33NnXCHae3Cx46nSd23fvgDZUVnjI47tNJH5Z8FNVlW/fp5Rgeu/aPUephnDX2IB
xwIKQOmSTDY+nxU4V+c93H1gSOJfvqYbVKIAXKyN9Yh6LC44ZvLrL4q0TC0QlH2+
kxMLnuj7zw==
Private-Lines: 28
DkpbM78GgGBSgfs9MsmZwDJj6HFXdoe+fCP1rLnwbE99mvU6Fbs23hXd+FsVdQbb
VR5tKTocV7tEwGjtLCHSTSF6gap0l4ww0Ecuvr/Dra2CJ2BsntyssBrWnlUT7OlA
M9zKQAzywAy4AHkph0YvH4l7BcJ5V1pUltm2JDTU6+iFqXDsstUUEDcQ4u0EalWU
EEsW+quNSwO0HBHvWY6N7tbiuEN9L+cFYIdsJEDfqM4hNi+7Ym+SQq5FOPyA6gXa
vhujsjPQAWI3TFxh7EIvsPDMCXxWHL6qaDvOMmPPTZDbEvm4nQ5Kax9jWacPILn7
ezc7ZAiZdDiFbkF3TLyuHx71mjChZgLoZLWYfXR3MBEEYnkNO/7oSMRUwDzEyWKW
ZgqdUtGg0cR+qWvaxQTDQsN/DjB7jGgnlreF92S8xSsbk5GgpZnTQ1V0cm3oecB+
+JP90K4Fi979gPWnwTfg6ZvmLUiVz3uBbvegkT9CVZhhZXSKq53H+SjTZfKBPrM9
NHGLkYr1WjToGR39LMrh4X3KChGewMFyuxtpkEQV60eCnHZBHgTco2A0yriRprOP
Ks4qJXOtZnsMYMesUDX9W5wLc4HcRvRh2UBPw/8bPz6mNrBk8j5SPIwBrPMIBejd
4IPoYezaEFKPg2bP7dn+Nftz5CGagcV2g+zhE615dsWzX1P7yu/1dTmz9LXaMmN6
d+zJE8TtjeaoW5NE1H3Flj9rknzJW7xQfokhS5hMkOg6J0AA6Pk13tupu8WHMkVB
x7nVu876f8tIbT8GzXCGgSl+zS7IJO3pt9T9QHIYa+T3oTIUqfBfK1WffUZwHMRn
Xn/VKUtIIIPiVfCtQuxSrTiJzQcoJ/yvfv62YAGv2LsDlBoHfXRdf6h3TCCCOVxT
WE45sbj3gJ1Cgjt1SEd/8A3hkstn2U2NKBI9gkB9H5BbDJoAXq6/4CkwaQvSEzs7
LK5btRlWop+7gqkyMPpgxv9li9IEDJ999ufMqxkFgOBkmkR5Si71elXRnwiKrjfU
Ce14iy7Dd7lb7IU9OEBjWFZlSigVEnc8klhGHDuxnojiW1ld7pUDIkAAbdTMOFON
abcpfNwcg5Y3l+1KwIQHuewAUuA9472jV4V9EAn7pJ7wgmYHbzMzg9Z9dM8h/3UI
axBzAW+cJM80gN+nZMbmDC9FkXV16GSuqC2iQUVGb2TIheAS7oCR+JFZFQNv0ytF
rGQ9K1wIGbMI4oDPcAid7DzrEXVl3d2x8MtwF/WzfHehVJD1h1uNwezLf1gBKyas
9GBfDOYwd8zgaL2H99GYD1Ba7TePJY81mx7m10eYdwDj1vCpboKE3cE6AyL7ki+4
Ix7GSzQs9NBckF9+8eVXe2T4Cc450hIoN0BWcxVUUdGCA1skZ1PczPs1z/ae4lxd
l5WmPy8Gyh7cnZpyqzvwAPSFDadkNP60eekfkRHyo4QyLhj7QZtO0kOgWhT3CHma
FjZ5jJu59U/4gc0TpQ8ra3vgKQKudloExsg027+34nR98dN+zzUj4S2C/J34W98C
DEEu/SO7nfW/a2UARXBKWCbS+3j24zHc9dbgX2tZoAoInUvRGiSOsLVsMhDiBoyb
wWoNxrKPR3Fi5zZ+GfDUgUGpZoW/b54KnFouIHBYbI41Gkh4vj6lxOGh/sb3SPHd
Wg6EN/0z/mer3bG0a2/ZHKYA5KGWRXWYvYLz4Je8fb/egBrSU6BztwSNeilzA9lI
J4BO7pzXECnWYutB14UxHw==
Private-MAC: 12298fa865ac574da81898252e83b812200cba59
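
If you prefer the command line, the putty-tools package on Ubuntu also provides a puttygen binary that can do the same conversion; a rough sketch (assuming your OpenSSH private key is at ~/.ssh/id_rsa):

sudo apt-get install putty-tools
puttygen ~/.ssh/id_rsa -o server.ppk -O private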

Now the PPK key can be added to Putty for any server connection that uses the public key. Use the right key for the right server though.

Add the private key to a Putty server connection by clicking the Connection, SSH, Auth section and browsing to the PPK file.

Screenshot showing the PPK key file added to Putty

Now we need to save the connection: click back on the Session node at the top of the tree view, type a server name and click Save

Save Putty connection.

Connecting to your server via Telnet/SSH with Putty.

Once you have added a server name, port, username and private key to Putty you can double-click the server list item to connect to your server.

You will see a message asking whether to accept and cache the server's host key. Click Yes. Check that the fingerprint shown matches the one your server presents (if it does not, someone may be performing a man-in-the-middle attack between your local computer and the server).

Putty message box asking to remember the server's public key
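
If you want to verify that fingerprint, a hedged way is to print the host key fingerprints on the server itself over your existing console session (the exact fingerprint format shown may differ between Putty and OpenSSH versions):

for f in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$f"; done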

Hopefully, you will now have full access to your server with the account you logged in with.

Screenshot of an Ubuntu screen after login

Happy Coding.

Alternatives to self-managed VMs

I will always run a self-managed server (and configure it myself) as it's the most economical way to build a fast and secure server, in my humble opinion.

I have blogged about alternatives, but those solutions always sacrifice something: costs are usually higher and performance can be slower.

I am also lucky enough to be able to do this as a hobby; it's not my day job. When you self-manage a VM you will have endless tasks securing and tweaking your server, but it's fun.

More Reading

Read some useful Linux commands here and read my past guides here. If you want to buy a domain name click here.

If you are bored and want to learn more about SSH Secure shell read this.

Related Blog Posts

  • Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc
  • Useful Linux Terminal Commands
  • Setup two factor authenticator protection at login (SSH) on Ubuntu or Debian
  • etc

Version: 1.1 Added MobaXterm link

Filed Under: 2FA, Authorization, AWS, Cloud, Digital Ocean, Linux, Putty, Secure Shell, Security, Server, SSH, Ubuntu, UpCloud, VM, Vultr Tagged With: Connecting, Putty, secure, server, Shell, ssh

How to code PHP on your localhost and deploy to the cloud via SFTP with PHPStorm by Jet Brains

March 31, 2019 by Simon

This is a quick guide that will show you how to connect to a cloud server via SFTP with the PHPStorm IDE from Jet Brains and deploy files from your localhost to the cloud. These are my own opinions; I am not paid to promote PHPStorm or UpCloud.

Pre-Requisites/Assumptions

This guide will assume you already have (or know how to)..

  • Buy a domain name and point its DNS to a server (I use Namecheap.com for buying domains)
  • Buy and deploy a server in the cloud. I have used AWS, Digital Ocean, Vultr but now use UpCloud for deploying fast self-managed servers. Read this guide here to see how I create a server from scratch on Up Cloud.
  • Setup SSH access to your server and configure a firewall.
  • You have or know how to set up PHP and Web Servers and configure them on your localhost and remote server (guides here, here, here, here, here and here).
  • etc ( check out all my guides here https://fearby.com/all )

I am using Windows 10 Home (with the IIS web server, document root redirected to S:\Code\) and a pre-built Ubuntu cloud server.

IIS pointing to S:\Code

Why no FTP? To increase security I do not create FTP servers on my servers; I only access servers via SSH from white-listed IPs and then authenticate with hardware 2FA keys from YubiCo (read my 2FA guide here and also how to secure *nix servers and WordPress with 2FA).

Background

2 years ago I used the Cloud 9 IDE to connect to, and code files on, cloud servers and life was good. I could configure and connect to servers, drag and drop files, run bash scripts from a web page, close the Cloud 9 browser tabs, travel hundreds of kilometres, log back into C9 and all code and bash scripts would reappear.

Here is the Cloud 9 IDE showing code on the left and a Browser on the right.

C9.IDE showing code on the left and web page on the right

With Cloud 9 code could be easily accessed, edited and run.

See screenshot of code running from a Cloud 9 hosted server with properties windows on the right.

C9 IDE showing a bash terminal window and code

Regrettably, I cancelled my $9/m subscription to Cloud 9 after a minor stroke, and in the last 2 years I have gone back to a terminal screen to code and transfer files.

Screenshot of the putty program connected to a Ubuntu box editing a file with nano

Uploading and downloading files en masse is painful via pure SSH.

AWS has since purchased Cloud 9 and I am not sure if it will ditch support for non-AWS servers in the future. AWS is good but servers are very expensive for what you get IMHO. I have found Disk IO on UpCloud is awesome (and UpCloud support is great; I am not paid to say that).

PHPStorm IDE?

A quick Google for IDEs like Cloud 9 mentioned PHPStorm from Jet Brains. I have used IntelliJ IDEA from Jet Brains before and PHPStorm seems to be very popular.

Go to
https://www.jetbrains.com/phpstorm/ and see the features of PHPStorm

Watch PHPStorm in action

What's new in PHPStorm 2019.1

Install PHPStorm

Visit https://www.jetbrains.com/phpstorm/download/ and download and install PHPStorm (free trial 30 days)

System requirements

  • Microsoft Windows 10/8/7/Vista/2003/XP (incl. 64-bit)
  • 2 GB RAM minimum
  • 4 GB RAM recommended
  • 1024×768 minimum screen resolution

Pricing

PHPStorm is $8.90/m for individual use (or $19.90/m for commercial use). The first 12 months of uninterrupted subscription payments qualify you for a perpetual fallback license (with a 20% discount for an uninterrupted subscription in the 2nd year and a 40% discount from the 3rd year onwards).

PHPStorm works on Windows, OSX or Linux. This is great as I use Windows locally and Linux remotely, but I'm keen to use Linux locally to match my local and remote dev environments.

Official PHPStorm Pricing Page:
https://www.jetbrains.com/phpstorm/buy/#edition=commercial

fyi: Jet Brains has free licensing for individual use by students and faculty members.

Creating a Project

Open PHPStorm and select “Create New Project”.

Create New Project screen

Choose a project type on the left (e.g. “PHP Empty Project”) and choose a location to save to. I chose “S:\code\php001” on my local machine.

New Project screenshot asking for a name and save location

Choose a folder to save to.

Choose a project and location to save to locally

Click “OK” to create the project.

PHPStorm will have created a project for you. You will notice a “.idea” folder under the location you saved to, containing these files.

  • misc.xml
  • modules.xml
  • php001.xml
  • workspace.xml

Do not delete these files.

Creating your first PHP file

You can right-click on the project root and select New then PHP File

Right click on the root in the tree view then new PHP File

Or click the File then New menu and choose PHP File.

File new PHP File dialog

Name the file (e.g. index.php)

Naming a file index.php

The file has been created and it's available in my localhost web server.

Screenshot showing PHPStorm with index.php, S:\Code showing index.php and http://localhost/php001/index.php loading

Creating a Deploy Target

Now we need to specify a deploy target in PHPStorm to push file changes to the cloud. Back up your server first (yes, back up your server, just in case).

Open your PHPStorm project and click Tools, Deployment and then Configuration.

Click Tools, Deployment and then Configuration.

Click the plus icon near the top left and choose SFTP

Screenshot showing add new deployment server (SFTP)

Name the deployment target (e.g “server (project)”)

Screenshot of an input box showing a server name
  • Enter your “server name” or IP and port
  • Enter your “ssh username” (ensure the SSH user has write access to the wwwroot folder and that the web server can read the files written by this user)
  • Under password I chose “Key pair OpenSSH or Putty” (as I already had SSH details set up in Putty)
  • You can add your PPK private key from Putty (use the Puttygen program to convert SSH keys to PPK format)
  • If you have a passphrase on your SSH key add it now
  • Enter your web server's remote path (for the project)
  • Enter your web server URL
Screenshot showing a server name, port, username, password, ssh file passphrase, root path and web server url.

I SSHed to my remote server and created the destination folder. This ensures I can deploy code there (PHPStorm does not create the remote path for you).

mkdir /wwwroot/php001
chown -R www-data:www-data /wwwroot/php001/

Click Test Connection

Test Successful screenshot

Now we need to click the Mappings tab and add a mapping.

  • Local path is your local path
  • Deployment path is / (the web root path is carried forward from the previous tab)
  • Web path is the web path that is entered in the browser
Screenshot showing a manual file mapping of local and remote file locations

Click Add New Mapping. Now we are ready to deploy.

Deploying code to the cloud

I right-clicked on the root node in PHPStorm and created an index.php file.

Creating an index.php file by file new

I edited index.php on my local machine, then clicked Tools, then Deployment, and chose the “Upload to fearby.com (php001)” menu item.

Manual upload available in Tools menu then Deployment menu

The File Transfer output window showed the transfer progress.

Screenshot showing the file transfer window output saying the file uploaded.

I loaded https://fearby.com/php001/index.php in Google Chrome. It worked.

Screenshot showing https://fearby.com/php001/index.php loaded in a browser

Don't forget to turn on Automatic Upload under the Tools, Deployment menu.

Screenshot showing Automatic Upload turned on

Now when I create new files or change existing files they will auto upload.

Source Control

I will add this soon.

Shell Command

You can also open an SSH console to the server and run commands

e.g zip files

zip -r backup.zip .

I can also open a folder window in PHPStorm and show all remote files by clicking Tools, then Deployment, then Show Remote Files. Zip files can be easily downloaded and other files uploaded. Nice.

Screenshot showing remote files

Linux Client

I will review the Linux PHPStorm client soon.

Troubleshooting

Watch the Official guide on Deployment and Remote Hosts in PhpStorm – PhpStorm Video Tutorial

Good Luck. I hope this guide helps someone.

Version

1.2 Removed advertisements

1.1 Minor Updates

1.0 Initial Version

Filed Under: 2FA, Backup, Cloud, Code, Git, GUI, IDE, Linux, SSH

Updating NGINX to the development branch to get more frequent updates and features over the stable branch

November 20, 2018 by Simon

Updating NGINX to the development branch (on Ubuntu) to get more frequent updates and features over the stable branch

Aside

I have a number of guides on moving away from CPanel, Setting up VM’s on UpCloud, AWS, Vultr or Digital Ocean along with installing and managing WordPress from the command line. View all recent posts here https://fearby.com/all/

Now on with the post

Warning

Back up your Nginx configuration and server before making any changes. The Nginx development branch is quite stable but anything can happen. If your site is mission-critical then stay on the stable branch.

Nginx Branches

By default, you will most likely get the stable branch of Nginx when installing and updating Nginx. I have been running the stable version for the last few years but was made aware of a denial of service (DoS) vulnerability in Nginx.

Here is a good write-up on development merges into the stable branch.

Nginx Updates

Widely-used #Nginx server releases versions 1.15.6 and 1.14.1 to patch two HTTP/2 implementation vulnerabilities that might cause excessive memory consumption (CVE-2018-16843) & CPU usage (CVE-2018-16844), allowing a remote attacker to perform #DoS attackhttps://t.co/1Z3JoghoBr pic.twitter.com/qQ3pOFD1Lk

— The Hacker News (@TheHackersNews) November 9, 2018

I recently became aware of a DoS bug affecting Nginx and the recommendation was to update to the Nginx 1.15.6 development branch (or the 1.14.1 stable branch).

A few days ago no 1.14.1 update was available but a 1.15.6 one was. Should I switch to the development branch to get updates earlier?

Reminder to update your #nginx installations to the 1.14.1 stable or the 1.15.6 mainline versions for critical security patches released this week. #NGINXPlus customers, see instructions for updating based on the patch released 10/30 https://t.co/KitsOWIJkb

— NGINX, Inc. (@nginx) November 8, 2018

Recent Nginx Changes

Here are the recent changes to Nginx: http://nginx.org/en/CHANGES

Changes with nginx 1.15.6                                        06 Nov 2018

    *) Security: when using HTTP/2 a client might cause excessive memory
       consumption (CVE-2018-16843) and CPU usage (CVE-2018-16844).

    *) Security: processing of a specially crafted mp4 file with the
       ngx_http_mp4_module might result in worker process memory disclosure
       (CVE-2018-16845).

    *) Feature: the "proxy_socket_keepalive", "fastcgi_socket_keepalive",
       "grpc_socket_keepalive", "memcached_socket_keepalive",
       "scgi_socket_keepalive", and "uwsgi_socket_keepalive" directives.

    *) Bugfix: if nginx was built with OpenSSL 1.1.0 and used with OpenSSL
       1.1.1, the TLS 1.3 protocol was always enabled.

    *) Bugfix: working with gRPC backends might result in excessive memory
       consumption.


Changes with nginx 1.15.5                                        02 Oct 2018

    *) Bugfix: a segmentation fault might occur in a worker process when
       using OpenSSL 1.1.0h or newer; the bug had appeared in 1.15.4.

    *) Bugfix: of minor potential bugs.


Changes with nginx 1.15.4                                        25 Sep 2018

    *) Feature: now the "ssl_early_data" directive can be used with OpenSSL.

    *) Bugfix: in the ngx_http_uwsgi_module.
       Thanks to Chris Caputo.

    *) Bugfix: connections with some gRPC backends might not be cached when
       using the "keepalive" directive.

    *) Bugfix: a socket leak might occur when using the "error_page"
       directive to redirect early request processing errors, notably errors
       with code 400.

    *) Bugfix: the "return" directive did not change the response code when
       returning errors if the request was redirected by the "error_page"
       directive.

    *) Bugfix: standard error pages and responses of the
       ngx_http_autoindex_module module used the "bgcolor" attribute, and
       might be displayed incorrectly when using custom color settings in
       browsers.
       Thanks to Nova DasSarma.

    *) Change: the logging level of the "no suitable key share" and "no
       suitable signature algorithm" SSL errors has been lowered from "crit"
       to "info".


Changes with nginx 1.15.3                                        28 Aug 2018

    *) Feature: now TLSv1.3 can be used with BoringSSL.

    *) Feature: the "ssl_early_data" directive, currently available with
       BoringSSL.

    *) Feature: the "keepalive_timeout" and "keepalive_requests" directives
       in the "upstream" block.

    *) Bugfix: the ngx_http_dav_module did not truncate destination file
       when copying a file over an existing one with the COPY method.

    *) Bugfix: the ngx_http_dav_module used zero access rights on the
       destination file and did not preserve file modification time when
       moving a file between different file systems with the MOVE method.

    *) Bugfix: the ngx_http_dav_module used default access rights when
       copying a file with the COPY method.

    *) Workaround: some clients might not work when using HTTP/2; the bug
       had appeared in 1.13.5.

    *) Bugfix: nginx could not be built with LibreSSL 2.8.0.


Changes with nginx 1.15.2                                        24 Jul 2018

    *) Feature: the $ssl_preread_protocol variable in the
       ngx_stream_ssl_preread_module.

    *) Feature: now when using the "reset_timedout_connection" directive
       nginx will reset connections being closed with the 444 code.

    *) Change: a logging level of the "http request", "https proxy request",
       "unsupported protocol", and "version too low" SSL errors has been
       lowered from "crit" to "info".

    *) Bugfix: DNS requests were not resent if initial sending of a request
       failed.

    *) Bugfix: the "reuseport" parameter of the "listen" directive was
       ignored if the number of worker processes was specified after the
       "listen" directive.

    *) Bugfix: when using OpenSSL 1.1.0 or newer it was not possible to
       switch off "ssl_prefer_server_ciphers" in a virtual server if it was
       switched on in the default server.

    *) Bugfix: SSL session reuse with upstream servers did not work with the
       TLS 1.3 protocol.


Changes with nginx 1.15.1                                        03 Jul 2018

    *) Feature: the "random" directive inside the "upstream" block.

    *) Feature: improved performance when using the "hash" and "ip_hash"
       directives with the "zone" directive.

    *) Feature: the "reuseport" parameter of the "listen" directive now uses
       SO_REUSEPORT_LB on FreeBSD 12.

    *) Bugfix: HTTP/2 server push did not work if SSL was terminated by a
       proxy server in front of nginx.

    *) Bugfix: the "tcp_nopush" directive was always used on backend
       connections.

    *) Bugfix: sending a disk-buffered request body to a gRPC backend might
       fail.


Changes with nginx 1.15.0                                        05 Jun 2018

    *) Change: the "ssl" directive is deprecated; the "ssl" parameter of the
       "listen" directive should be used instead.

    *) Change: now nginx detects missing SSL certificates during
       configuration testing when using the "ssl" parameter of the "listen"
       directive.

    *) Feature: now the stream module can handle multiple incoming UDP
       datagrams from a client within a single session.

    *) Bugfix: it was possible to specify an incorrect response code in the
       "proxy_cache_valid" directive.

    *) Bugfix: nginx could not be built by gcc 8.1.

    *) Bugfix: logging to syslog stopped on local IP address changes.

    *) Bugfix: nginx could not be built by clang with CUDA SDK installed;
       the bug had appeared in 1.13.8.

    *) Bugfix: "getsockopt(TCP_FASTOPEN) ... failed" messages might appear
       in logs during binary upgrade when using unix domain listen sockets
       on FreeBSD.

    *) Bugfix: nginx could not be built on Fedora 28 Linux.

    *) Bugfix: request processing rate might exceed configured rate when
       using the "limit_req" directive.

    *) Bugfix: in handling of client addresses when using unix domain listen
       sockets to work with datagrams on Linux.

    *) Bugfix: in memory allocation error handling.

Development branch changes are made every few weeks and stable branch changes are made less often.

Updating Nginx

Normally you update Nginx by running an update and upgrade

apt-get update && apt-get upgrade

Restart Nginx for good measure

/etc/init.d/nginx restart

Checking NGINX Version

nginx -v
nginx version: nginx/1.14.1

Changing your repository to the development branch

I changed to the development branch by running

sudo add-apt-repository ppa:nginx/development

Update and upgrade Nginx

apt-get update && apt-get upgrade

Restart Nginx for good measure

/etc/init.d/nginx restart

Checking NGINX Version

nginx -v
nginx version: nginx/1.15.6

Removing the stable Nginx repository

Run this command to remove the stable branch of Nginx

sudo add-apt-repository -r ppa:nginx/stable

Check to see if the development branch is listed

grep -r --include '*.list' '^deb ' /etc/apt/sources.list* |grep nginx
/etc/apt/sources.list.d/nginx-ubuntu-development-bionic.list:deb http://ppa.launchpad.net/nginx/development/ubuntu bionic main

Good luck and I hope this guide helps someone

Ask a question or recommend an article


Revision History

v1.0 Initial post

Filed Under: Linux, Ubuntu Tagged With: and, Branch, development, features, Frequent, get, more, nginx, over, stable, the, to, to the, updates, Updating

Adding two sub domains (one pointing to a new UpCloud VM and the other pointing to an NGINX subsite) on Ubuntu 18.04

June 27, 2018 by Simon

Here is how I added two subdomains (one pointing to a new UpCloud VM and the other pointing to an NGINX subsite) on Ubuntu 18.04

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

UpCloud performance is great.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Goal(s)

Setup 2x subdomains on https://fearby.com

– Sub Domain #1: https://test.fearby.com (pointing to a dedicated UpCloud VM in Singapore for testing).

– Sub Domain #2: https://audit.fearby.com (pointing to a sub-website on the NGINX/VM that runs https://fearby.com )

Let’s set up the first Sub Domain (dedicated VM) and SSL

Backup

Do back up your server first.

VM

I created a second server ($5/month or about $0.007/hour; 1,024MB Memory, 25GB Disk, 1024GB/month Data Transfer) at UpCloud. If you don't already have an account at UpCloud use this link to signup and get $25 free credit ( https://www.upcloud.com/register/?promo=D84793 ). Read my blog post on why UpCloud is awesome and how I moved my domain to UpCloud.

Once I spun up a second server I obtained the IPv4 and IPv6 IP addresses of the new “test” VM from the UpCloud dashboard.

IPV4 IP: 94.237.65.54
IPV6 IP: 2a04:3543:1000:2310:24b7:7cff:fe92:468c

DNS

These DNS records were already in place with my DNS provider (Cloudflare).

A fearby.com 209.50.48.88
AAAA fearby.com 2605:7380:1000:1310:24b7:7cff:fe92:0d64

I added these DNS records for the subdomains.

I added new A and AAAA records for the shared NGINX subdomain (for https://audit.fearby.com); this subdomain will be a sub-website running off the same server as https://fearby.com

A audit 209.50.48.88
AAAA audit 2605:7380:1000:1310:24b7:7cff:fe92:0d64

I added another set of records for the new dedicated VM  subdomain (for https://test.fearby.com)

A test 94.237.65.54
AAAA test 2a04:3543:1000:2310:24b7:7cff:fe92:468c

I waited for DNS to replicate around the globe by watching https://www.whatsmydns.net/
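
You can also spot-check propagation from a terminal with dig against a public resolver (a quick sketch; 1.1.1.1 is just one example resolver):

dig +short A test.fearby.com @1.1.1.1
dig +short AAAA test.fearby.com @1.1.1.1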

Setup a Firewall

On the new dedicated https://test.fearby.com VM, I installed the ufw firewall.

sudo apt-get install ufw

I configured the firewall to allow only the minimum ports (whitelisting an IP for port 22 and adding the UpCloud DNS servers). I will lock this down some more later.

TIP: If your ISP does not offer a dedicated IP try a VPN. I use https://cyberghostvpn.com on OSX and Android.

Firewall rules.

sudo ufw status numbered

     To                         Action      From
     --                         ------      ----
[ 1] 22                         ALLOW IN    x.x.x.x
[ 2] 80                         ALLOW IN    Anywhere
[ 3] 443                        ALLOW IN    Anywhere
[ 4] 53                         ALLOW IN    93.237.127.9
[ 5] 53                         ALLOW IN    93.237.40.9
[ 6] 25                         DENY IN     Anywhere
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 53                         ALLOW IN    2a04:3540:53::1
[10] 53                         ALLOW IN    2a04:3544:53::1
[11] 22                         ALLOW IN    x.x.x.x.x.x.x.x.x
[12] 25 (v6)                    DENY IN     Anywhere (v6)

I enabled the firewall.

sudo ufw enable

Install NGINX (on https://test.fearby.com)

On the new dedicated https://test.fearby.com VM I…

Created a new www root

mkdir /www-root

Set permissions

sudo chown -R www-data:www-data /www-root

Installed NGINX

sudo apt-get update
sudo apt-get install nginx

I created a placeholder webpage

sudo nano /www-root/index.html

Configured the root value in /etc/nginx/sites-available/default
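
That edit is just the root directive inside the default server block; a minimal sketch of the change (the full server block I ended up with is shown further below):

sudo nano /etc/nginx/sites-available/default
# inside the server { } block, point the root at the new web root:
#   root /www-root/;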

Created a symbolic link of the nginx config

sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default

Lets Encrypt SSL

I had previously set up Let's Encrypt on Ubuntu 16.04 but not 18.04. Certbot had info on setting up Let's Encrypt for 14.x, 16.x and 17.x but not 18.x.

Full credit for the SSL steps goes to @Linuxize ( tips on setting up Lets Encrypt on Ubuntu 18.04 ). Check out https://linuxize.com/

I installed Lets Encrypt certbot

sudo apt update
sudo apt install certbot

I created a new Diffie–Hellman key

mkdir -p /etc/ssl/certs/
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Map requests to http://test.fearby.com/.well-known/acme-challenge to /var/lib/letsencrypt/.well-known ( Read the linuxize post for detailed steps ).

mkdir -p /var/lib/letsencrypt/.well-known
chgrp www-data /var/lib/letsencrypt
chmod g+s /var/lib/letsencrypt

Create a /etc/nginx/snippets/letsencrypt.conf on http://test.fearby.com and enforce the redirect.

location ^~ /.well-known/acme-challenge/ {
  allow all;
  root /var/lib/letsencrypt/;
  default_type "text/plain";
  try_files $uri =404;
}

Create a /etc/nginx/snippets/ssl.conf file on http://test.fearby.com

ssl_dhparam /etc/ssl/certs/dhparam.pem;

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 30s;

add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;

Let’s get a certificate

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d test.fearby.com

Certificates have been created 🙂

ls -al /etc/letsencrypt/live/test.fearby.com/
total 12
drwxr-xr-x 2 user user 4096 Jun 26 11:30 .
drwx------ 3 user user 4096 Jun 26 11:30 ..
-rw-r--r-- 1 user user  543 Jun 26 11:30 README
lrwxrwxrwx 1 user user   39 Jun 26 11:30 cert.pem -> ../../archive/test.fearby.com/cert1.pem
lrwxrwxrwx 1 user user   40 Jun 26 11:30 chain.pem -> ../../archive/test.fearby.com/chain1.pem
lrwxrwxrwx 1 user user   44 Jun 26 11:30 fullchain.pem -> ../../archive/test.fearby.com/fullchain1.pem
lrwxrwxrwx 1 user user   42 Jun 26 11:30 privkey.pem -> ../../archive/test.fearby.com/privkey1.pem

Now let's edit “/etc/nginx/sites-available/default” on the https://test.fearby.com VM and add the cert paths.

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        ssl_certificate /etc/letsencrypt/live/test.fearby.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/test.fearby.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/test.fearby.com/chain.pem;

        include snippets/ssl.conf;

        #ssl_stapling on; # Requires nginx >= 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        root /www-root/;

        include snippets/letsencrypt.conf;

        index index.html;

        server_name test.fearby.com;

        location / {
                try_files $uri $uri/ =404;
        }
}

Reload NGINX

sudo systemctl reload nginx

or

sudo nginx -t
sudo nginx -s reload
sudo systemctl reload nginx

Now let's set up the second subdomain (a subsite off https://fearby.com) and SSL

VM

I already have NGINX running on https://fearby.com, so I will set up the second site on that existing VM.

DNS

We have already set up a DNS record for https://audit.fearby.com (above)

Firewall

Already configured at https://fearby.com

SSL

Because I had an existing Comodo certificate on https://fearby.com, I am going to repeat the steps above to generate a new certificate but save the NGINX config to /etc/nginx/sites-available/audit.fearby.com (linking this file into sites-enabled below activates the second site)

TIP: Follow the Linuxize guide here (for creating the ssl.conf, letsencrypt.conf etc config files). Do a backup and restore if need be.

I created a new Diffie–Hellman key

mkdir -p /etc/ssl/certs/
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Let’s get a certificate

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d audit.fearby.com

Configure NGINX

Map requests to http://audit.fearby.com/.well-known/acme-challenge to /var/lib/letsencrypt/.well-known ( Read the linuxize post for detailed steps ).

mkdir -p /var/lib/letsencrypt/.well-known
chgrp www-data /var/lib/letsencrypt
chmod g+s /var/lib/letsencrypt

I created a new NGINX site ( /etc/nginx/sites-available/audit.fearby.com )

#proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;#
server {
        root /www-audit-root;

        # Listen Ports
        listen 80;
        listen [::]:80;
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name audit.fearby.com;

        include snippets/letsencrypt.conf;

        location / {
                try_files $uri $uri/ =404;
        }

        ssl_certificate /etc/letsencrypt/live/audit.fearby.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/audit.fearby.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/audit.fearby.com/chain.pem;

        ssl_dhparam /etc/ssl/certs/auditdhparam.pem;

        ssl_session_timeout 1d;
        #ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA38$

        ssl_prefer_server_ciphers on;

        ssl_stapling on;
        ssl_stapling_verify on;

        #resolver 8.8.8.8 8.8.4.4 valid=300s;
        #resolver_timeout 30s;

        add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;

        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }
}

I created a symbolic link of the config file

sudo ln -s /etc/nginx/sites-available/audit.fearby.com /etc/nginx/sites-enabled/audit.fearby.com

Reload NGINX

sudo systemctl reload nginx

or

sudo nginx -t
sudo nginx -s reload
sudo systemctl reload nginx

How to test the certificate renewal

sudo certbot renew --dry-run

Automate the renewal in crontab (every 12 hours)

I set this crontab entry up on https://fearby.com and https://test.fearby.com

crontab -e
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "systemctl reload nginx"

Conclusion

Yes, I have 2 subdomains (1x dedicated VM and the other a sub-website off an existing server) with SSL certificates.

Ping Results

ping -c 4 fearby.com
PING fearby.com (209.50.48.88): 56 data bytes
64 bytes from 209.50.48.88: icmp_seq=0 ttl=44 time=220.000 ms
64 bytes from 209.50.48.88: icmp_seq=1 ttl=44 time=290.602 ms
64 bytes from 209.50.48.88: icmp_seq=2 ttl=44 time=311.938 ms
64 bytes from 209.50.48.88: icmp_seq=3 ttl=44 time=330.841 ms

--- fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 220.000/288.345/330.841/41.948 ms

ping -c 4 test.fearby.com
PING test.fearby.com (94.237.65.54): 56 data bytes
64 bytes from 94.237.65.54: icmp_seq=0 ttl=44 time=333.590 ms
64 bytes from 94.237.65.54: icmp_seq=1 ttl=44 time=252.433 ms
64 bytes from 94.237.65.54: icmp_seq=2 ttl=44 time=271.153 ms
64 bytes from 94.237.65.54: icmp_seq=3 ttl=44 time=292.685 ms

--- test.fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 252.433/287.465/333.590/30.200 ms

ping -c 4 audit.fearby.com
PING audit.fearby.com (209.50.48.88): 56 data bytes
64 bytes from 209.50.48.88: icmp_seq=0 ttl=44 time=281.662 ms
64 bytes from 209.50.48.88: icmp_seq=1 ttl=44 time=307.676 ms
64 bytes from 209.50.48.88: icmp_seq=2 ttl=44 time=227.985 ms
64 bytes from 209.50.48.88: icmp_seq=3 ttl=44 time=215.566 ms

--- audit.fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 215.566/258.222/307.676/37.845 ms

Webpage Results

Screenshot showing the main site and 2 subdomains in a web browser

Troubleshooting

If you are having trouble generating the initial certificate, check that you have not blocked port 80 and that you don't have “Strict-Transport-Security” headers enabled.
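
A quick way to confirm port 80 is still allowed (assuming you are using the ufw firewall as set up earlier in this post):

sudo ufw status | grep 80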

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d g
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for yoursubdomain.domain.com
Using the webroot path /var/lib/letsencrypt for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. yoursubdomain.domain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficzLlmg_w6Tc: q%!(EXTRA string=<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>)

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: yoursubdomain.domain.com
   Type:   unauthorized
   Detail: Invalid response from
   http://yoursubdomain.domain.com/.well-known/acme-challenge/_QA3jblEydx5mE8I8OdRsd2EdHIj4R-przLlmg_w6Tc:
   q%!(EXTRA string=<html>
   <head><title>404 Not Found</title></head>
   <body bgcolor="white">
   <center><h1>404 Not Found</h1></center>
   <hr><center>)

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A record(s) for that domain
   contain(s) the right IP address.

I re-ran the certbot command but pointed it to the real /www-root (not /var/lib/letsencrypt/)

Create the .well-known challenge directories under /www-root and re-run certbot

mkdir /www-root/.well-known/
mkdir /www-root/.well-known/acme-challenge/
sudo certbot certonly --agree-tos --email [email protected] --webroot -w /www-root -d yoursubdomain.domain.com

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article


Revision History

v1.1 Troubleshooting

v1.0 Initial Post

Filed Under: Linux, NGINX, ssl, Subdomain, Ubuntu, UpCloud, VM, Website Tagged With: a, Adding, an, and, domains, new, nginx, on, one, other, pointing, sub, subsite, the, to, two, Ubuntu 18.04, UpCloud, vm

How to use the UpCloud API to manage your UpCloud servers

June 17, 2018 by Simon

How to use the UpCloud API to manage your UpCloud servers.

If you have not read my previous posts I have now moved my blog etc to the awesome UpCloud host. Sign up using this link to get $25 free credit.

I recently compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here).

Here is my blog post on moving from Vultr to UpCloud.

Spoiler: UpCloud performance is great.

Upcloud Site Speed in GTMetrix

I have never had an UpCloud page load take longer than 2 seconds since moving.

UpCloud API

UpCloud has an API that we can opt in to use to manage our servers. Read the official UpCloud API documentation here.

The API allows you to control:

  • Accounts
  • Pricing
  • Zones
  • Timezones
  • Plans
  • Servers
  • Storages
  • IP-Addresses
  • Firewall
  • Tags
  • etc

Create a sub-account to query the API

You should create a new user account (in the UpCloud dashboard) just for API access. I created two accounts, one for use on my server and one for my home laptop (and set the limiting IP(s) that can access each).

Create a Sub Account for API Access

Login to your UpCloud account (create an account here and get $25 free credit),

  1. Click My Accounts,
  2. Click User Accounts,
  3. Click Change on your user and enable API connections.
  4. TIP: Set up an IP rule to limit access to your API for security (I set up a VPN to get a static IP on my dynamic-IP Internet connection at home).
  5. Save the changes

Enable API Connections

TIP: Lock down the account to have the minimum permissions required.

e.g

  • Disable access to the control panel (Untick).
  • Allow API Connections (Tick) and specify an IP
  • Disable access to billing contact (Untick).
  • Disable access to billing section in the control panel (Untick).
  • Disable allowing of emails to billing contact (Untick).
  • Allow or Remove access to all servers (or manually add access to desired servers)
  • Allow or Remove access to modify storage (or manually allow or remove access to desired storage)
  • etc

Lock down the account to the minimum needed

Save the account.

Now let’s make our first API call

I use OSX and the awesome Paw API testing tool from https://paw.cloud (this is not a plug, they are awesome). Postman is a popular API testing tool too. Any good programming language or CLI will allow you to send API requests.

First, let's prepare the authorization string (this is a Base64-encoded combination of your username and password); read more here.

  1. Head over to https://www.base64encode.org/
  2. Click the Encode tab
  3. Add your “username:password” (without the quotes).
  4. Click Encode

A Base64 string will be outputted 🙂

e.g > eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk

fyi

You can also encode and decode Base64 from the Ubuntu command line

Encode Base64 from the CLI Sample

echo -n 'yourapiusername:yoursupersecurepassword' | base64
eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk

Decode Base64 from the CLI Sample

echo `echo eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk | base64 --decode`
yourapiusername:yoursupersecurepassword

Now we can add an “Authorization Basic” token to the API request in Paw.

Authorization Header added with my base64 token.

A quick test of the UpCloud Prices API endpoint https://api.upcloud.com/1.2/price reveals the API is working.

Add Authorization Token

I can now see a full breakdown of my service prices in JSON 🙂
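
If you would rather test outside Paw, the same request can be made with curl, which builds the Basic Authorization header for you from the username and password (a sketch using the placeholder credentials from above):

curl -s -u 'yourapiusername:yoursupersecurepassword' https://api.upcloud.com/1.2/price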

Query My Account

OK, let's see how much credit I have left by querying https://api.upcloud.com/1.2/account. I duplicated the item in Paw and changed the URL to https://api.upcloud.com/1.2/account, but at first no data was returned.

I had to enable “Access to Billing section in Control Panel” for the user before this data was returned from the API (which makes sense).

> HTTP/1.1 200 OK

Query (GET)

GET /1.2/account HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic *******************************************

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:23:32 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 91
Server: Apache

{
   "account" : {
      "credits" : 2500.00,
      "username" : "yourapiusername"
   }
}

“2500.00” = cents ($25)

Query All of Your Servers

Ok, Let’s get server information by querying https://api.upcloud.com/1.2/server

Query (GET)

GET /1.2/server HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:32:22 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 1154
Server: Apache

{
   "servers" : {
      "server" : [
         {
            "core_number" : "1",
            "hostname" : "server1nameredacted.com",
            "license" : 0,
            "memory_amount" : "2048",
            "plan" : "1xCPU-2GB",
            "plan_ipv4_bytes" : "3472464313",
            "plan_ipv6_bytes" : "166293599",
            "state" : "started",
            "tags" : {
               "tag" : [
                  "tag1"
               ]
            },
            "title" : "server1nameredacted.com",
            "uuid" : "########-####-####-####-############",
            "zone" : "us-chi1"
         },
         {
            "core_number" : "1",
            "hostname" : "server2nameredacted.com",
            "license" : 0,
            "memory_amount" : "1024",
            "plan" : "1xCPU-1GB",
            "plan_ipv4_bytes" : "198911",
            "plan_ipv6_bytes" : "19742",
            "state" : "started",
            "tags" : {
               "tag" : [
                  "tag2"
               ]
            },
            "title" : "server1nameredacted.com",
            "uuid" : "########-####-####-####-############",
            "zone" : "us-chi1"
         }
      ]
   }
}

Query Server Information

I have redacted the UUIDs for my servers, but once you know them you can query a server by hitting https://api.upcloud.com/1.2/server/########-####-####-####-############

Query (GET)

GET /1.2/server/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:45:14 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 1656
Server: Apache

{
   "server" : {
      "boot_order" : "cdrom,disk",
      "core_number" : "1",
      "firewall" : "on",
      "host" : redacted,
      "hostname" : "server1nameredacted.com",
      "ip_addresses" : {
         "ip_address" : [
            {
               "access" : "private",
               "address" : "##.#.#.###",
               "family" : "IPv4"
            },
            {
               "access" : "public",
               "address" : "###.###.###.###",
               "family" : "IPv4",
               "part_of_plan" : "yes"
            },
            {
               "access" : "public",
               "address" : "####:####:####:####:####:####:########",
               "family" : "IPv6"
            }
         ]
      },
      "license" : 0,
      "memory_amount" : "2048",
      "nic_model" : "virtio",
      "plan" : "1xCPU-2GB",
      "plan_ipv4_bytes" : "3519033266",
      "plan_ipv6_bytes" : "168200052",
      "state" : "started",
      "storage_devices" : {
         "storage_device" : [
            {
               "address" : "virtio:0",
               "boot_disk" : "0",
               "part_of_plan" : "yes",
               "storage" : "########-####-####-####-############",
               "storage_size" : 50,
               "storage_title" : "system",
               "type" : "disk"
            }
         ]
      },
      "tags" : {
         "tag" : [
            "fearby"
         ]
      },
      "timezone" : "Australia/Sydney",
      "title" : "server1nameredacted.com",
      "uuid" : "########-####-####-####-############",
      "video_model" : "cirrus",
      "vnc" : "off",
      "vnc_password" : "#########################",
      "zone" : "us-chi1"
   }
}

The server's name, IPv4 and IPv6 addresses, CPU(s), memory, disk size and UUIDs are all visible 🙂

Surprisingly the VNC password is visible (enabling access to the root console).

TIP: Ensure your API account is safe and secure.

Query Storage Information

Now, Let’s query the storage with the GUID from above by querying https://api.upcloud.com/1.2/storage/########-####-####-####-############

Query (GET)

GET /1.2/storage/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic  ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:53:36 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 559
Server: Apache

{
   "storage" : {
      "access" : "private",
      "backup_rule" : {},
      "backups" : {
         "backup" : [
            "########-####-####-####-############"
         ]
      },
      "license" : 0,
      "part_of_plan" : "yes",
      "servers" : {
         "server" : [
            "########-####-####-####-############"
         ]
      },
      "size" : 50,
      "state" : "online",
      "tier" : "maxiops",
      "title" : "system",
      "type" : "normal",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

I can see information about the storage’s assigned server and backups 🙂

Query Backup Information

Backup storage can be queried with the same storage API endpoint https://api.upcloud.com/1.2/storage/########-####-####-####-############

Query (GET)

GET /1.2/storage/014fd483-ea90-4055-b445-bf2011951999 HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 05:01:11 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 412
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-16T04:47:56Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "online",
      "title" : "On-Demand Backup",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Rename Backup

One thing that I would like to be able to do is rename on-demand backups in the UpCloud dashboard (this is not a feature yet), but I can rename manual backups via the API 🙂

Boring “On-Demand Backup” label.

Rename Backups Not possible in the GUI

I tried sending JSON to https://api.upcloud.com/1.2/storage/########-####-####-####-############ to rename a backup but kept getting an error.

JSON

{
  "storage": {
    "title": "Latest manual backup , Working NGINX, PHP, MySQL w Tweaks",
    "size": "50"
  }
}

Result

"error_code" : "CONTENT_TYPE_INVALID",
"error_message" : "The Content-Type header has an invalid value."

I googled and found an old manual for UpCloud's API (official support here).

I added these missing content-type headers (108 was the length in chars of the payload)

Content-Type: application/json; Charset=UTF-8
Content-Length: 108

Still no go?

At first I thought the Content-Length value was wrong (more here).

I fixed it; it turned out the charset parameter (after the semicolon) in the Content-Type value was the problem. The JSON RFC assumes JSON is UTF-8 encoded, so no charset parameter is needed (more here).

This Fails

Content-Type: application/json; charset=utf-8

This Works

Content-Type: application/json

Now I can rename my Backup (storage). I manually calculated the length of the JSON payload and added a “Content-Length” header and value.
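
If you do not want to count characters by hand, the payload length can be calculated on the command line (a sketch using the same JSON body as the request below):

echo -n '{"storage":{"size":"50","title":"Latest manual backup , Working NGINX, PHP, MySQL w Tweaks"}}' | wc -c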

Query (PUT)

PUT /1.2/storage/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Content-Type: application/json
Content-Length: 113
Authorization: Basic ##############base64hash##############

{"storage":{"size":"50","title":"Latest manual backup , Working NGINX, PHP, MySQL w Tweaks"}}

Output

HTTP/1.1 202 ACCEPTED
Date: Sun, 17 Jun 2018 05:47:02 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 453
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-16T04:47:56Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "online",
      "title" : "Latest manual backup , Working NGINX, PHP, MySQL w Tweaks",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Success 🙂

Backup Renamed

Create a Backup

Backups can be performed by appending “/backup” to the storage endpoint URL.

Query (POST)

POST /1.2/storage/########-####-####-####-############/backup HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Content-Type: application/json
Content-Length: 100
Authorization: Basic ##############base64hash##############

{
  "storage": {
    "title": "Sunday 17th Latest backup , Working NGINX, PHP, MySQL w Tweaks"
  }
}

Output

HTTP/1.1 201 CREATED
Date: Sun, 17 Jun 2018 06:17:35 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 487
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-17T06:17:35Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "progress" : "0",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "maintenance",
      "title" : "Sunday 17th Latest backup , Working NGINX, PHP, MySQL w Tweaks",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Success (UpCloud GUI)

Conclusion

UpCloud does have great API docs.

I can easily integrate this into bash scripts to manage my servers via API and a future Java app for managing servers.

Paw can also give cURL output, allowing me to copy working API calls for use in bash 🙂
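
As a rough sketch of what such a bash script could look like, here is a hedged example that triggers an on-demand backup via curl (the credentials, storage UUID and title are placeholders; the endpoint is the same POST /1.2/storage/UUID/backup used above):

#!/bin/bash
# Placeholder values - substitute your own API sub-account and storage UUID
API_USER='yourapiusername:yoursupersecurepassword'
STORAGE_UUID='########-####-####-####-############'

curl -s -u "$API_USER" \
  -H 'Content-Type: application/json' \
  -d '{"storage":{"title":"Scripted on-demand backup"}}' \
  "https://api.upcloud.com/1.2/storage/$STORAGE_UUID/backup"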

More to come

  1. BASH Script to Deploy and configure a server on UpCloud via Initialization scripts (or manual) (1 week)
  2. JAVA App to manage your server (3 months)

If you are signing up for UpCloud please consider using my referral code and get $25 credit for free.

Read my setup guide here.

https://www.upcloud.com/register/?promo=D84793

I hope this guide helps someone.

Ask a question or recommend an article


Revision History

V1.1 updated typo

v1.0 Initial Post.

Filed Under: API, Backup, Cloud, Linux, Networking, Restore, UpCloud, VM Tagged With: api, How, Manage, servers, the, to, UpCloud, use, your

Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 4 of 4

June 5, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 4 of 4

Read Part 1, Part 2, Part 3 or Part 4

I ran the MySQL benchmark preparation command again (no problem this time).

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=###################### prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Creating table 'sbtest'...
Creating 1000000 records in table 'sbtest'...

Test table and records created

Test Records Created

Now I can benchmark MySQL on my main server.

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=################################# --max-time=60 --oltp-read-only=on --max-requests=0 --num-threads=8 run

RAW Output

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8

Doing OLTP test.
Running mixed OLTP test
Doing read-only test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 7 times)
Done.

OLTP test statistics:
    queries performed:
        read:                            336210
        write:                           0
        other:                           48030
        total:                           384240
    transactions:                        24015  (400.09 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 336210 (5601.24 per sec.)
    other operations:                    48030  (800.18 per sec.)

Test execution summary:
    total time:                          60.0242s
    total number of events:              24015
    total time taken by event execution: 480.0242
    per-request statistics:
         min:                                  1.79ms
         avg:                                 19.99ms
         max:                                141.00ms
         approx.  95 percentile:              37.49ms

Threads fairness:
    events (avg/stddev):           3001.8750/27.36
    execution time (avg/stddev):   60.0030/0.01

Results

queries performed (in 60 seconds):

  • read: 336210
  • other: 48030
  • total: 384240

I decided to add an index to see if I could speed this query up (read the MySQL index page here). I added an index (in Adminer) on the columns “Id” and “pad” of the sbtest table in the test database.
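
The same index can also be added from the command line instead of Adminer; a sketch (idx_id_pad is just a name I chose):

mysql -u root -p -e "ALTER TABLE test.sbtest ADD INDEX idx_id_pad (id, pad);"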

I restarted the MySQL process

sudo service mysql restart
[ ok ] Restarting mysql (via systemctl): mysql.service.

I ran the same benchmark again.

Raw Output

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=########################## --max-time=60 --oltp-read-only=on --max-requests=0 --num-threads=8 run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8

Doing OLTP test.
Running mixed OLTP test
Doing read-only test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 7 times)
Done.

OLTP test statistics:
    queries performed:
        read:                            426538
        write:                           0
        other:                           60934
        total:                           487472
    transactions:                        30467  (507.69 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 426538 (7107.67 per sec.)
    other operations:                    60934  (1015.38 per sec.)

Test execution summary:
    total time:                          60.0110s
    total number of events:              30467
    total time taken by event execution: 479.9124
    per-request statistics:
         min:                                  5.75ms
         avg:                                 15.75ms
         max:                                138.57ms
         approx.  95 percentile:              25.10ms

Threads fairness:
    events (avg/stddev):           3808.3750/8.70
    execution time (avg/stddev):   59.9891/0.00

Results

The quick index added roughly 25% extra throughput on queries (about 508 vs 400 transactions per second) 🙂

Mysql before and after an index

Don’t forget to delete your test database

DROP DATABASE `test`;

Viewing MySQL Index Usage (on the “test” database)

Query to show Index stats for a table ‘test’

SELECT
 OBJECT_SCHEMA as 'Database', OBJECT_NAME as 'Table', 
 INDEX_NAME as 'Index', 
 COUNT_STAR, 
 SUM_TIMER_WAIT,  MIN_TIMER_WAIT, AVG_TIMER_WAIT, MAX_TIMER_WAIT, 
 COUNT_READ, 
 SUM_TIMER_READ, MIN_TIMER_READ, AVG_TIMER_READ, MAX_TIMER_READ,  
 COUNT_FETCH, SUM_TIMER_FETCH, MIN_TIMER_FETCH, AVG_TIMER_FETCH, MAX_TIMER_FETCH
FROM 
 performance_schema.table_io_waits_summary_by_index_usage
WHERE 
 object_schema = 'test';
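
Note that this query relies on the MySQL performance_schema being enabled (it is on by default in MySQL 5.6 and later). You can confirm it with:

mysql -u root -p -e "SHOW VARIABLES LIKE 'performance_schema';"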

I can see the MySQL PRIMARY index is getting used 🙂

Index Summary

Read more about the viewable query stats (columns) here.

Other System Information Tools

Show processor information

cat /proc/cpuinfo

Output

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 61
model name      : Virtual CPU a7769a6388d5
stepping        : 2
microcode       : 0x1
cpu MHz         : 2394.454
cache size      : 16384 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single kaiser fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass
bogomips        : 4788.90
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
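
If you want a condensed summary of the same CPU information, lscpu (part of util-linux on Ubuntu) is also available:

lscpu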

Memory Information

You can assign 512MB, 1GB, 2GB or more memory to a server on Vultr. Read my guide on upgrading resources for Vultr VMs here.

Only upgrade your server’s memory when server processes demand it; there is no need to pay for extra idle memory. Read my older guides on upgrading Digital Ocean and AWS servers.

I use the htop utility to monitor memory and processes. Memory usage will depend on how you have configured your server to use connection pools in code, MySQL or other services, and on the memory demands at peak traffic times.
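
If htop is not already installed you can add it with apt (assuming Ubuntu/Debian):

sudo apt-get install htop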

HTOP

You can check your server memory details on Ubuntu with this command

cat /proc/meminfo

Output

MemTotal:        2048104 kB
MemFree:           96176 kB
MemAvailable:     693072 kB
Buffers:          183476 kB
Cached:           526124 kB
SwapCached:            0 kB
Active:          1467220 kB
Inactive:         243228 kB
Active(anon):    1070464 kB
Inactive(anon):    27004 kB
Active(file):     396756 kB
Inactive(file):   216224 kB
Unevictable:        3652 kB
Mlocked:            3652 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                64 kB
Writeback:             0 kB
AnonPages:       1004504 kB
Mapped:           114664 kB
Shmem:             94192 kB
Slab:             192692 kB
SReclaimable:     171892 kB
SUnreclaim:        20800 kB
KernelStack:        3072 kB
PageTables:        20528 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1024052 kB
Committed_AS:    2424332 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:    247808 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       67440 kB
DirectMap2M:     2029568 kB
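
For a shorter summary of total, used and available memory (in megabytes) you can also run:

free -m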

Use Memory or Disk (Swap)

You can configure the preference for memory over disk (swap) by editing your /etc/sysctl.conf file (setting the “vm.swappiness” value)

You can check your swap file settings by running the following command

cat /proc/sys/vm/swappiness
1

Or by running

sysctl vm.swappiness
vm.swappiness = 1

Set a new swappiness value by editing /etc/sysctl.conf

sudo nano /etc/sysctl.conf

Set the following to prefer RAM over the swap disk (lower values mean the kernel swaps less).

vm.swappiness = 1
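
To apply the value from /etc/sysctl.conf without rebooting, reload the settings with:

sudo sysctl -p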

Read about swappiness values here: https://en.wikipedia.org/wiki/Swappiness

Service Performance

Performance (and the resources you need to allocate) depends on the demands of your operating system and installed software.

What operating system do you have?

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.4 LTS
Release:        16.04
Codename:       xenial

View the NGINX status: how much memory does it use?

/etc/init.d/nginx status
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-05-25 21:28:25 AEST; 1 weeks 3 days ago
     Docs: man:nginx(8)
 Main PID: #### (nginx)
    Tasks: 3
   Memory: 58.9M
      CPU: 33min 11.515s
   CGroup: /system.slice/nginx.service
           ├─#### nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
           ├─#### nginx: worker process
           └─#### nginx: cache manager process
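
On systemd-based systems (like Ubuntu 16.04) the same information is also available via:

sudo systemctl status nginx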

PHP (and child worker) status: how much memory does it use and how many child workers do you have? Read my post on adding PHP child workers here (and on updating to PHP 7.2 here).

sudo service php7.2-fpm status
● php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
   Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-05-25 21:28:26 AEST; 1 weeks 3 days ago
     Docs: man:php-fpm7.2(8)
 Main PID: #### (php-fpm7.2)
   Status: "Processes active: 0, idle: 20, Requests: 75911, slow: 0, Traffic: 0.1req/sec"
    Tasks: 21
   Memory: 694.2M
      CPU: 20h 49min 45.132s
   CGroup: /system.slice/php7.2-fpm.service
           ├─ #### php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
           ├─ #### php-fpm: pool www-acc
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-acc
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           └─ #### php-fpm: pool www-usr

MySQL Status

sudo service mysql status
● mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-05-25 21:28:27 AEST; 1 weeks 3 days ago
 Main PID: ##### (mysqld)
    Tasks: 35
   Memory: 405.9M
      CPU: 2h 17min 31.822s
   CGroup: /system.slice/mysql.service
           └─#### /usr/sbin/mysqld

Shared VM Hosts

One of the biggest impacts on your server (after server latency) is not raw disk performance but the number of other hosts/websites on the same physical server that are also using the disk and server resources.

Reverse IP Lookup

There are 80 other websites on my server (based on a reverse IP lookup).
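
You can check your server’s reverse DNS (PTR) record from the command line with dig (203.0.113.10 is a placeholder IP; listing every co-hosted site usually requires an online reverse IP lookup service):

dig -x 203.0.113.10 +short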

I may move to a dedicated box when I can afford it.

Security

Above all else, make security the number one priority and performance the second.

Scan your site with Zap, Qualys and Kali Linux. Performance means nothing if you are hacked.

website-report

Simulated concurrent users

You can use Siege to test the maximum concurrent users accessing your site before the server starts to drop connections.

FYI: If you use Cloudflare (you should) this may not work as it will block connections.

Install Siege

sudo apt-get install siege

Test your server with 10 concurrent users for 1 minute

siege -t1m -c10 'https://yourserver.com/'

Results

siege -t1m c10 'https://yourserver.com/'
** SIEGE 3.0.8
** Preparing 15 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                    417 hits
Availability:                 100.00 %
Elapsed time:                  59.01 secs
Data transferred:               8.24 MB
Response time:                  1.62 secs
Transaction rate:               7.07 trans/sec
Throughput:                     0.14 MB/sec
Concurrency:                   11.46
Successful transactions:         417
Failed transactions:               0
Longest transaction:            2.26
Shortest transaction:           1.49

Keep increasing the number of concurrent users (from 10 above) until connections start dropping.

I tried 25 then 50 concurrent users hitting a server on Digital Ocean and it did not fail.
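
For example, to hit the site with 50 concurrent users for one minute (yourserver.com is a placeholder):

siege -t1m -c50 'https://yourserver.com/'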

Conclusion

  • Choose a server near your customers
  • Change hosts if one is faster and cheaper
  • Measure or benchmark your server (and compare over time).
  • Use Cloudflare

Create your own server today

  • Create your own server on Vultr here.
  • Create your own server on Digital Ocean here.
  • Create your own server on UpCloud here.

And remember you can install the Runcloud server management dashboard here.

I hope this guide helps someone.


Read Part 1, Part 2, Part 3 or Part 4

Filed Under: Cloud, Digital Ocean, disk, Domain, Linux, NGINX, Performance, PHP, php72, Scalability, Scalable, Speed, Storage, Ubuntu, UpCloud, Vultr, Wordpress Tagged With: and, can, comparing, Concurrent Users etc, cpu, digital ocean, Disk, How, Latency, measure, on, Performance, ubuntu, UpCloud - Part 4 of 4, vm, vultr, you

Moving an Ubuntu 16.04 VM on Vultr from one data centre to another via snapshots

April 17, 2018 by Simon

This guide will show how you can move an Ubuntu VM server domain between Vultr data centres via snapshots.

I have a number of guides on moving away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, along with installing and managing WordPress from the command line. Sometimes you need to move a server between locations and/or upgrade the server (to have more memory to install WordPress).

Moving an existing Vultr server

If you don’t have an Ubuntu server click here (follow this guide).

Log in to Vultr, select the source server, click Snapshots and then click Take Snapshot.

Make snapshot

Wait for the snapshot to finish (it may take an hour).

Snapshot Started

Great, the snapshot is done.

Snapshot Ready

Now I can create a new server (in a different data centre).

Add

Deploy New Instance

Choose a location (Australia is at capacity, so I’ll deploy to Silicon Valley and move again in a few weeks), choose the snapshot to restore and choose a plan. I enabled IPV6, Auto Backups and Private Networking.

TIP: The password for the server will be the same as the source server so write it down.

Deploy

Click Deploy Now

Deploy

After a few minutes you can see the new server’s IP address. You can then log in to your domain name provider (in my case Namecheap) and update the target IPV4 and IPV6 addresses.

You can find the IPV4 and IPV6 addresses by opening your server and clicking Settings, then IPV4 or IPV6.

ip

You will need to update the Vultr DNS settings (log in to Vultr, click Servers, click DNS, then edit your existing domain DNS entry). Add your new server’s IP addresses.

Vultr DNS

Update: I added an IPV6/AAAA record too.

Wait for DNS Replication

Go to https://www.whatsmydns.net/ and check the global DNS propagation for your domain’s new server.
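
You can also query a specific public resolver from the command line (yourdomain.com is a placeholder):

dig A yourdomain.com @8.8.8.8 +short
dig AAAA yourdomain.com @8.8.8.8 +short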

DNS Propagation

If you are happy that the server has been migrated (snapshot restored) and that the domain DNS is pointing to your new server you can delete the old server in the Vultr server list.

Servers

Post-Migrate Actions

  • Setup Daily backups.
  • Review firewall settings (guide here).
  • Optional: Install MySQL
  • Optional: Install PHP
  • Optional: Install PHP Pooled Connections
  • Optional: Install WordPress
  • Optional: Install WordPress CDN
  • Optional: Configure Cloudflare
  • etc

I hope this guide helps someone.


Revision History

v1.1 Vultr Link

v1.0 Initial post

Filed Under: Linux, Migrate, Server, Ubuntu, Vultr, Wordpress Tagged With: 16.04, an, another, center, data, from, Moving, New Jersey, on, snapshot, sydney, to, ubuntu, vm, vultr

Using the Qualys FreeScan Scanner to test your website for online vulnerabilities

March 23, 2018 by Simon

It is possible to deploy a server in minutes to hours but it can take days to secure.  What tools can you use to help identify what to secure on your website?

I have a number of guides on moving hosting away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, along with installing and managing WordPress from the command line, installing a free SSL certificate and setting up SSL security.

Security Tools

  • https://asafaweb.com/ is a good tool for quick scanning
  • Kali Linux has a number of security tools you can use.
  • You can run a system audit with Lynis.
  • Checking your site for vulnerabilities with Zap.
  • Run a Gravity Scan malware and supply chain scan
  • Use Qualys SSL scan to test your SSL certificate: https://www.ssllabs.com/ssltest/

Qualys

Qualys SSL Labs SSL Tester is the best tool for checking an SSL certificate’s strength.

Most people don’t know Qualys also has another free (limited to 10 scans) vulnerability scanner for websites.

Go to https://freescan.qualys.com/ and click Start your free account.

Complete the signup form

Now check your email to log in and confirm your email account.

Log in from the link in the email.

Create a password (why the 25 char max Qualys?)

Enter your website URL and click Scan

The scan can take hours

While the scan was being performed I noticed that Qualys offers alerts (I’ll check this out later): https://www.qualys.com/research/security-alerts/

Yes, the scan can take hours, take a walk or read other posts here.

The scan is almost complete

Yay, my latest scan revealed 0 High, 0 Medium and 0 Low-risk vulnerabilities.

It did report 23 informational alerts like “Firewall Detected“.

Threat Report Results

Patch Report Results

This report was empty (probably because I don’t run Windows)

Threat Report Results

The OWASP report contained partial scan results (maybe the full report is available to pro users)

Previous Scan Results

The Qualys dashboard will show all past scans.

My first scan showed a low-priority issue with the /wp-login.php page as the input fields did not have autocomplete=”off”. I fixed this by adding autocomplete=”off” and then removing the page (safer).

The second scan found two issues with cookies (possibly ad banner cookies) and 2 subfolders that I created in past development exercises. I deleted the two sub-folders that were not needed.

The third scan was clean.

Here is a scan of a static website on a friend’s server (static sites can still be insecure if the server underneath is old or unpatched).

Static Website

Happy scanning. I hope this guide helps someone.


Revision History

v1.1 Static Web Server Scan

v1.0 Initial post

Filed Under: Firewall, LetsEncrypt, Linux, Malware, Security, Server, Ubuntu, Vulnerabilities, Vulnerability, WP Security Tagged With: for, FreeScan, online, Qualys, Scanner, test, the, to, Using, Vulnerabilities, website, your

Setting up the Debian Kali Linux distro to perform penetration testing of your systems

March 7, 2018 by Simon

This post will show you how to set up the Kali Linux distro to perform penetration testing of your systems.

I have a number of guides on moving hosting away from CPanel, setting up VMs on AWS, Vultr or Digital Ocean, along with installing and managing WordPress from the command line. Securing your systems is very important, so don’t stop learning (securing Ubuntu in the cloud, a securing checklist, running a Lynis system audit etc.).

snip from: https://www.kali.org/about-us/

“Kali Linux is an open source project that is maintained and funded by Offensive Security, a provider of world-class information security training and penetration testing services. In addition to Kali Linux, Offensive Security also maintains the Exploit Database and the free online course, Metasploit Unleashed.”

Download Kali

I downloaded the torrent version, as the HTTP download kept stopping (even on 50/20 NBN).

Download Kali

After the download finished I checked the SHA-256 sum to verify its integrity.

cd /Users/username/Downloads/kali-linux-2018.1-amd64
shasum -a 256 ./kali-linux-2018.1-amd64.iso 
ed88466834ceeba65f426235ec191fb3580f71d50364ac5131daec1bf976b317  ./kali-linux-2018.1-amd64.iso

At least it matched the known (or hacked?) hash here.

Installing Kali in a Parallels VM on OSX

I used Parallels 11 on OSX to set up a VM for Debian-based Kali; you can use VirtualBox, VMWare etc.

VM Setup in Parallels

Hardware: 2x CPU, 2048MB Ram, 32MB Graphics, 64GB Disk.

I selected Graphical Install (English, Australia, American English, host: kali, network: hyrule, New South Wales, Partition: Guided – entire disk, Default, Default, Default, Continue, Yes, Network Mirror: Yes, No Proxy) and installed the GRUB bootloader on the VM HD.

Post Install

Install Parallel Tools

Official Guide: https://kb.parallels.com/en/123968

I opened the VM, selected Actions then Install Parallels Tools; this mounted /media/cdrom/ and I copied all of its contents to /temp/.

As recommended by the Parallels install bash script, I updated the kernel headers.

apt install linux-headers-4.14.0-kali1-amd64

Then the following from https://kb.parallels.com/en/123968

apt-get clean
apt-get update
apt-get upgrade -y
apt-get dist-upgrade -y
apt-get install dkms kpartx printer-driver-postscript-hp

Parallels Tools will not install; I think I need to upgrade to Parallels 12 or later, as the printer driver detection fails (even though the driver is installed).

Installing Google Chrome

I used the video below

I have to run Chrome with

/usr/bin/google-chrome-stable %U --no-sandbox --user-data-dir &

It works.

Chrome

Running your first remote vulnerability scan in Kali

I found this video useful in helping me scan and check my systems for exploits

Simple exploit search in Armitage (metasploit)

Armitage Scan

A quick scan of my server revealed three open ports (22, 80 and 443). Port 80 redirects to 443 and port 22 is firewalled. I have WordPress, and the exploits I tried failed to work thanks to patching (always stay ahead of patching and updating the software and the OS).

k006-ports

Without knowing what I was doing I was able to check my WordPress against known exploits. 

If you open the Check Exploits menu at the end of the Attacks menu you can do a bulk exploit check.

kali_bulk

WP Scan

Kali also comes with a WordPress scanner

wpscan --url https://fearby.com

This will try to enumerate everything it can about your web server and WordPress plugins.

/xmlrpc.php was found and I was advised to deny access to that file in NGINX. xmlrpc.php is legitimate but can be abused in denial of service attacks.

location = /xmlrpc.php {
	deny all;
	access_log off;
	log_not_found off;
}
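
After adding the block, test the configuration and reload NGINX, then confirm the file is blocked (the URL is a placeholder; with the deny rule in place the request should return a 403):

sudo nginx -t
sudo service nginx reload
curl -I https://yourserver.com/xmlrpc.php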

I had a hit for a vulnerability in a Youtube Embed plugin but I had a patched version.

k007-wpscan

TIP: Check your WordPress often.

More to come (Draft Post).

  • OWASP scanner
  • WPSCAN
  • Ethical Hacker modules
  • Cybrary training
  • Send tips to @FearbySoftware

Tips

Don’t leave unwanted ports open, install software securely, use unattended security updates in Ubuntu, update WordPress frequently, limit plugins and consider running more verbose audit tools like Lynis.

More Reading

Read my OWASP Zap guide on application testing and Cloudflare guide.

I hope this guide helps someone.


Revision History

v1.2 added More Reading links.

v1.1 Added bulk exploit check.

v1.0 Initial post

Filed Under: Exploit, Linux, Malware, Security, Server, SSH, Vulnerability Tagged With: debian, distro, Kali, Linux, of, penetration, perform, Setting, systems, testing, the, to, up, your
