
HomePi – Raspberry PI powered touch screen showing information from house-wide sensors

March 14, 2022 by Simon

This post is a work in progress (14/3/2022, v0.9.63 – PCBs v0.2 designed and ordered).

Summary

After watching this video from Jeff Geerling (demonstrating how to build an Air Quality Sensor) I decided to make two. But why not build something bigger?

I want to make a Raspberry Pi server with a touch screen that receives data from a dozen other WeMos sensors that I will build.

The Plan

Below is a rough plan of what I am building.

In a nutshell, it will be 20x WeMos sensors recording data.

Diagram: 20x WeMos sensors, a weather station and CO2 sensors talking to an API that saves to MySQL; MySQL is then read by a webpage and the touch screen panel.

I ordered all the parts from Amazon, BangGood, AliExpress, eBay, Core Electronics and Kogan.

Fresh Bullseye Install (Buster upgrade failed)

On 21/11/2021 I tried to manually update Buster to Bullseye (without taking a backup first – bad idea). I followed this guide to reinstall Raspbian from scratch (this time with Bullseye).

Storage Type

Before I begin I need to decide on what storage media to use on the Raspberry Pi. I hate how unreliable and slow MicroSD cards are. I tried using an old 128GB SATA SSD, a 1TB magnetic hard drive, a SATA M.2 SSD and an NVMe M.2 in a USB caddy.

I decided to use a spare 250GB SATA M.2 solid state drive from my son’s PC in a Geekworm X862 SATA M.2 expansion board.

With this board I can bolt the M.2 solid state drive into the expansion board under the Pi and power it from a Raspberry Pi USB port.

Nice and tidy

I zip-tied a fan to the side of the boards to add a little extra airflow over the solid state drive

32Bit, 64Bit, Linux or Windows

Before I begin, I set up Raspbian on an empty MicroSD card (just to boot it up and flash the firmware to the latest version). This is very easy and documented elsewhere. I needed the latest firmware to ensure booting from the USB drive (not the MicroSD card) was working.

I ran rpi-update and flashed the latest firmware onto my Raspberry Pi. Good write-up here.

When my Raspberry Pi had the latest firmware I used the Raspberry Pi Imager to install the 32-bit Raspberry Pi OS.

I do have an 8GB Raspberry Pi 4 B, and 64-bit operating systems do exist, but I stuck with 32-bit for compatibility.

Ubuntu 64bit for Raspberry Pi Links

  • Install Ubuntu on a Raspberry Pi | Ubuntu
    • Server Setup Links
      • How to install Ubuntu Server on your Raspberry Pi | Ubuntu
    • Desktop Setup Links
      • How to install Ubuntu Desktop on Raspberry Pi 4 | Ubuntu

Windows 10 for Raspberry Pi Links
https://docs.microsoft.com/en-us/windows/iot-core/tutorials/quickstarter/prototypeboards
https://docs.microsoft.com/en-us/answers/questions/492917/how-to-install-windows-10-iot-core-on-raspberry-pi.html
https://docs.microsoft.com/en-us/windows/iot/iot-enterprise/getting_started

Windows 11 for Raspberry Pi Links
https://www.youtube.com/user/leepspvideo
https://www.youtube.com/watch?v=WqFr56oohCE
https://www.worproject.ml

Setting up the Raspberry Pi Server

These are the steps I used to set up my Pi.

Dedicated IP

Before I began, I ran ifconfig on my Pi to obtain the Raspberry Pi’s wireless card MAC address. I logged into my router and set up a DHCP reservation (192.168.0.50) so that the Pi’s IP address always remains the same.

Hostname

I set my hostname in these files

sudo nano /etc/hosts
sudo nano /etc/hostname

I verified my hostname with this command

hostname

I verified my IP address with this command

hostname -I

Samba Share

I set up the Samba service to allow me to copy files to and from the Pi

sudo apt-get install samba samba-common-bin
sudo apt-get update

I made a folder to share files

 mkdir ~/share

I edited the Samba config file

sudo nano /etc/samba/smb.conf

In the config file I set my workgroup settings


workgroup = Hyrule
wins support = yes

I defined a share at the bottom of the config file (and saved)

[PiShare]
comment=Raspberry Pi Share
path=/home/pi/share
browseable=Yes
writeable=Yes
only guest=no
create mask=0777
directory mask=0777
public=no

I set a smb password

sudo smbpasswd -a pi
New SMB password: ********
Retype new SMB password: ********

I tested the share from a Windows PC

And the share is accessible on the Raspberry Pi

Great, now I can share files with drag and drop (instead of via SCP)

Mono

I know how to code C# Windows executables (I have 25 years’ experience). I do not want to learn Java or Python to code a GUI application for a touch screen if I can avoid it.

I set up Mono from Home | Mono (mono-project.com) to be able to run Windows C# EXEs on Raspbian

sudo apt install apt-transport-https dirmngr gnupg ca-certificates

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb https://download.mono-project.com/repo/debian stable-raspbianbuster main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list

sudo apt update

sudo apt install mono-devel

I copied an EXE I wrote in C# on Windows and ran it with Mono

sudo mono ~/HelloWorld.exe
Exe Test OK

This worked.

Nginx Web Server

I Installed NginX and configured it

sudo apt-get install nginx

I created a /www folder for nginx

sudo mkdir /www

I created a place-holder file in the www root

sudo nano /www/index.html

I set permissions to allow Nginx to access /www

sudo chown -R www-data:www-data /www

I edited the NginX config as required

sudo nano /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/nginx.conf 

I tested and reloaded the nginx config

sudo nginx -t
sudo nginx -s reload

I started NginX

sudo systemctl start nginx

I tested nginx in a web browser

NodeJS/NPM

I installed NodeJS

sudo apt update
sudo apt install nodejs npm -y

I verified Node was installed

nodejs --version
> v12.22.5

PHP

I installed PHP

sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg

echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update

sudo apt install -y php8.0-common php8.0-cli php8.0-xml

I verified PHP with this command

php --version

> PHP 8.0.13 (cli) (built: Nov 19 2021 06:40:53) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.13, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.13, Copyright (c), by Zend Technologies

I installed PHP-FPM

sudo apt-get install php8.0-fpm

I verified the PHP FPM sock was available before adding it to the NGINX Config

sudo ls /var/run/php*/**.sock
> /var/run/php/php8.0-fpm.sock  /var/run/php/php-fpm.sock

I reviewed PHP Settings

sudo nano /etc/php/8.0/cli/php.ini 
sudo nano /etc/php/8.0/fpm/php.ini

I created a /www/ppp.php file with this contents

<?php
  phpinfo(); // all info
  // module info, phpinfo(8) identical
  phpinfo(INFO_MODULES);
?>

PHP is working

PHP Test OK

I changed these php.ini settings (fine for local development).

max_input_vars = 1000
memory_limit = 1024M
upload_max_filesize = 20M
post_max_size = 20M
display_errors = on

MySQL Database

I installed MariaDB

sudo apt install mariadb-server

I updated my pi password

passwd

I ran the Secure MariaDB Program

sudo mysql_secure_installation

After working through each prompt, I ran the mysql client as root to test MySQL.
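For example, something like:

sudo mysql -u root -p
MariaDB [(none)]> SHOW DATABASES;
MariaDB [(none)]> exit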

PHPMyAdmin

I installed phpMyAdmin to be able to edit MySQL databases via the web.

I followed this guide to set up phpMyAdmin via lighttpd and then via NGINX.

I then logged into MySQL, set user permissions, created a test database and changed settings as required.
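The exact statements will vary, but the sort of MySQL commands involved look like this (the database name, user and password here are placeholders, not the ones I used):

CREATE DATABASE testdb;
CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON testdb.* TO 'webuser'@'localhost';
FLUSH PRIVILEGES;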

NginX to NodeJS API Proxy

I edited my NGINX config to create a Node API proxy.
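I have not included my exact config, but a typical NGINX reverse-proxy location for a NodeJS API listening on a local port looks something like this (the path and port number are assumptions):

location /api/ {
    proxy_http_version 1.1;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    # hand /api/ requests to the Node process listening locally
    proxy_pass http://127.0.0.1:3000;
}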

Test Webpage/API

Todo

I installed PM2, the NodeJS process manager

sudo npm install -g pm2 

Node apps can be started as a service from the CLI

pm2 start api_v1.js

PM2 status

pm2 status

You can delete node apps from PM2 (if desired)

pm2 delete api_v1.js
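To have PM2 bring your apps back up after a reboot, you can also generate a startup script and save the current process list (standard PM2 commands):

pm2 startup
pm2 save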

Sending Email from CLI

I set up sendemail to allow emails to be sent from the CLI with these commands

 sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail  

I logged into my G Suite account and set up an alias and app password to use.

Now I can send emails from the CLI

sudo sendemail -f [email protected] -t [email protected] -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp **************

I added this to a Bash script (“/Scripts/Up.sh”) and added a scheduled event to send an email every 6 hours.
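A sketch of what that script and schedule could look like (the addresses and app password are placeholders, and cron is one way to fire the event every 6 hours):

#!/bin/bash
# /Scripts/Up.sh - send a heartbeat email from the Pi
sendemail -f pi@example.com -t me@example.com \
  -u "PiHome is up" -m "PiHome heartbeat: $(date)" \
  -s smtp.gmail.com:587 -o tls=yes \
  -xu pi@example.com -xp **************

# crontab -e entry to run it every 6 hours:
# 0 */6 * * * /Scripts/Up.sh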

7 Inch Full View LCD IPS Touch Screen 1024*600 

I purchased a 7″ touch screen from Banggood. I got a heads up from the following video.

I plugged the touch USB cable into my Pi’s USB3 port. I plugged the HDMI adapter into the screen and the Pi (with the supplied mini plug).

I turned on the Pi and it works and looks amazing.

This is displaying a demo C# app I wrote. It’s running via mono.

I did have to add the following to config.txt to get native resolution. The manual on the supplied CD was helpful (but I did not check it at first).

max_usb_current=1
hdmi_force_hotplug=1
config_hdmi_boost=7
hdmi_group=2
hdmi_mode=1
hdmi_mode=87 
hdmi_drive=1
display_rotate=0
hdmi_cvt 1024 600 60 6 0 0 0
framebuffer_width=1024
framebuffer_height=600

PiJuice UPS HAT

I purchased an external LiPo UPS to keep the Raspberry Pi fed with power (even when the power goes out)

The stock battery was not charged and was quite weak when I first installed it. Do fully charge the battery before testing.

PiJuice

Stock Battery = 3.7V @ 1820mAh

Below are screenshots of the PiJuice setup.

PiJuice HAT Settings

PiJuice General Settings

General Settings

There is an option to set events for every button

Extensive screen to set button events

LED Status color and function

Set LED status and color

IO for the PiJuice Input. I will sort this out later.

PiJuice IO settings

A new firmware was available. I had v1.4

Update firmware screen

I updated the firmware

Firmware update worked

Firmware flash success

Battery settings

Battery Settings

PiJuice Button Config

Button config

Wake Up Alarm time and RTC

Clock Settings

System Settings

System Settings

System Events

system settings page

User Scripts

Define user scripts

I ordered a bigger battery, as my screen, M.2 drive, fan and UPS consume close to the maximum of the stock battery.

10,000mAh battery

After talking with the seller of the battery, they advised I set up the 10,000mAh battery using the 1,000mAh battery profile in PiJuice but change the capacity and charge current:

  • Capacity = 10,000mAh
  • Charge current = 850mA

And for battery longevity set the:

  • Cutoff voltage: 3,250mV

Final Battery Setup

Battery settings based on the 1,000mAh battery profile: Capacity 10,000mAh, Charge current 850mA and Cutoff 3,250mV.

WeMos Setup

I ordered 20x WeMos D1 Mini Pro (16Mbit) clones to run the sensors. I soldered the legs on in batches of 8.

WeMos installed on breadboards ready to solder pins

Soldering was not perfect but it worked.

20x soldered wemos

Soldering is not perfect but each joint was triple tested.

Close up of soldered joints

I only had one dead WeMos.

I will set up the final units on ProtoBoards.

Protoboard

20x WeMos ready for service, with the external aerial glued down. The hot glue was a bad idea; I had to rotate a resistor under the hot glue.

20x wemos ready.

Revision

I ended up reordering the WeMos Minis and soldering on female headers so I can add OLED screens.

air mon enclosure

I added female headers to allow an OLED screen

new wemos

I purchased a microscope to be able to see better.

microscope

Each sensor will have a mini OLED screen.

mini oled screen

0.66″ OLED Screens

oled screen

I designed a PCB in Photoshop and had it turned into a PCB via https://www.fiverr.com/syedzamin12. I ordered 30x boards from https://jlcpcb.com/.

Custom PCB

The PCBs fit inside the new enclosure perfectly.

I am waiting for smaller screws to arrive.

PCB v0.2

I decided to design a board with 2 switches (and a light sensor to turn the screen off at night)

Breadboard Prototype

Prototype

I spoke to https://www.fiverr.com/syedzamin12 and within 24 hours a PCB was designed

PCB layers

This time I will get a purple PCB from JLCPCB and add a dinosaur for my son

Top PCB View

TOP PCB View

Back PCB View

Back PCB View

3D PCB View

3D PCB view

JLCPCB made the board in 3 days

3 days

Now I need to wait a few weeks for the new PCB to arrive

Also, I finished the firmware for the v0.2 PCB.

I ordered some switches

I also ordered some reset buttons

I might add a larger 0.96″ OLED screen

Wifi and Static IP Test

I uploaded a sketch to each WeMos and tested the WiFi and the static IP that was given.

Sketch

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>


#define SERVER_IP "192.168.0.50"

#ifndef STASSID
#define STASSID "wifi_ssid_name"
#define STAPSK  "************"
#endif

void setup() {

  Serial.begin(115200);

  Serial.println();
  Serial.println();
  Serial.println();

  WiFi.begin(STASSID, STAPSK);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected! IP address: ");
  Serial.println(WiFi.localIP());

}

void loop() {
  // wait for WiFi connection
  if ((WiFi.status() == WL_CONNECTED)) {

    WiFiClient client;
    HTTPClient http;

    Serial.print("[HTTP] begin...\n");
    // configure target server and url
    http.begin(client, "http://" SERVER_IP "/api/v1/test"); //HTTP
    http.addHeader("Content-Type", "application/json");

    Serial.print("[HTTP] POST...\n");
    // start connection and send HTTP header and body
    int httpCode = http.POST("{\"hello\":\"world\"}");

    // httpCode will be negative on error
    if (httpCode > 0) {
      // HTTP header has been send and Server response header has been handled
      Serial.printf("[HTTP] POST... code: %d\n", httpCode);

      // file found at server
      if (httpCode == HTTP_CODE_OK) {
        const String& payload = http.getString();
        Serial.println("received payload:\n<<");
        Serial.println(payload);
        Serial.println(">>");
      }
    } else {
      Serial.printf("[HTTP] POST... failed, error: %s\n", http.errorToString(httpCode).c_str());
    }

    http.end();
  }

  delay(1000);
}

The WeMos booted, connected to WiFi, set an IP, and tried to POST a request to a URL.

........................................................
Connected! IP address: 192.168.0.51
[HTTP] begin...
[HTTP] POST...
[HTTP] POST... failed, error: connection failed

The POST failed because my Pi API server was off.

Touch Screen Enclosure

I constructed a basic enclosure and screwed the touch screen to it. I need to find a flexible black strip to put around the screen and cover up the gaps.

Wooden box with the screen in it

The touch screen has been screwed in.

Screen screwed in

Over the Air Updating

I followed this guide to make the WeMos updatable over WiFi.

Basically, I installed the libraries “AsyncHTTPSRequest_Generic”, “AsyncElegantOTA”, “AsyncHTTPRequest_Generic”, “ESPAsyncTCP” and “ESPAsyncWebServer”.

Manage Libraries

A few libraries would not download so I manually downloaded the code from the GitHub repositories and then extracted them to my Documents\Arduino\libraries folder.

I then opened the example project “AsyncElegantOTA\ESP8266_Async_Demo”

I reviewed the code

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>

const char* ssid = "........";
const char* password = "........";

AsyncWebServer server(80);


void setup(void) {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request) {
    request->send(200, "text/plain", "Hi! I am ESP8266.");
  });

  AsyncElegantOTA.begin(&server);    // Start ElegantOTA
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();
}

I added my WiFi SSID and password, saved the project, compiled the code and wrote it to my WeMos D1 Mini.

I added LED Blink Code

void setup(void) {
  ...
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  ...
}
void loop(void) {
 ...
  delay(1000);                      // Wait for a second
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(1000);                      // Wait another second (the LED is active low)
 ...
}

I compiled and tested the code

Now, to get new code changes to the WeMos Mini via a binary, I edited the code (changed the LED blink speed) and clicked “Export Compiled Binary”

Compile Binary

When the binary compiled I opened the Sketch Folder

Show Sketch folder

I could see a bin file.

Bin File

I loaded http://192.168.0.51/update and selected the bin file.

The new firmware applied.

Flashing

I navigated back to http://192.168.0.51

TIP: Ensure you add the starter sketch that has your wifi details in there.

Password Protection

I changed the code to add a basic password on access and on OTA update

#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include <AsyncElegantOTA.h>


//Saved Wifi Credentials (research encryption later, or store in FRAM module?)
const char* ssid = "your-wifi-ssid";
const char* password = "********";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(80);

void setup(void) {
  Serial.begin(115200);       //Serial Mode (Debug)
    
  WiFi.mode(WIFI_STA);        //Client Mode
  WiFi.begin(ssid, password); //Connect to Wifi
 
  Serial.println("");

  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    request->send(200, "text/plain", "Login Success! ESP8266 #001.");
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);

  
  server.begin();
  Serial.println("HTTP server started");
}

void loop(void) {
  AsyncElegantOTA.loop();

  digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (it is active low)
  delay(8000);                      // Wait 8 seconds
  digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
  delay(8000);                      // Wait 8 seconds

}

Password prompt for users accessing the device.

Login screen

Password prompt for admin users accessing the device.

admin password protect

Later I will research encrypting the password and storing it on SPIFFS partition or a FRAM memory module.

Adding the DHT22 Sensors

I received my pack of DHT22 sensors (AM2302).

Specifications

  • Operating Voltage: 3.5V to 5.5V
  • Operating current: 0.3mA (measuring) 60uA (standby)
  • Output: Serial data
  • Temperature Range: 0°C to 50°C
  • Humidity Range: 20% to 90%
  • Resolution: Temperature and Humidity both are 16-bit
  • Accuracy: ±1°C and ±1%

I wired it up based on this Adafruit post.

DHT22 Wired Up on a breadboard.

DHT22 and Basic API Working

I will not bore you with hours of coding and debugging, so here is my code that:

  • Allows the WeMos D1 Mini Pro (ESP8266) to connect to WiFi
  • Runs a web server (with stats)
  • Serves an admin page for OTA updates
  • Password protects the main web folder and the OTA admin page
  • Reads DHT sensor values
  • Debug-to-serial toggle
  • LED activity toggle
  • JSON serialization
  • POSTs DHT22 data to an API on the Raspberry Pi
  • Placeholder for API return values
  • Automatically posts data to the API every 10 seconds
  • etc

Here is the work-in-progress ESP8266 code

#include <ESP8266WiFi.h>        // https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266WiFi/src/ESP8266WiFi.h
#include <ESPAsyncTCP.h>        // https://github.com/me-no-dev/ESPAsyncTCP
#include <ESPAsyncWebServer.h>  // https://github.com/me-no-dev/ESPAsyncWebServer
#include <AsyncElegantOTA.h>    // https://github.com/ayushsharma82/AsyncElegantOTA
#include <ArduinoJson.h>        // https://github.com/bblanchon/ArduinoJson
#include "DHT.h"                // https://github.com/adafruit/DHT-sensor-library
                                // Written by ladyada, public domain

//Todo: Add Authentication
//Fyi: https://api.gov.au/standards/national_api_standards/index.html

#include <ESP8266HTTPClient.h>  //POST Client

//Firmware Stats
bool bDEBUG = true;        //true = debug to Serial output
                           //false = no serial output
//Port Number for the Web Server
int WEB_PORT_NUMBER = 1337; 

//Post Sensor Data Delay
int POST_DATA_DELAY = 10000; 

bool bLEDS = true;         //true = Flash LED
                           //false =   NO LED's
//Device Variables
String sDeviceName = "ESP-002";
String sFirmwareVersion = "v0.1.0";
String sFirmwareDate = "27/10/2021 23:00";

String POST_SERVER_IP = "192.168.0.50";
String POST_SERVER_PORT = "";
String POST_ENDPOINT = "/api/v1/test";

//Saved Wifi Credentials (research encryption later, or store in FRAM module?)
const char* ssid = "your_wifi_ssid";
const char* password = "***************";

//Credentials for the regular user to access "http://{ip}:{port}/"
const char* http_username = "user";
const char* http_password = "********";

//Credentials for the admin user to access "http://{ip}:{port}/update/"
const char* http_username_admin = "admin";
const char* http_password_admin = "********";

//Define the Web Server Object
AsyncWebServer server(WEB_PORT_NUMBER);    //Feel free to change the port number

//DHT22 Temp Sensor
#define DHTTYPE DHT22   // DHT 22  (AM2302), AM2321
#define DHTPIN 5
DHT dht(DHTPIN, DHTTYPE);

//Common Variables
String thisBoard = ARDUINO_BOARD;
String sHumidity = "";
String sTempC = "";
String sTempF = "";
String sJSON = "{ }";

//DHT Variables
float h;
float t;
float f;
float hif;
float hic;


void setup(void) {

  //Turn On PIN
  pinMode(LED_BUILTIN, OUTPUT);     // Initialize the LED_BUILTIN pin as an output
  
  //Serial Mode (Debug)
  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (it is active low)
    delay(100);                       // On for 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Off for 100ms
  }

  if (bDEBUG) Serial.begin(115200);
  if (bDEBUG) Serial.println("Serial Begin");

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (it is active low)
    delay(100);                       // On for 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Off for 100ms
  }
  if (bDEBUG) Serial.println("Wifi Setup");
  if (bDEBUG) Serial.println(" - Client Mode");
  
  WiFi.mode(WIFI_STA);        //Client Mode
  
  if (bDEBUG) Serial.print(" - Connecting to Wifi: " + String(ssid));
  WiFi.begin(ssid, password); //Connect to Wifi
 
  if (bDEBUG) Serial.println("");
  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    if (bDEBUG) Serial.print(".");
  }
  if (bDEBUG) Serial.println("");
  if (bDEBUG) Serial.print("- Connected to ");
  if (bDEBUG) Serial.println(ssid);
  
  if (bDEBUG) Serial.print("IP address: ");
  if (bDEBUG) Serial.println(WiFi.localIP());

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (it is active low)
    delay(100);                       // On for 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Off for 100ms
  }

  
  // HTTP basic authentication on the root webpage
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    
        String sendHtml = "";
        sendHtml = sendHtml + "<html>\n";
        sendHtml = sendHtml + " <head>\n";
        sendHtml = sendHtml + " <title>ESP# 002</title>\n";
        sendHtml = sendHtml + " <meta http-equiv=\"refresh\" content=\"5\";>\n";
        sendHtml = sendHtml + " </head>\n";
        sendHtml = sendHtml + " <body>\n";
        sendHtml = sendHtml + " <h1>ESP# 002</h1>\n";
        sendHtml = sendHtml + " <u2>Debug</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Device Name: " + sDeviceName + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Version: " + sFirmwareVersion + " </li>\n";
        sendHtml = sendHtml + " <li>Firmware Date: " + sFirmwareDate + " </li>\n";
        sendHtml = sendHtml + " <li>Board: " + thisBoard + " </li>\n";
        sendHtml = sendHtml + " <li>Auto Refresh Root: On </li>\n";
        sendHtml = sendHtml + " <li>Web Port Number: " + String(WEB_PORT_NUMBER) +" </li>\n";
        sendHtml = sendHtml + " <li>Serial Debug: " + String(bDEBUG) +" </li>\n";
        sendHtml = sendHtml + " <li>Flash LED's Debug: " + String(bLEDS) +" </li>\n";
        sendHtml = sendHtml + " <li>SSID: " + String(ssid) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT TYPE: " + String(DHTTYPE) +" </li>\n";
        sendHtml = sendHtml + " <li>DHT PIN: " + String(DHTPIN) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_DATA_DELAY: " + String(POST_DATA_DELAY) +" </li>\n";

        sendHtml = sendHtml + " <li>POST_SERVER_IP: " + String(POST_SERVER_IP) +" </li>\n";
        sendHtml = sendHtml + " <li>POST_ENDPOINT: " + String(POST_ENDPOINT) +" </li>\n";
        
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <u2>Sensor</h2>";
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <li>Humidity: " + sHumidity + "% </li>\n";
        sendHtml = sendHtml + " <li>Temp: " + sTempC + "c, " + sTempF + "f. </li>\n";
        sendHtml = sendHtml + " <li>Heat Index: " + String(hic) + "c, " + String(hif) + "f.</li>\n";
        sendHtml = sendHtml + " </ul>\n";
        sendHtml = sendHtml + " <u2>JSON</h2>";
        
        // Allocate the JSON document Object/Memory
        // Use https://arduinojson.org/v6/assistant to compute the capacity.
        StaticJsonDocument<250> doc;
        //JSON Values     
        doc["Name"] = sDeviceName;
        doc["humidity"] = sHumidity;
        doc["tempc"] = sTempC;
        doc["tempf"] = sTempF;
        doc["heatc"] = String(hic);
        doc["heatf"] = String(hif);
        
        sJSON = "";
        serializeJson(doc, sJSON);
        
        sendHtml = sendHtml + " <ul>" + sJSON + "</ul>\n";
        
        sendHtml = sendHtml + " <u2>Seed</h2>";
        long randNumber = random(100000, 1000000);
        sendHtml = sendHtml + " <ul>\n";
        sendHtml = sendHtml + " <p>" + String(randNumber) + "</p>\n";
        sendHtml = sendHtml + " </ul>\n";
       
        sendHtml = sendHtml + " </body>\n";
        sendHtml = sendHtml + "</html>\n";
        //Send the HTML   
        request->send(200, "text/html", sendHtml);
  });

  //This is the OTA Login
  AsyncElegantOTA.begin(&server, http_username_admin, http_password_admin);
  
  server.begin();
  if (bDEBUG) Serial.println("HTTP server started");
 
  if (bDEBUG) Serial.println("Board: " + thisBoard);

  //Setup the DHT22 Object
  dht.begin();
  
}

void loop(void) {

  AsyncElegantOTA.loop();

  //Debug LED Flash
  if (bLEDS) {
    digitalWrite(LED_BUILTIN, LOW);   // Turn the LED on (it is active low)
    delay(100);                       // On for 100ms
    digitalWrite(LED_BUILTIN, HIGH);  // Turn the LED off by making the voltage HIGH
    delay(100);                       // Off for 100ms
  }


  //Display Temp and Humidity Data

  h = dht.readHumidity();
  t = dht.readTemperature();
  f = dht.readTemperature(true);

  // Check if any reads failed and exit early (to try again).
  if (isnan(h) || isnan(t) || isnan(f)) {
    if (bDEBUG) Serial.println(F("Failed to read from DHT sensor!"));
    return;
  }
  
  hif = dht.computeHeatIndex(f, h);         // Compute heat index in Fahrenheit (the default)
  hic = dht.computeHeatIndex(t, h, false);  // Compute heat index in Celsius (isFahreheit = false)

  if (bDEBUG) Serial.print(F("Humidity: "));
  if (bDEBUG) Serial.print(h);
  if (bDEBUG) Serial.print(F("%  Temperature: "));
  if (bDEBUG) Serial.print(t);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(f);
  if (bDEBUG) Serial.print(F("°F  Heat index: "));
  if (bDEBUG) Serial.print(hic);
  if (bDEBUG) Serial.print(F("°C "));
  if (bDEBUG) Serial.print(hif);
  if (bDEBUG) Serial.println(F("°F"));

  //Save for Page Load
  sHumidity = String(h,2);
  sTempC = String(t,2);
  sTempF = String(f,2);

  //Post to Pi API
    // Allocate the JSON document Object/Memory
    // Use https://arduinojson.org/v6/assistant to compute the capacity.
    StaticJsonDocument<250> doc;
    //JSON Values     
    doc["Name"] = sDeviceName;
    doc["humidity"] = sHumidity;
    doc["tempc"] = sTempC;
    doc["tempf"] = sTempF;
    doc["heatc"] = String(hic);
    doc["heatf"] = String(hif);
    
    sJSON = "";
    serializeJson(doc, sJSON);

    //Post to API
    if (bDEBUG) Serial.println(" -> POST TO API: " + sJSON);

   //Test POST
  
    if ((WiFi.status() == WL_CONNECTED)) {
  
      WiFiClient client;
      HTTPClient http;
  
    
      if (bDEBUG) Serial.println(" -> API Endpoint: http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT);
      http.begin(client, "http://" + POST_SERVER_IP + POST_SERVER_PORT + POST_ENDPOINT); //HTTP


      if (bDEBUG) Serial.println(" -> addHeader: \"Content-Type\", \"application/json\"");
      http.addHeader("Content-Type", "application/json");
  
      // start connection and send HTTP header and body
      int httpCode = http.POST(sJSON);
      if (bDEBUG) Serial.print("  -> Posted JSON: " + sJSON);
  
      // httpCode will be negative on error
      if (httpCode > 0) {
        // HTTP header has been send and Server response header has been handled

  
        //See https://api.gov.au/standards/national_api_standards/api-response.html 
        // Response from Server
        if (bDEBUG) Serial.println("  <- Return Code: " + httpCode);
                
        //Get the Payload
        const String& payload = http.getString();
          if (bDEBUG) Serial.println("   <- Received Payload:");
          if (bDEBUG) Serial.println(payload);
          if (bDEBUG) Serial.println("   <- Payload (httpcode: 201):");
          

         //Handle the HTTP Code
        if (httpCode == 200) {
          if (bDEBUG) Serial.println("  <- 200: Invalid API Call/Response Code");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 201) {
          if (bDEBUG) Serial.println("  <- 201: The resource was created. The Response Location HTTP header SHOULD be returned to indicate where the newly created resource is accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 202) {
          if (bDEBUG) Serial.println("  <- 202: Is used for asynchronous processing to indicate that the server has accepted the request but the result is not available yet. The Response Location HTTP header may be returned to indicate where the created resource will be accessible.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 400) {
          if (bDEBUG) Serial.println("  <- 400: The server cannot process the request (such as malformed request syntax, size too large, invalid request message framing, or deceptive request routing, invalid values in the request) For example, the API requires a numerical identifier and the client sent a text value instead, the server will return this status code.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 401) {
          if (bDEBUG) Serial.println("  <- 401: The request could not be authenticated.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 403) {
          if (bDEBUG) Serial.println("  <- 403: The request was authenticated but is not authorised to access the resource.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 404) {
          if (bDEBUG) Serial.println("  <- 404: The resource was not found.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 415) {
          if (bDEBUG) Serial.println("  <- 415: This status code indicates that the server refuses to accept the request because the content type specified in the request is not supported by the server");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 422) {
          if (bDEBUG) Serial.println("  <- 422: This status code indicates that the server received the request but it did not fulfil the requirements of the back end. An example is a mandatory field was not provided in the payload.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }
        if (httpCode == 500) {
          if (bDEBUG) Serial.println("  <- 500: An internal server error. The response body may contain error messages.");
          if (bDEBUG) Serial.println("  <- " + payload);
        }

        
      } else {
        if (bDEBUG) Serial.println("   <- Unknown Return Code (ERROR): " + httpCode);
        //if (bDEBUG) Serial.printf("    " + http.errorToString(httpCode).c_str());
        
      }

    }

    if (bDEBUG) Serial.print("\n\n");

    delay(POST_DATA_DELAY);
  }

Here is a screenshot of the Arduino IDE Serial Monitor debugging the code

Serial Monitor

Here is a screenshot of the NodeJS API on the Raspberry Pi accepting the POSTed data from the ESP8266

API receiving data

Here is a sneak peek of the code accepting the POSTed data

API Code
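Until the full source is published, here is a minimal sketch of what such an Express endpoint could look like (the route matches the ESP8266 sketch above; everything else is illustrative, not the final code):

const express = require('express');
const app = express();
app.use(express.json()); // parse JSON request bodies

// Accept sensor readings POSTed by the ESP8266 boards
app.post('/api/v1/test', function (req, res) {
  console.log('Reading from ' + req.body.Name + ':', req.body);
  res.status(201).json({ status: 'created' }); // 201 = resource created
});

app.listen(80, function () { console.log('API listening on port 80'); });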

The final code will be open sourced.

API with 2x sensors (18x more soon)

I built 2 sensors (on Breadboards) to start hitting the API

2 sensors on a breadboard

18 more sensors are ready for action (after I get temporary USB power sorted)

18x Sensors

PiJuice and Battery save the Day

I accidentally used my Pi for a few hours (to develop the API) and I realised the power to the PiJuice was not connected.

The PiJuice worked a treat and supplied the Pi from battery

Battery power was disconnected

I plugged in the battery after 25% was drained.

Power restored.

Research and Setup TRIM/Defrag on the M.2 SSD

Todo: Research
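As a starting point, Raspberry Pi OS ships fstrim from util-linux, so (assuming the USB–SATA adapter passes TRIM commands through, which not all do) it may be as simple as:

sudo fstrim -v /
sudo systemctl enable fstrim.timer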

Add a Buzzer to the Raspberry Pi and Connect It to the PiJuice No-Power Event

Todo

Wire Up a Speaker to the PiJuice

Todo: Figure out custom scripts and add a piezo speaker to the PiJuice to alert me of issues in future.

Add buttons to the enclosure

Todo

Add email alerts from the system

I logged into Google G Suite (my domain’s email provider) and set up an email alias for my domain “[email protected]”, then added this alias to Gmail (logged in with my G Suite account).

I created an app-specific password in G Suite to allow my Pi to use a dedicated password to access my email.

I installed these packages on the Raspberry Pi

sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail    

I can run this command to send an email to my primary email

sudo sendemail -f [email protected] -t [email protected]_domain.com -u "Test Email From PiHome" -m "Test Email From PiHome" -s smtp.gmail.com:587 -o tls=yes -xu [email protected]_domain.com -xp ********************

The email arrives from the Raspberry Pi

Test Email Screenshot

PiJuice Alerts (email)

I created some Python scripts and configured PiJuice to email me

user scripts

I assigned the scripts to Events

Added functions

Python Script (CronJob) to email the battery level every 6 hours

Todo
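A sketch of what that cron job could look like, using the pijuice Python library and the same sendemail approach as above (the addresses and password are placeholders):

#!/usr/bin/env python3
# Email the PiJuice battery level (run from cron every 6 hours)
import subprocess
from pijuice import PiJuice

pijuice = PiJuice(1, 0x14)  # I2C bus 1, default PiJuice address 0x14
charge = pijuice.status.GetChargeLevel().get('data', 'unknown')

subprocess.run([
    'sendemail', '-f', 'pi@example.com', '-t', 'me@example.com',
    '-u', 'PiHome battery level', '-m', 'Battery at %s%%' % charge,
    '-s', 'smtp.gmail.com:587', '-o', 'tls=yes',
    '-xu', 'pi@example.com', '-xp', 'app-password'])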

Building the Co2/PM2.5 Sensors

Todo: (Waiting for parts)

The AirGradient PCBs have arrived

Air Gradient PCB's

NodeJS API writing to MySQL/Influx etc

Todo: Save Data to a Database
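A minimal sketch of how the API could write each reading into MySQL with a connection pool (the table and column names are assumptions):

const mysql = require('mysql');
const pool = mysql.createPool({
  connectionLimit: 10,
  host: 'localhost',
  user: 'pihome',
  password: '********',
  database: 'pihome'
});

// Called from the POST handler for each sensor reading
function saveReading(reading, callback) {
  // bound parameters (?) keep the insert safe from SQL injection
  pool.query(
    'INSERT INTO readings (name, humidity, tempc, tempf) VALUES (?, ?, ?, ?)',
    [reading.Name, reading.humidity, reading.tempc, reading.tempf],
    callback);
}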

Setup 20x WeMos External Antennas (DONE – I ordered new boards with the resistor rotated at the factory)

I assumed the WeMos D1 Mini Pros with external antennas were actually using the external antenna. Wrong.

I have to move 20x resistors (1 per WeMos) to switch over to the external antenna.

This will be fun, as I added hot glue over the area where the resistor is to hold down the antenna.

Reading configuration files via SPIFFS

Todo

Power over Ethernet (PoE) (SKIP, will use plain old USB wall plugs)

Todo: Passive PoE

Building a C# GUI for the Touch Panel

Todo (Started coding this)

Todo (Passive PoE, 5V, 3.3V)?

Building the enclosures for the sensors

Designed and ordered the PCB, firmware next.

Custom PCB?

Yes, See above

Backing up the Raspberry Pi M.2 Drive

This is quite easy, as the M.2 drive is connected via a USB plug. I shut down the Pi and plugged the M.2 board into my PC.

I then backed up the entire disk to my PC with Acronis software (review here).

I now have a complete backup of my Pi on a remote network share (and my primary pc).

Version History

v0.9.63 – PCB v0.2 designed and ordered.

v0.9.62 – 3/2/2022 Update

v0.9.61 – New Nginx, PHP, MySQL etc

v0.9.60 – Fresh Bullseye install (Buster upgrade failed)

v0.951 Email Code in PiJuice

v0.95 Added Email Code

v0.94 Added Todo Areas.

v0.93 2x Sensors hitting the API, 18x sensors ready, Air Gradient

v0.92 DHT22 and Basic API

v0.91 Password Protection

v0.9 Final Battery Setup

v0.8 OTA Updates

v0.7 Screen Enclosure

v0.6 Added Wifi Test Info

v0.5 Initial Post

Filed Under: Analytics, API, Arduino, Cloud, Code, GUI, IoT, Linux, MySQL, NGINX, NodeJS, OS Tagged With: api, ESP8266, MySQL, nginx, raspberry pi, WeMos

Deploying nodejs apps in the background and monitoring them with PM2 from keymetrics.io

April 10, 2018 by Simon

This guide will help you install and set up the PM2 NodeJS process manager from Keymetrics.io for free and manage your node apps’ performance and exceptions.

What is PM2?

PM2 is a production process manager for Node.js applications with a built-in load balancer. It allows you to keep applications alive forever, to reload them without downtime and to facilitate common system admin tasks. These are the steps I used on Ubuntu 16.04. This is NOT a paid endorsement (just self-documenting).

Key Features of PM2

PM2 offers a web-based monitoring dashboard, exception reporting, load balancing, CPU and memory monitoring, transaction tracing and much more for NodeJS apps.

pm2-features

PM2 Links

Official page: http://pm2.keymetrics.io/

More info https://www.npmjs.com/package/pm2

Install PM2

npm install pm2 -g

Install Output

npm install pm2 -g
/usr/bin/pm2 -> /usr/lib/node_modules/pm2/bin/pm2
/usr/bin/pm2-dev -> /usr/lib/node_modules/pm2/bin/pm2-dev
/usr/bin/pm2-docker -> /usr/lib/node_modules/pm2/bin/pm2-docker
/usr/bin/pm2-runtime -> /usr/lib/node_modules/pm2/bin/pm2-runtime
/usr/lib
└── pm2 (full npm dependency tree omitted – the package names were garbled in the page capture)

PM2 Pricing

PM2 appears to be aimed at high-end apps but I am only using the free version of PM2 (thanks Keymetrics)

pm2-pricing

Create a bucket for your node app

Log in to keymetrics.io.

Click Generate New Bucket

Create New Bucket

Give the bucket a name etc.

Node Bucket Name

You can now link your bucket with your local pm2 installation (keep the keys private – this one no longer exists)

pm2-link

Linking your local pm2 installation with your keymetrics bucket

pm2 link l3brztzboz25him i6kofelsyfo7xrd
[KM] Connecting
[Monitoring Enabled] Dashboard access: https://app.keymetrics.io/#/r/i6kofelsyfo7xrd

To add an existing node app to PM2 type the following.

cd /your-node-application-path/
pm2 start yourapp.js -i 0 --name "myappname"

You can view node apps that pm2 is managing by typing

pm2 status

I had a two-CPU VM and found that the app I added was started on each of the two CPUs (I only needed one), so I deleted the second instance running on my second core

pm2 delete 1

Restart the API

pm2 restart myappname

You can start a single node app on 1, 3 or the maximum available CPUs

# Start the maximum processes depending on available CPUs
pm2 start app.js -i 0

# Start the maximum processes -1 depending on available CPUs
pm2 start app.js -i -1

# Start 3 processes
pm2 start app.js -i 3

Again, to add an existing node app to PM2 type the following.

cd /your-node-application-path/
pm2 start yourapp.js -i 0 --name "myappname"

Now you can view node app data online. If you don’t have a node app ready you can use the test app.

monitor output

You can monitor your node app locally too from the CLI.

local monitoring

You can also view a demo bucket at keymetrix.io

pm2-demo-bucket

PM2’s one-page documentation can be found here.

I hope this guide helps someone.


Revision History

v1.0 Initial post

Filed Under: API, Automation, Cloud, Free, NGINX, NodeJS, Scalability, Server, Ubuntu, Vultr Tagged With: and, apps, background, Deploying, from, in, keymetrics.io, monitoring, NodeJS, the, with PM2

How to setup a twitter feed API endpoint in NodeJS with NGINX, Ruby and T etc

September 17, 2017 by Simon

This is how I set up a Twitter feed API endpoint in NodeJS with NGINX, Ruby and t. This can be used for SEO integration, automating Twitter post analytics or automating posts to Twitter.

This guide assumes you have an Ubuntu server running NodeJS and NGINX, and that Ruby, Rails and the t Twitter command-line utility are set up. Also, you will need to have an API set up and working.

Ensure your API is configured in NGINX /etc/nginx/sites-available/default

server {
...
	location /api/appname/v1 {
		#rewrite ^/api(.*) /$1 break;
		proxy_set_header X-Real-IP $remote_addr;	
		# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-For $remote_addr;
		proxy_http_version  1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection $connection_upgrade;
		proxy_set_header Proxy "";
		proxy_set_header Host $http_host;
		root /code/nodejs/appname;
		proxy_pass http://appnameapi;
	}
	...
}

I set up a placeholder GET routine in my node process that will be used to communicate with the Twitter command-line utility.

app.get('/api/appname/v1/feed/twitter/',function(req,res){
	var data = { "Feed":"" };
	data["Feed"] = "Working v0.1";
	res.json(data);
});

I started 1 instance of the API on a server with pm2 linked to https://keymetrics.io/.

sudo pm2 start /nodejs/app01/app.js -i 1 --name "appname"

View API Status with PM2

sudo pm2 status
● Agent Online | Dashboard Access: https://app.keymetrics.io/#/r/######### | Server name:appname-######
┌────────────┬────┬─────────┬──────┬────────┬─────────┬────────┬─────┬───────────┬──────┬──────────┐
│ App name   │ id │ mode    │ pid  │ status │ restart │ uptime │ cpu │ mem       │ user │ watching │
├────────────┼────┼─────────┼──────┼────────┼─────────┼────────┼─────┼───────────┼──────┼──────────┤
│appname │ 0  │ cluster │ 5480 │ online │ 0       │ 87s    │ 0%  │ 35.1 MB   │ apiuser │ disabled │

You can restart the API manually with PM2 by typing

sudo pm2 restart "appname"

I like to use https://paw.cloud/ software to test APIs over, say, Postman. I can easily see the test placeholder API is working.

Node API

The browser works for now but as soon as I add authentication the browser will fail, Paw FTW.

API Test

A simple way to authenticate the API is for the caller to pass a known token when requesting the page via an HTTP POST packet.

app.post('/api/appname/v1/feed/twitter/',function(req,res){
	var data = { "Status":"" };
	var wwxapi = req.headers['x-access-token'];
	var data = { "Feed":"" };
	var errData = "";
	if (wwxapi == undefined) {
		console.log(" 499 token required");
		errData = {"Status": "499", "Error": "valid token required"};
		res.statusCode = 499;
		res.send(errData);
		res.end();
	} else {
		var xpikeypass = false;
		// Check the API Key against known API Keys
		if (wwxapi == "appname-##############################") { xpikeypass = true;} // Debug API Key
		if (xpikeypass == true) {
			//  app request validated

			data["Feed"] = "Twitter Feed API Working v0.2";
			res.json(data);

		} else {
			console.log("499 token required");
			errData = {"Status": "499", "Error": "valid token required"};
			res.statusCode = 409;
			res.send(errData);	
			res.end();
		}
	}
});

Don’t forget to restart the API after changes

sudo pm2 restart "appname"

Now we can change the API endpoint to a POST (instead of GET) and paste the code above and test it.  If we pass an invalid “x-access-token” value the API will return error 409 as designed.

API Auth

Passing the right “x-access-token” will return data (200 OK).

API Working

Now we can link the Twitter command line utility with the NodeJS API.

app.post('/api/appname/v1/feed/twitter/',function(req,res){
	var data = { "Status":"" };
	var wwxapi = req.headers['x-access-token'];
	var data = { "Feed":"" };
	var errData = "";
	if (wwxapi == undefined) {
		console.log(" 499 token required");
		errData = {"Status": "499", "Error": "valid token required"};
		res.statusCode = 499;
		res.send(errData);
		res.end();
	} else {
		var xpikeypass = false;
		// Check the API Key against known API Keys
		if (wwxapi == "appname-##############################") { xpikeypass = true;} // Debug API Key
		if (xpikeypass == true) {
			//  app request validated

			var result = require('child_process').execSync('/theuserwithrubyinstalled/.rbenv/shims/t search all "lang:en nodejs" --csv').toString();
			data["Feed"] = result;

			res.json(data);

		} else {
			console.log("499 token required");
			errData = {"Status": "499", "Error": "valid token required"};
			res.statusCode = 409;
			res.send(errData);	
			res.end();
		}
	}
});

Now we can test the NodeJS API and have it spawn a subprocess and hit Twitter. In this case, we are finding the latest posts on Twitter with NodeJS in them.
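
If you prefer the command line over Paw, a quick test could look like the calls below (this assumes the API is listening on localhost port 80; the token is the placeholder debug key from the code above):

# Missing token: should return the 499 error
curl -i -X POST http://localhost/api/appname/v1/feed/twitter/

# Valid token: should return the Twitter CSV feed
curl -i -X POST http://localhost/api/appname/v1/feed/twitter/ -H "x-access-token: appname-##############################"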

Works a treat 🙂

Up next 

  • Schedule getting data from the Twitter command line utility in crontab and save it to Redis (see the sketch after this list)
  • Process the CSV and save new Twitter changes to Redis
  • Query Redis and send the returned data back to the API.
  • Change the API to read from Redis in NodeJS.
  • Harden security.
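
A minimal sketch of the crontab/Redis step above (the every-15-minutes schedule and the twitter:nodejs:feed key name are my own placeholders; redis-cli -x reads the value to SET from stdin):

# crontab -e entry: cache the latest tweets in Redis every 15 minutes
*/15 * * * * /theuserwithrubyinstalled/.rbenv/shims/t search all "lang:en nodejs" --csv | redis-cli -x SET twitter:nodejs:feed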


v1.2 Works

Filed Under: NGINX, NodeJS, Ruby, Twitter Tagged With: nginx, Node, query, ruby, T, twitter

Securing Ubuntu in the cloud

August 9, 2017 by Simon

It is easy to deploy servers to the cloud; within a few minutes you can have a cloud-based server that you (or others) can use. Ubuntu has a great guide on setting up basic security, but what exactly do you need to do?

If you do not secure your server, expect it to be hacked into. Below are tips on securing your cloud server.

First, read more on scanning your server with the Lynis security scanner.

Always use up to date software

Always use updated software: malicious users can detect what software you use with sites like shodan.io (or with port scan tools) and then look for weaknesses in well-published lists (e.g. WordPress, Windows, MySQL, Node, Liferay, Oracle etc). People can even use Google to search for login pages or sites with passwords in HTML (yes, it is that simple). Once a system is identified, a malicious user can send automated bots to break into your site (trying millions of passwords a day) or use tools to bypass existing defences (security researcher Troy Hunt found out it’s child’s play).

Portscan sites like https://mxtoolbox.com/SuperTool.aspx?action=scan are good for knowing what you have exposed.

You can also use local programs like nmap to view open ports

Install nmap

sudo apt-get install nmap

Find open ports

nmap -v -sT localhost

Starting Nmap 7.01 ( https://nmap.org ) at 2017-08-08 23:57 AEST
Initiating Connect Scan at 23:57
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 9101/tcp on 127.0.0.1
Discovered open port 9102/tcp on 127.0.0.1
Discovered open port 9103/tcp on 127.0.0.1
Completed Connect Scan at 23:57, 0.05s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00020s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
3306/tcp open  mysql
9101/tcp open  jetdirect
9102/tcp open  jetdirect
9103/tcp open  jetdirect

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

Limit ssh connections

Read more here.

Use ufw to set limits on login attempts

sudo ufw limit ssh comment 'Rate limit hit for openssh server'

Only allow known IPs access to your valuable ports

sudo ufw allow from 123.123.123.123/32 to any port 22

Delete unwanted firewall rules

sudo ufw status numbered
sudo ufw delete 8

Only allow known IPs to certain ports

sudo ufw allow from 123.123.123.123 to any port 80/tcp

Also, restrict outgoing traffic to known active servers and ports

sudo ufw allow out from 123.123.123.123 to any port 22

Don’t use a weak/common Diffie-Hellman key for SSL certificates, more information here.

openssl req -new -newkey rsa:4096 -nodes -keyout server.key -out server.csr
 
Generating a 4096 bit RSA private key
...
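
You can also generate a strong Diffie-Hellman parameter file for NGINX and point NGINX at it (the output path is an example; generation is slow on small VMs):

sudo openssl dhparam -out /etc/nginx/ssl/dhparams4096.pem 4096

# then reference it inside your NGINX server block:
# ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;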

More info on generating SSL certs here, setting them up here and setting up Public Key Pinning here.

Intrusion Prevention Software

Do run fail2ban: Guide here https://www.linode.com/docs/security/using-fail2ban-for-security

I use iThemes Security to secure my WordPress and block repeat failed logins from certain IP addresses.

iThemes Security can even lock down your WordPress.

You can set iThemes to auto lock out users on x failed logins

Remember to use whitelists though (it is so easy to lock yourself out of servers).

Passwords

Do have strong passwords and change the root password provided by the hosts. https://howsecureismypassword.net/ is a good site to see how strong your password is from brute force password attempts. https://www.grc.com/passwords.htm is a good site to obtain a strong password.  Do follow Troy Hunt’s blog and twitter account to keep up to date with security issues.

Configure a Firewall Basics

You should install and configure a firewall on your Ubuntu server, and also configure the firewall offered by your host (e.g. AWS, Vultr, Digital Ocean).

Configure a Firewall on AWS

My AWS server setup guide is here. AWS allows you to configure the firewall here in the Amazon Console.

Type         Protocol  Port Range  Source     Comment
HTTP         TCP       80          0.0.0.0/0  Opens a web server port for later
All ICMP     ALL       N/A         0.0.0.0/0  Allows you to ping
All traffic  ALL       All         0.0.0.0/0  Not advisable long term but OK for testing today
SSH          TCP       22          0.0.0.0/0  Not advisable, try and limit this to known IPs only
HTTPS        TCP       443         0.0.0.0/0  Opens a secure web server port for later

Configure a Firewall on Digital Ocean

Configuring a firewall on Digital Ocean (create a $5/m server here).  You can configure your Digital Ocean droplet firewall by clicking Droplet, Networking then Manage Firewall after logging into Digital Ocean.

Configure a Firewall on Vultr

Configuring a firewall on Vultr (create a $2.5/m server here).

Don’t forget to set IP rules for IPV4 and IPV6, only open the ports you need, and ensure applications have strong passwords.

Ubuntu has a firewall built in (documentation).

sudo ufw status

Enable the firewall

sudo ufw enable

Adding common ports

sudo ufw allow ssh/tcp
sudo ufw logging on
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 53
sudo ufw allow 443
sudo ufw allow 873
sudo ufw enable
sudo ufw status
sudo ufw allow http
sudo ufw allow https

Add a whitelist rule for your IP (use http://icanhazip.com/ to get your IP) to ensure you won’t get kicked out of your server. Note: /24 allows your whole subnet; use /32 to allow a single IP only.

sudo ufw allow from 123.123.123.123/24 to any port 22

More help here. Here is a good guide on ufw commands. Info on port numbers here.

https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers

If you don’t have a Digital Ocean server for $5 a month click here, and for a $2.5 a month Vultr server click here.

Backups

rsync is a good way to copy files to another server, or you can use Bacula.

sudo apt install bacula
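
A minimal rsync push sketch (the user, host and paths are placeholders; -a preserves permissions and times, -z compresses in transit, --delete mirrors removals):

rsync -az --delete /www/ [email protected]:/backups/www/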

Basics

Initial server setup guide (Digital Ocean).

Sudo (admin user)

Read this guide on the Linux sudo command (the equivalent of Run as Administrator on Windows).
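
On Ubuntu, giving an account admin rights is just a matter of adding it to the sudo group (the username is an example):

sudo usermod -aG sudo bob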

Users

List users on an Ubuntu OS (or compgen -u)

cut -d: -f1 /etc/passwd

Common output

cut -d: -f1 /etc/passwd
root
daemon
bin
sys
sync
games
man
lp
mail
news
uucp
proxy
www-data
backup
list
irc
gnats
nobody
systemd-timesync
systemd-network
systemd-resolve
systemd-bus-proxy
syslog
_apt
lxd
messagebus
uuidd
dnsmasq
sshd
pollinate
ntp
mysql
clamav

Add User

sudo adduser new_username

e.g

sudo adduser bob
Adding user `bob' ...
Adding new group `bob' (1000) ...
Adding new user `bob' (1000) with group `bob' ...
Creating home directory `/home/bob' ...
etc..

Add user to a group

sudo usermod -a -G MyGroup bob

Show users in a group

getent group MyGroup | awk -F: '{print $4}'

This will show users in a group

Remove a user

sudo userdel username
sudo rm -r /home/username

Rename user

usermod -l new_username old_username

Change user password

sudo passwd username

Groups

Show all groups

compgen -g

Common output

compgen -g
root
daemon
bin
sys
adm
tty
disk
lp
mail
proxy
sudo
www-data
backup
irc
etc

You can create your own groups, but first you must be aware of group IDs.

cat /etc/group

Then you can see your systems groups and ids.

Create a group

groupadd -g 999 MyGroup

Permissions

Read this https://help.ubuntu.com/community/FilePermissions

How to list users on Ubuntu.

Read more on setting permissions here.

Chmod help can be found here.
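
As a quick permissions reference, a common pattern for a web root is below (the path and owner are examples; directories get 755 and files get 644):

sudo chown -R www-data:www-data /www
sudo find /www -type d -exec chmod 755 {} \;
sudo find /www -type f -exec chmod 644 {} \;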

Install Fail2Ban

I used this guide on installing Fail2Ban.

sudo apt-get install fail2ban

Check Fail2Ban often and add blocks to the firewall of known bad IPs

fail2ban-client status
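
To drill into a specific jail and permanently block a repeat offender at the firewall (the sshd jail name matches the config further below; the IP is an example):

sudo fail2ban-client status sshd
sudo ufw deny from 123.123.123.123 to any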

Best practices

Ubuntu has a guide on basic security setup here.

Startup Processes

It is a good idea to review startup processes from time to time.

sudo apt-get install rcconf
sudo rcconf

Accounts

  • Read up on the concept of least privilege access for apps and services here.
  • Read up on chmod permissions.

Updates

Do update your operating system often.

sudo apt-get update
sudo apt-get upgrade

Minimal software

Only install what software you need

Exploits and Keeping up to date

Do keep up to date with exploits and vulnerabilities

  • Follow 0xDUDE on twitter.
  • Read the GDI.Foundation page.
  • Visit the Exploit Database
  • Vulnerability & Exploit Database
  • Subscribe to the Security Now podcast.

Secure your applications

  • NodeJS: Enable logging in applications you install or develop.

Ban repeat login attempts with Fail2Ban

Fail2Ban config

sudo nano /etc/fail2ban/jail.conf
[sshd]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3

Hosts File Hardening

sudo nano /etc/host.conf

Add

order bind,hosts
nospoof on

Add a whitelist with your ip on /etc/fail2ban/jail.conf (see this)

[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not                          
# ban a host which matches an address in this list. Several addresses can be                             
# defined using space separator.
                                                                         
ignoreip = 127.0.0.1 192.168.1.0/24 8.8.8.8

Restart the service

sudo service fail2ban restart
sudo service fail2ban status

Intrusion detection (logging) systems

Tripwire will not block or prevent intrusions, but it will log them and give you a heads up on risks and things of concern.

Install Tripwire (along with the Tiger security auditing tool).

sudo apt-get install tiger tripwire

Running Tiger

sudo tiger

This will scan your system for issues of note.

sudo tiger
Tiger UN*X security checking system
   Developed by Texas A&M University, 1994
   Updated by the Advanced Research Corporation, 1999-2002
   Further updated by Javier Fernandez-Sanguino, 2001-2015
   Contributions by Francisco Manuel Garcia Claramonte, 2009-2010
   Covered by the GNU General Public License (GPL)

Configuring...

Will try to check using config for 'x86_64' running Linux 4.4.0-89-generic...
--CONFIG-- [con005c] Using configuration files for Linux 4.4.0-89-generic. Using
           configuration files for generic Linux 4.
Tiger security scripts *** 3.2.3, 2008.09.10.09.30 ***
20:42> Beginning security report for simon.
20:42> Starting file systems scans in background...
20:42> Checking password files...
20:42> Checking group files...
20:42> Checking user accounts...
20:42> Checking .rhosts files...
20:42> Checking .netrc files...
20:42> Checking ttytab, securetty, and login configuration files...
20:42> Checking PATH settings...
20:42> Checking anonymous ftp setup...
20:42> Checking mail aliases...
20:42> Checking cron entries...
20:42> Checking 'services' configuration...
20:42> Checking NFS export entries...
20:42> Checking permissions and ownership of system files...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Checking for indications of break-in...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Performing rootkit checks...
20:42> Performing system specific checks...
20:46> Performing root directory checks...
20:46> Checking for secure backup devices...
20:46> Checking for the presence of log files...
20:46> Checking for the setting of user's umask...
20:46> Checking for listening processes...
20:46> Checking SSHD's configuration...
20:46> Checking the printers control file...
20:46> Checking ftpusers configuration...
20:46> Checking NTP configuration...
20:46> Waiting for filesystems scans to complete...
20:46> Filesystems scans completed...
20:46> Performing check of embedded pathnames...
20:47> Security report completed for simon.
Security report is in `/var/log/tiger/security.report.simon.170809-20:42'.

My Output.

sudo nano /var/log/tiger/security.report.username.170809-18:42

Security scripts *** 3.2.3, 2008.09.10.09.30 ***
Wed Aug  9 18:42:24 AEST 2017
20:42> Beginning security report for username (x86_64 Linux 4.4.0-89-generic).

# Performing check of passwd files...
# Checking entries from /etc/passwd.
--WARN-- [pass014w] Login (bob) is disabled, but has a valid shell.
--WARN-- [pass014w] Login (root) is disabled, but has a valid shell.
--WARN-- [pass015w] Login ID sync does not have a valid shell (/bin/sync).
--WARN-- [pass012w] Home directory /nonexistent exists multiple times (3) in
         /etc/passwd.
--WARN-- [pass012w] Home directory /run/systemd exists multiple times (2) in
         /etc/passwd.
--WARN-- [pass006w] Integrity of password files questionable (/usr/sbin/pwck
         -r).

# Performing check of group files...

# Performing check of user accounts...
# Checking accounts from /etc/passwd.
--WARN-- [acc021w] Login ID dnsmasq appears to be a dormant account.
--WARN-- [acc022w] Login ID nobody home directory (/nonexistent) is not
         accessible.

# Performing check of /etc/hosts.equiv and .rhosts files...

# Checking accounts from /etc/passwd...

# Performing check of .netrc files...

# Checking accounts from /etc/passwd...

# Performing common access checks for root (in /etc/default/login, /securetty, and /etc/ttytab...
--WARN-- [root001w] Remote root login allowed in /etc/ssh/sshd_config

# Performing check of PATH components...
--WARN-- [path009w] /etc/profile does not export an initial setting for PATH.
# Only checking user 'root'

# Performing check of anonymous FTP...

# Performing checks of mail aliases...
# Checking aliases from /etc/aliases.

# Performing check of `cron' entries...
--WARN-- [cron005w] Use of cron is not restricted

# Performing check of 'services' ...
# Checking services from /etc/services.
--WARN-- [inet003w] The port for service ssmtp is also assigned to service
         urd.
--WARN-- [inet003w] The port for service pipe-server is also assigned to
         service search.

# Performing NFS exports check...

# Performing check of system file permissions...
--ALERT-- [perm023a] /bin/su is setuid to `root'.
--ALERT-- [perm023a] /usr/bin/at is setuid to `daemon'.
--ALERT-- [perm024a] /usr/bin/at is setgid to `daemon'.
--WARN-- [perm001w] The owner of /usr/bin/at should be root (owned by daemon).
--WARN-- [perm002w] The group owner of /usr/bin/at should be root.
--ALERT-- [perm023a] /usr/bin/passwd is setuid to `root'.
--ALERT-- [perm024a] /usr/bin/wall is setgid to `tty'.

# Checking for known intrusion signs...
# Testing for promiscuous interfaces with /bin/ip
# Testing for backdoors in inetd.conf

# Performing check of files in system mail spool...

# Performing check for rookits...
# Running chkrootkit (/usr/sbin/chkrootkit) to perform further checks...
--WARN-- [rootkit004w] Chkrootkit has detected a possible rootkit installation
Possible Linux/Ebury - Operation Windigo installetd

# Performing system specific checks...
# Performing checks for Linux/4...

# Checking boot loader file permissions...
--WARN-- [boot02] The configuration file /boot/grub/menu.lst has group
         permissions. Should be 0600
--FAIL-- [boot02] The configuration file /boot/grub/menu.lst has world
         permissions. Should be 0600
--WARN-- [boot06] The Grub bootloader does not have a password configured.

# Checking for vulnerabilities in inittab configuration...

# Checking for correct umask settings for init scripts...
--WARN-- [misc021w] There are no umask entries in /etc/init.d/rcS

# Checking Logins not used on the system ...

# Checking network configuration
--FAIL-- [lin013f] The system is not protected against Syn flooding attacks
--WARN-- [lin017w] The system is not configured to log suspicious (martian)
         packets

# Verifying system specific password checks...

# Checking OS release...
--WARN-- [osv004w] Unreleased Debian GNU/Linux version `stretch/sid'

# Checking installed packages vs Debian Security Advisories...

# Checking md5sums of installed files

# Checking installed files against packages...
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/udev/hwdb.bin' does not belong to any package.

# Performing check of root directory...

# Checking device permissions...
--WARN-- [dev003w] The directory /dev/block resides in a device directory.
--WARN-- [dev003w] The directory /dev/char resides in a device directory.
--WARN-- [dev003w] The directory /dev/cpu resides in a device directory.
--FAIL-- [dev002f] /dev/fuse has world permissions
--WARN-- [dev003w] The directory /dev/hugepages resides in a device directory.
--FAIL-- [dev002f] /dev/kmsg has world permissions
--WARN-- [dev003w] The directory /dev/lightnvm resides in a device directory.
--WARN-- [dev003w] The directory /dev/mqueue resides in a device directory.
--FAIL-- [dev002f] /dev/rfkill has world permissions
--WARN-- [dev003w] The directory /dev/vfio resides in a device directory.

# Checking for existence of log files...
--FAIL-- [logf005f] Log file /var/log/btmp permission should be 660
--FAIL-- [logf007f] Log file /var/log/messages does not exist

# Checking for correct umask settings for user login shells...
--WARN-- [misc021w] There is no umask definition for the dash shell
--WARN-- [misc021w] There is no umask definition for the bash shell

# Checking symbolic links...

# Performing check of embedded pathnames...
20:47> Security report completed for username.

More on Tripwire here.

Hardening PHP

To harden the PHP config (and back it up), first create an info.php file in your website root folder with this content:

<?php
phpinfo()
?>

Now look for which PHP config file is loading.

PHP Config

Back up that PHP config file.

TIP: Delete the file with phpinfo() in it as it is a security risk to leave it there.
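
e.g. (the path is an example; use wherever you created the file):

sudo rm /www/info.php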

TIP: Read the OWASP cheat sheet on using PHP securely here and securing php.ini here.

Some common security changes

file_uploads = On
expose_php = Off
error_reporting = E_ALL
display_errors          = Off
display_startup_errors  = Off
log_errors              = On
error_log = /php_errors.log
ignore_repeated_errors  = Off

Don’t forget to review logs, more config changes here.

Antivirus

Yes, it is a good idea to run antivirus on Ubuntu; here is a good list of antivirus software.

I am installing ClamAV as it can be installed on the command line and is open source.

sudo apt-get install clamav

ClamAV help here.

Scan a folder

sudo clamscan --max-filesize=3999M --max-scansize=3999M --exclude-dir=/www/* -i -r /

Setup auto-update antivirus definitions

sudo dpkg-reconfigure clamav-freshclam

I set auto updates 24 times a day (every hour) via daemon updates.
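
The reconfigure step writes that frequency into the ClamAV config; you can also set it by hand (Ubuntu's default path shown below):

sudo nano /etc/clamav/freshclam.conf

# number of database update checks per day
Checks 24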

Tip: Download manual antivirus update definitions. If you only have a 512MB server your update may fail, and you may want to stop freshclam/php/nginx and mysql before you update to ensure the antivirus definitions update. You can move this to a cron job and set it to update at set times (instead of via the daemon) to ensure updates happen.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v
Current working dir is /var/lib/clamav
Max retries == 5
ClamAV update process started at Tue Aug  8 22:22:02 2017
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1152
Software version from DNS: 0.99.2
Retrieving http://db.au.clamav.net/main.cvd
Trying to download http://db.au.clamav.net/main.cvd (IP: 193.1.193.64)
Downloading main.cvd [100%]
Loading signatures from main.cvd
Properly loaded 4566249 signatures from new main.cvd
main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Querying main.58.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/daily.cvd
Trying to download http://db.au.clamav.net/daily.cvd (IP: 193.1.193.64)
Downloading daily.cvd [100%]
Loading signatures from daily.cvd
Properly loaded 1742284 signatures from new daily.cvd
daily.cvd updated (version: 23644, sigs: 1742284, f-level: 63, builder: neo)
Querying daily.23644.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/bytecode.cvd
Trying to download http://db.au.clamav.net/bytecode.cvd (IP: 193.1.193.64)
Downloading bytecode.cvd [100%]
Loading signatures from bytecode.cvd
Properly loaded 66 signatures from new bytecode.cvd
bytecode.cvd updated (version: 308, sigs: 66, f-level: 63, builder: anvilleg)
Querying bytecode.308.82.1.0.C101C140.ping.clamav.net
Database updated (6308599 signatures) from db.au.clamav.net (IP: 193.1.193.64)

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart 

sudo /etc/init.d/clamav-freshclam start

Manual scan with a bash script

Create a bash script

sudo mkdir /scripts
sudo nano /scripts/updateandscanav.sh

# Include contents below.
# Save and quit

sudo chmod +x /scripts/updateandscanav.sh

Bash script contents to update antivirus definitions.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart

sudo /etc/init.d/clamav-freshclam start

sudo clamscan --max-filesize=3999M --max-scansize=3999M -v -r /

Edit the crontab to run the script every hour

crontab -e
1 * * * * /bin/bash /scripts/updateandscanav.sh > /dev/null 2>&1

Uninstalling Clam AV

You may need to uninstall ClamAV if you don’t have a lot of memory or find the updates are too big.

sudo apt-get remove --auto-remove clamav
sudo apt-get purge --auto-remove clamav

Setup Unattended Ubuntu Security updates

sudo apt-get install unattended-upgrades
sudo unattended-upgrades -d

At login, you should receive

0 updates are security updates.
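
Unattended upgrades are driven by the apt periodic settings; you can confirm they are enabled like so (this file path is the Ubuntu default):

cat /etc/apt/apt.conf.d/20auto-upgrades

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";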

Other

  • Read this awesome guide.
  • Install Fail2Ban.
  • Do check your log files if you suspect suspicious activity.

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server


v1.92 added hardening a linux server link

Filed Under: Ads, Advice, Analitics, Analytics, Android, API, App, Apple, Atlassian, AWS, Backup, BitBucket, Blog, Business, Cache, Cloud, Community, Computer, CoronaLabs, Cost, CPI, DB, Development, Digital Ocean, DNS, Domain, Email, Feedback, Firewall, Free, Git, GitHub, GUI, Hosting, Investor, IoT, JIRA, LetsEncrypt, Linux, Malware, Marketing, mobile app, Monatization, Monetization, MongoDB, MySQL, Networking, NGINX, NodeJS, NoSQL, OS, Planning, Project, Project Management, Psychology, push notifications, Raspberry Pi, Redis, Route53, Ruby, Scalability, Scalable, Security, SEO, Server, Share, Software, ssl, Status, Strength, Tech Advice, Terminal, Transfer, Trello, Twitter, Ubuntu, Uncategorized, Video Editing, VLOG, VM, Vultr, Weakness, Web Design, Website, Wordpress Tagged With: antivirus, brute force, Firewall

Setting up a Vultr VM and configuring it

July 29, 2017 by Simon

Below is my guide on setting up a Vultr VM and configuring it with a static IP, NGINX, MySQL, PHP and an SSL certificate.

I have blogged about setting up CentOS and Ubuntu servers on Digital Ocean before. Digital Ocean does not have data centres in Australia and this kills scalability. AWS is good but 4x the price of Vultr. I have also blogged about setting up an AWS server here. I tried to check out Alibaba Cloud but the verification process was broken, so I decided to check out Vultr.

Update (June 2018): I don’t use Vultr anymore, I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

UpCloud is way faster.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Setting up a  Vultr Server

1) Go to http://www.vultr.com/?ref=7192231 and create your own server today.

2) Create an account at Vultr.

Vultr signup

3) Add a  Credit Card

Vultr add cc

4) Verify your email address, Check https://my.vultr.com/promo/ for promos.

5) Create your first instance (for me it was an Ubuntu 16.04 x64 server, 2 CPU, 4GB RAM, 60GB SSD, 3,000GB transfer in Sydney for $20 a month). I enabled IPV6, Private Networking, and Sydney as the server location. Digital Ocean would only have offered 2GB RAM and 40GB SSD at this price. AWS would have charged about $80/m.

Vultr deploy vm

2 Cores and 4GB ram is what I am after (I will use it for NGINX, MySQL, PHP, MongoDB, OpCache and Redis etc).

Vultr 20 month

6) I followed this guide, generated a local SSH key and added it to Vultr.

snip

cd ~/.ssh
ls -al
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa): vultr_rsa    
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in vultr_rsa.
Your public key has been saved in vultr_rsa.pub.
cat vultr_rsa.pub 
ssh-rsa AAAAremovedoutput

Vultr add ssh key

7) The UI was a bit confusing when adding the SSH key on the in-progress deploy server screen (the SSH key was added but was not highlighted, so I recreated the server deployment and the SSH key now appears).

Vultr ass ssh key 2

Now time to deploy the server.

Vultr deploy now

Deploying now.

Vultr my servers

My Vultr server is now deployed.

Vultr server information

I connected to it with my SSH program on my Mac.

Vultr ssh

Now it is time to redirect my domain (purchased through Namecheap) to the new Vultr server IP.

DNS: @ A Name record at Namecheap

Vultr namecheap

Update: I forgot to add an A Name for www.

Vultr namecheap 2

DNS: Vultr (added the same @ and www A Name records (fyi “@” was replaced with “”)).

Vultr dns

I used the site https://www.whatsmydns.net to see how far the DNS replication had spread while I was still receiving an error; after 60 minutes DNS propagation had happened.

Setting the Server's Time and Timezone (Ubuntu)

I checked the time on the server but it was wrong (20 hours behind).

sudo hwclock --show
Tue 25 Jul 2017 01:29:58 PM UTC  .420323 seconds

I manually set the timezone to Sydney Australia.

dpkg-reconfigure tzdata

I installed the NTP time syncing service

sudo apt-get install ntp

I configured the NTP service to use Australian servers (read this guide).

sudo nano /etc/ntp.conf

# added
server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org

I checked the time after restarting NTP.

sudo service ntp restart
sudo hwclock --show

The time is correct 🙂

Installing the NGINX Web Server (Ubuntu)

More on the differences between Apache and nginx web servers here.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

Installing NodeJS  (Ubuntu)

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

Installing MySQL  (Ubuntu)

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
>mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper
sudo mysql_secure_installation
>Y (Validate plugin)
>2 (Strong passwords)
>N (Don't change root password)
>Y (Remove anon accounts)
>Y (No remote root login)
>Y (Remove test DB)
>Y (Reload)
service mysql status
> mysql.service - MySQL Community Server

Install PHP 7.x and PHP7.0-FPM  (Ubuntu)

sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm

php.ini

sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
> edit: upload_max_filesize = 8M
> edit: max_input_vars = 1000
> edit: memory_limit = 128M
# medium server: memory_limit = 256M
# large server: memory_limit = 512M

Restart PHP

sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

sudo apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

Initial NGINX Configuration – Pre SSL and Security (Ubuntu)

Here is a good guide on setting up NGINX for performance.

mkdir /www

Edit the NGINX configuration

sudo nano /etc/nginx/nginx.conf

File Contents: /etc/nginx/nginx.conf

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
        worker_connections 4000;
        use epoll;
        multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel faster then read() + write()
        sendfile on;

        # send headers in one peace, its better then sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;


        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session (referenced by limit_req above)
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

       # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # headerbuffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;


        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache informations about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

       # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;


        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File Contents: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
 
server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6
 
        access_log /var/log/nginx/myservername.com.log;
 
        root /usr/share/nginx/www;
        index index.php index.html index.htm;
 
        server_name www.myservername.com myservername.com localhost;
 
        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
 
        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;
 
 
        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://myservername.com/$1 permanent;
 
        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 512M";
 
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

I talked to Dmitriy Kovtun (SSL CS) on the Namecheap Chat to resolve a  privacy error (I stuffed up and I am getting the error “Your connection is not private” and “NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN”).

Vultr chrome privacy

SSL checker says everything is fine.

Vultr ssl checker

I checked the certificate strength with SSL Labs (OK).

Vultr ssl labs

Test and Reload NGINX (Ubuntu)

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Create a test PHP file

<?php
phpinfo()
?>

It Works.

Install Utils (Ubuntu)

Install an interactive folder size program

sudo apt-get install ncdu
sudo ncdu /

Vultr ncdu

Install a better disk check utility

sudo apt-get install pydf
pydf

Vultr pydf

Display startup processes

sudo apt-get install rcconf
sudo rcconf

Install JSON helper

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Increase the console history

HISTSIZE=10000
HISTCONTROL=ignoredups

I rebooted to see if PHP started up.

sudo reboot

OpenSSL Info (Ubuntu)

Read about updating OpenSSL here.

Update Ubuntu

sudo apt-get update
sudo apt-get dist-upgrade

Vultr Firewall

I configured the server firewall at Vultr and ensured it was setup by clicking my server, then settings then firewall.

Vultr firewall

I then checked open ports with https://mxtoolbox.com/SuperTool.aspx

Assign a Domain (Vultr)

I assigned a domain to my VM at https://my.vultr.com/dns/add/

Vultr add domain

Charts

I reviewed the server information at Vultr (nice).

Vultr charts

Static IPs

You should also setup a static IP in /etc/network/interfaces as mentioned in the settings for your server https://my.vultr.com/subs/netconfig/?SUBID=XXXXXX

Hello,

Thank you for contacting us.

Please try setting your OS's network interface configuration for static IP assignments in this case. The blue "network configuration examples" link on the "Settings" tab includes the necessary file paths and configurations. This configuration change can be made via the provided web console.

Setting your instance's IP to static will prevent any issues that your chosen OS might have with DHCP lease failure. Any instance with additional IPs or private networking enabled will require static addresses on all interfaces as well. 

--
xxxxx xxxxxx
Systems Administrator
Vultr LLC

Backup your existing Ubuntu 16.04 DHCP Network Configuration

cp /etc/network/interfaces /interfaces.bak

I would recommend you log a Vultr support ticket and get the right IPV4/IPV6 details to paste into /etc/network/interfaces while you can access your IP.

It is near impossible to configure the static IP when the server is refusing a DHCP IP address (this happened to me after 2 months).

If you don’t have time to set up a static IP you can roll with auto DHCP IP assignment, and when your server fails to get an IP you can manually run the following command (changing ens3 to your network adapter) from the web root console.

dhclient -1 -v ens3 

I logged a ticket for each of my other servers to get the contents of /etc/network/interfaces.

Support Ticket Contents:

What should the contents of /etc/network/interfaces be for xx.xx.xx.xx (Ubuntu: 16.04, Static)

Q1) What do I need to add to the /etc/network/interfaces file to set a static IP for server www.myservername.com/xx.xx.xx.xx/SUBID=XXXXXX

The server's IPV4 IP is: XX.XX.XX.XX
The server's IPV6 IP is: xx:xx:xx:xx:xx:xx:xx:xx (Network: xx:xx:xx:xx::, CIDR: 64, Recursive DNS: None)
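
For reference, a static configuration on Ubuntu 16.04 usually looks something like the sketch below (the adapter name and every address are placeholders; use the exact values Vultr provides for your instance):

# /etc/network/interfaces (hypothetical values)
auto ens3
iface ens3 inet static
    address xx.xx.xx.xx
    netmask 255.255.254.0
    gateway xx.xx.xx.x
    dns-nameservers 8.8.8.8 8.8.4.4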

Install an FTP Server (Ubuntu)

I decided on Pure-FTPd based on this advice. I did try vsftpd but it failed. I used this guide to set up FTP and a user. I was able to login via FTP but decided to set up C9 instead, so I stopped the FTP service.

Connected to my Vultr domain with C9.io

I logged in and created a new remote SSH connection to my Vultr server, copied the SSH key and added it to my Vultr authorized keys file.

sudo nano authorized_keys

I opened the site with C9 and it setup my environment.

I do love C9.io

Vultr c9

Add an  SSL certificate (Reissue existing SSL cert at NameCheap)

I had a chat with Konstantin Detinich (SSL CS) on Namecheap’s chat and he helped me through reissuing my certificate.

I have a three-year certificate so I reissued it.  I will follow the Namecheap reissue guide here.

I recreated certificates

cd /etc/nginx/
mkdir ssl
cd ssl
sudo openssl req -newkey rsa:2048 -nodes -keyout mydomain_com.key -out mydomain_com.csr
cat mydomain_com.csr

I posted the CSR into Name Cheap Reissue Certificate Form.

Vultr ssl cert

Tip: Make sure your certificate is using the same name as the old certificate.

I continued the Namecheap prompts and specified HTTP domain control verification.

Namecheap Output: When you submit your info for this step, the activation process will begin and your SSL certificate will be available from your Domain List. To finalize the activation, you’ll need to complete the Domain Control Validation process. Please follow the instructions below.

Now I will wait for the verification instructions.

Update: I waited a few hours and the instructions never came, so I logged in to the NameCheap portal, downloaded the HTTP domain verification file and uploaded it to my domain.

Vultr ssl cert 2

I forgot to add the text file to the NGINX allowed index files list.

I added the following to /etc/nginx/sites-available/default:

index index.php index.html index.htm 53guidremovedE5.txt;

I reloaded and restarted NGINX

sudo nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart

The file now loaded over port 80. I then checked Namecheap chat (Alexandra Belyaninova) to speed up the HTTP Domain verification and they said the text file needs to be placed in /.well-known/pki-validation/ folder (not specified in the earlier steps).

http://mydomain.com/.well-known/pki-validation/53gudremovedE5.txt and http://www.mydoamin.com/.well-known/pki-validation/53guidremovedE5.txt

The certificate reissue was all approved and available for download.

Comodo

I uploaded all files related to the ssl cert to /etc/nginx/ssl/ and read my guide here to refresh myself on what is next.

I ran this command in the folder /etc/nginx/ssl/ to generate a DH prime rather than downloading a nice new one from here.

openssl dhparam -out dhparams4096.pem 4096

This namecheap guide will tell you how to activate a new certificate and how to generate a CSR file. Note: the guide to the left will generate a 2048 bit key, and this will cap your SSL certificate's security to a B at https://www.ssllabs.com/ssltest/ so I recommend you generate a 4096 bit CSR key and a 4096 bit Diffie-Hellman key.

I used https://certificatechain.io/ to generate a valid certificate chain.

My SSL /etc/nginx/ssl/sites-available/default config

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
	listen 80 default_server;
	listen [::]:80 default_server;

        error_log /www-error-log.txt;
        access_log /www-access-log.txt;
	
	listen 443 ssl;

	limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

	root /www;
        index index.php index.html index.htm;

	server_name www.thedomain.com thedomain.com localhost;

        # ssl on;  # enabling this caused too many http redirects
        ssl_certificate /etc/nginx/ssl/trust-chain.crt;
        ssl_certificate_key /etc/nginx/ssl/thedomain_com.key;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        server_tokens off;
        ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        
        # Set SSL caching and storage/timeout values:
        ssl_session_timeout 4h;
        ssl_session_tickets off; # Requires nginx >= 1.5.9
        
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx => 1.3.7
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

	add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        add_header X-Content-Type-Options nosniff;
  
        # Use Google DNS
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://thedomain.com/$1 permanent;

	location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        location ~ /\.ht {
               deny all;
        }
	
}

My /etc/nginx/nginx.conf Config

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
	worker_connections 4000;
	use epoll;
	multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel faster then read() + write()
        sendfile on;

        # send headers in one peace, its better then sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;

        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP 
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

        # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # headerbuffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;

        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache informations about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s; 
        open_file_cache_valid 30s; 
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

        # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
	types_hash_max_size 2048;
	# server_tokens off;
	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_prefer_server_ciphers on;

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	
	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

Namecheap support checked my certificate with https://decoder.link/sslchecker/ (no errors). Other SSL checkers are https://certlogik.com/ssl-checker/ and https://sslanalyzer.comodoca.com/

I was given a new certificate to try by Namecheap.

Namecheap Chat (Dmitriy) also recommended I clear my google cache as they did not see errors on their side (this worked).

SSL Security

Read my past guide on adding SSL to a Digital Ocean server.

I am checking my site with https://www.ssllabs.com/ssltest/ (OK).

My site came up clean with shodan.io

Securing Ubuntu in the Cloud

Read my guide here.

OpenSSL Version

I checked the OpenSSL version to see if it was up to date.

openssl version
OpenSSL 1.1.0f  25 May 2017

Yep, all up to date https://www.openssl.org/

I will check often.

Install MySQL GUI

I installed the Adminer MySQL GUI tool (uploaded it to the website).

Don’t forget to check your server's IP with www.shodan.io to ensure there are no back doors.

I had to increase PHP's upload_max_filesize temporarily to allow me to restore a database backup. I edited the php.ini file at /etc/php/7.0/fpm/php.ini and then reloaded PHP.

sudo service php7.0-fpm restart

I used Adminer to restore a database.

Support

I found the email support at Vultr was great; I had an email reply in minutes. The Namecheap chat was awesome too. I did have an unplanned reboot on a Vultr node that one of my servers was on (let’s hope the server survives).

The Vultr service status page is located here.

Conclusion

I now have a secure server with MySQL and other web resources ready to go. I will now add some remote monitoring and restore a website along with NodeJS and MongoDB.

site ready

Definitely give Vultr a go (they even have data centers in Sydney). Signup with this link http://www.vultr.com/?ref=7192231

Namecheap is great for certificates and support.

ssl labs

Vultr API

Vultr has a great API that you can use to automate status pages or obtain information about your VM instances.

API Location: https://www.vultr.com/api/

First, you will need to activate API access and allow your IP addresses (IPV4 and IPV6) in Vultr. At first I only allowed IPV4 addresses, but it looks as if Vultr uses IPV6 internally, so add your IPV6 IP as well (if you are hitting the API from a Vultr server). Beware that the JSON returned from the https://api.vultr.com/v1/server/list API has URLs (and tokens) to your virtual console and root passwords, so ensure your API key is secured.

Here is some working PHP code to query the API

<?php

$ch = curl_init();
$headers = [
     'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

// Call the API and close the handle once done
$server_output = curl_exec($ch);
curl_close($ch);
print $server_output;

// Decode the JSON into an array for further use
print_r(json_decode($server_output, true));
?>

Your server will need curl installed and you will need to enable URL opening in your php.ini file.

allow_url_fopen = On
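
If PHP's curl support is missing, it can be added with the same package installed earlier in this guide, then restart PHP:

sudo apt-get install php7.0-curl
sudo service php7.0-fpm restart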

Once you have curl (and the API) working via PHP, this code will return data from the API for a nominated server (replace ‘123456’ with the id from your server at https://my.vultr.com/).

$ch = curl_init();
$headers = [
'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

// Call the API and close the handle once done
$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

// # Replace 1234546 with the ID from your server at https://my.vultr.com/

//Get Server Location
$vultr_location = $array['123456']['location'];
echo "Location: $vultr_location <br/>";

//Get Server CPU Count
$vultr_cpu = $array['123456']['vcpu_count'];
echo "CPUs: $vultr_cpu <br/>";

//Get Server OS
$vultr_os = $array['123456']['os'];
echo "OS: $vultr_os<br />";

//Get Server RAM
$vultr_ram = $array['123456']['ram'];
echo "Ram: $vultr_ram<br />";

//Get Server Disk
$vultr_disk = $array['123456']['disk'];
echo "Disk: $vultr_disk<br />";

//Get Server Allowed Bandwidth
$vultr_bandwidth_allowed = $array['123456']['allowed_bandwidth_gb'];

//Get Server Used Bandwidth
$vultr_bandwidth_used = $array['123456']['current_bandwidth_gb'];

echo "Bandwidth: $vultr_bandwidth_used GB of $vultr_bandwidth_allowed GB<br />";

//Get Server Power Status
$vultr_power = $array['123456']['power_status'];
echo "Power State: $vultr_power<br />";

 //Get Server State
$vultr_state = $array['123456']['server_state'];
echo "Server State: $vultr_state<br />";

A raw packet looks like this from https://api.vultr.com/v1/server/list

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 30 Jul 2017 12:02:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close
X-User: [email protected]
Expires: Sun, 30 Jul 2017 12:02:33 GMT
Cache-Control: no-cache
X-Frame-Options: DENY
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff

{"123456":{"SUBID":"123456","os":"Ubuntu 16.04 x64","ram":"4096 MB","disk":"Virtual 60 GB","main_ip":"###.###.###.###","vcpu_count":"2","location":"Sydney","DCID":"##","default_password":"removed","date_created":"2017-01-01 09:00:00","pending_charges":"0.01","status":"active","cost_per_month":"20.00","current_bandwidth_gb":0.001,"allowed_bandwidth_gb":"3000","netmask_v4":"255.255.254.0","gateway_v4":"###.###.###.#,"power_status":"running","server_state":"ok","VPSPLANID":"###","v6_main_ip":"####:####:####:###:####:####:####:####","v6_network_size":"##","v6_network":"####:####:####:###:","v6_networks":[{"v6_main_ip":"####:####:####:###:####:####::####","v6_network_size":"##","v6_network":"####:####:####:###::"}],"label":"####","internal_ip":"###.###.###.##","kvm_url":"removed","auto_backups":"no","tag":"Server01","OSID":"###","APPID":"#","FIREWALLGROUPID":"########"}}

I recommend the Paw software for any API testing locally on OSX.

Bonus: Converting Vultr Network totals from the Vultr API with PHP

Add the following as a global PHP function in your PHP file. I found the number-formatting solution here.

<?php
// Found at https://stackoverflow.com/questions/2510434/format-bytes-to-kilobytes-megabytes-gigabytes 

function swissConverter($value, $format = true, $precision = 2) {
    // Below converts value into bytes depending on input (specify mb, for 
    // example)
    $bytes = preg_replace_callback('/^\s*(\d+)\s*(?:([kmgt]?)b?)?\s*$/i', 
    function ($m) {
        switch (strtolower($m[2])) {
          // Intentional fall-through: each matching case multiplies by 1024 again
          case 't': $m[1] *= 1024;
          case 'g': $m[1] *= 1024;
          case 'm': $m[1] *= 1024;
          case 'k': $m[1] *= 1024;
        }
        return $m[1];
        }, $value);
    if(is_numeric($bytes)) {
        if($format === true) {
            //Below converts bytes into proper formatting (human readable 
            //basically)
            $base = log($bytes, 1024);
            $suffixes = array('', 'KB', 'MB', 'GB', 'TB');   

            return round(pow(1024, $base - floor($base)), $precision) .' '. 
                     $suffixes[floor($base)];
        } else {
            return $bytes;
        }
    } else {
        return NULL; //Change to preferred response
    }
}
?>
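
For example, assuming the swissConverter() function above is in scope, it accepts raw byte counts or strings with units:

<?php
// Usage sketch for the swissConverter() function above
echo swissConverter(123456789);  // prints roughly "117.74 MB"
echo swissConverter('3000 GB');  // prints roughly "2.93 TB"
?>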

Now you can query the https://api.vultr.com/v1/server/bandwidth?SUBID=123456 API and get bandwidth information related to your server (replace 123456 with your server’s ID).

<h4>Network Stats:</h4><br />
<?php

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Change 123456 to your server ID

curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/bandwidth?SUBID=123456');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

// Each entry appears to be a [date, bytes] pair; [x][1] is the byte count

//Get Incoming Bytes Day Before Yesterday
$vultr_incoming_bytes = $array['incoming_bytes'][0][1];
echo " &nbsp; &nbsp; Incoming Data Total Day Before Yesterday: <strong>" . swissConverter($vultr_incoming_bytes, true) . "</strong><br/>";

//Get Incoming Bytes Yesterday
$vultr_incoming_bytes = $array['incoming_bytes'][1][1];
echo " &nbsp; &nbsp; Incoming Data Total Yesterday: <strong>" . swissConverter($vultr_incoming_bytes, true) . "</strong><br/>";

//Get Incoming Bytes Today
$vultr_incoming_bytes = $array['incoming_bytes'][2][1];
echo " &nbsp; &nbsp; Incoming Data Total Today: <strong>" . swissConverter($vultr_incoming_bytes, true) . "</strong><br/><br/>";

//Get Outgoing Bytes Day Before Yesterday
$vultr_outgoing_bytes = $array['outgoing_bytes'][0][1];
echo " &nbsp; &nbsp; Outgoing Data Total Day Before Yesterday: <strong>" . swissConverter($vultr_outgoing_bytes, true) . "</strong><br/>";

//Get Outgoing Bytes Yesterday
$vultr_outgoing_bytes = $array['outgoing_bytes'][1][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr_outgoing_bytes, true) . "</strong><br/>";

//Get Outgoing Bytes Today
$vultr_outgoing_bytes = $array['outgoing_bytes'][2][1];
echo " &nbsp; &nbsp; Outgoing Data Total Today: <strong>" . swissConverter($vultr_outgoing_bytes, true) . "</strong><br/>";

echo "<br />";
?>

Bonus: Pinging a Vultr server from the Vultr API with PHP’s fsockopen function

Paste the ping function globally:

<?php
function pingfsockopen($host, $port = 443, $timeout = 3)
{
        $fsock = @fsockopen($host, $port, $errno, $errstr, $timeout);
        if (!$fsock)
        {
                return FALSE;
        }
        // Close the test socket before reporting success
        fclose($fsock);
        return TRUE;
}
?>

Now you can grab the server’s IP from https://api.vultr.com/v1/server/list and then ping it (on SSL port 443).

//Get Server 123456 IP
$vultr_mainip = $array['123456']['main_ip'];
$up = pingfsockopen($vultr_mainip);
if( $up ) {
        echo " &nbsp; &nbsp; Server is UP.<br />";
}
else {
        echo " &nbsp; &nbsp; Server is DOWN<br />";
}

Setup Google DNS

sudo nano /etc/network/interfaces

Add the line:

dns-nameservers 8.8.8.8 8.8.4.4
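
After restarting networking (or rebooting) you can confirm which DNS servers the system is using:

cat /etc/resolv.conf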

What have I missed?

Read my blog post on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

Read my blog post on securing your Ubuntu server in the cloud.

Read my blog post on running an Ubuntu system scan with Lynis.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.993 added log info

Filed Under: Cloud, Development, DNS, Hosting, MySQL, NodeJS, OS, Server, ssl, Ubuntu, VM Tagged With: server, ubuntu, vultr

How to develop software ideas

July 9, 2017 by Simon

I was recently at a public talk by Alan Jones at the UNE Smart Region Incubator where Alan talked about launching startups and developing ideas.

Alan put it quite eloquently that “With change comes opportunity” and we are all very capable of building the next best thing as technological barriers and costs are a lot lower than 5 years ago. Alan also mentioned 19 start-ups fail, but “if you focus on solving customer problems you have a better chance of succeeding. Regions need to share knowledge and you can learn from other people’s mistakes.”

I was asked after this event to share thoughts on “how do I learn to develop an app” and “how do you get the knowledge”. Here is my poor “brain dump” on how to develop software ideas (it’s hard to condense 30 years’ experience developing software). I will revise this post over the coming weeks so check back often.

If you have never programmed before, check out these programming 101 guides here.

I have blogged on technology/knowledge things in the past at www.fearby.com and recently I blogged about how to develop cloud-based services (here, here, here, here and here), but this blog post assumes you have a validated “app idea” and you want to know how to develop it yourself. If you do not want to develop an app yourself you may want to speak with Blue Chilli.

Find a good mentor.


True App Development Quotes

  • Finding development information is easy, following a plan is hard.
  • Aim for progress and not perfection.
  • Learn one thing at a time (Multitasking can kill your brain).
  • Fail fast and fail early and get feedback as early as possible from customers.
  • 10 engaged customers are better than 10,000 disengaged users.

And a bit of humour before we start.

(Image: project management humour – click for larger image)

Here is a funny video on startup/entrepreneur life/lingo


This is a good, funny, open and honest video about programming on YouTube.

Follow Seth F Samuel on twitter here.

Don’t be afraid to learn from others before you develop

My favourite tips from over 200 failed startups (from https://www.cbinsights.com/blog/startup-failure-post-mortem/ ):

  • Simpler websites shouldn’t take more than 2-3 months. You can always iterate and extrapolate later. Get your feet wet ASAP.
  • As products become more and more complex, performance degrades. Speed is a feature for all web apps. You can spend hundreds of hours trying to speed up an app with little success. Incorporating benchmarking tools into the development cycle from the beginning is a good idea.
  • Outsource or buy in talent if you don’t know something (e.g marketing). Time is money.
  • Make an environment where you will be productive. Working from home can be convenient, but often it will be much less productive than a separate space. Also, it’s a good idea to have separate spaces so you’ll have some work/life balance.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • We most definitely committed the all-too-common sin of premature scaling. Driven by the desire to hit significant numbers to prove the road for future fundraising and encouraged by our great initial traction in the student market, we embarked on significant work developing paid marketing channels and distribution channels that we could use to demonstrate scalable customer acquisition. This all fell flat due to our lack of product/market fit in the new markets, distracted significantly from product work to fix the fit (double fail) and cost a whole bunch of our runway.
  • If you’re bootstrapping, cash flow is king. If you want to possibly build a product while your revenue is coming from other sources, you have to get those sources stable before you can focus on the product.
  • Don’t multiply big numbers. Multiply $30 times 1,000 clients times 24 months. WOW, we will be rich! Oh, silly you, you have no idea how hard it is to get 1,000 clients paying anything monthly for 24 months. Here is my advice: get your first client. Then get your first 10. Then get more and more. Until you have your first 10 clients, you have proved nothing, only that you can multiply numbers.
  • Customers pay for information, not raw data. Customers are willing to pay a lot more for information and most are not interested in data. Your service should make your customers look intelligent in front of their stakeholders. Follow up with inactive users. This is especially true when your service does not give intermediate values to your users. Our system should have been smarter about checking up on our users at various stages.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.

Here are my tips on staying on track developing apps. What is the difference between a website, app, API, web app, hybrid app and software (my blog post here)?

I have seen quite a few projects fail because:

  • The wrong technology was mandated.
  • The software was not documented (by the developers).
  • The software was shelved because new developers hated it or did not want to support it.

Project Roles (hats)

It is important to understand the roles in a project (project management methodology aside) and know when you are being a “decision maker” or a “technical developer”. A project usually has these roles.

  • Sponsor/owner (usually fund the project and have the final say).
  • Executive/Team leader/scrum master (manage day to day operations, people, tasks and resources).
  • Team members (UI, UX, marketers, developers (DevOps, web, design etc)) are usually the doers.
  • Stakeholders (people who are impacted (operations, owners, Helpdesk)).
  • Subject Matter Experts (people who should guide the work and not be ignored).
  • Testers (people who test the product and give feedback).

It can be hard as a developer to switch hats in a one-person team.

How do you develop and gain knowledge?

First, document what you need to develop (what problem are you solving and what value will your idea bring). Does a solution exist already? Don’t solve a problem that has already been solved.

Developing software is not hard, you just need to be logical, research, be patient and follow a plan. The hardest part can be gluing components together.

I like to think of developing software like making a car: if you need 4 wheels, do you have 4 wheels? If you want to build it yourself and save some money, can you make wheels (make rubber strips with steel-reinforced/vulcanized rubber, make alloys, add bearings and have them pass regulations) or should you buy wheels (some things are cheaper to make than others)? Developing software can be easy if you know what you are doing, have the experience and are aware of the costs and risks. Developing software can lead you down a rabbit hole of endless research, development, and testing if you don’t know what you are doing.

Example 1:

I “need a webpage”:

  • Research: Will Wix, Shopify or a hosted WordPress website do (is it flexible or cheap enough) or do I install WordPress (guide here) or do I  learn and build an HTML website and buy a theme and modify it (and have a custom/flexible solution)?

Example 2:

I “need an iPhone and Android app”:

Research: You will need to learn iOS and Android programming and you may need a server or two to hold the app’s data, webpage and API. You will also need to set up and secure the servers, and choose to install a database or go with a “database as a service” like cloud.mongodb.com or Google Firebase.

Money can buy anything (but will it be flexible/cheap enough), time can build anything (but will it be secure enough).


Almost all systems will need a central database to store data; you can choose a traditional relational SQL database or a newer NoSQL database. MySQL is a good/cheap relational SQL database and MongoDB is a good NoSQL database. You will need to decide how your app talks to the database (directly or via an API (protected by OAuth or limited-access tokens)). It is a bad idea to open a database directly to the world with no security. Sites like www.shodan.io automatically scan the Internet looking for open databases or systems and will report your site as insecure to anyone. It is in your interest to develop secure systems in all stages of development.

CRUD (Create, Read, Update and Delete) is a common group of database tasks that you can use to prove you can read, write, update and delete from a database. Performing CRUD operations is also a good benchmark to see how fast the database is. If the database is the slowest link then you can use memory to cache database values (read my guide here). Caching can turn a cheap server into a faster server. Learning by doing quickly builds skills, so “research”, “do” and “learn”.
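
As a minimal sketch (assuming a local MySQL database called test and hypothetical credentials, not any specific setup from this post), you could time a CRUD round trip with PDO and bound parameters like this:

<?php
// Rough CRUD timing sketch – the database name and credentials below are
// placeholders; adjust to suit your own environment.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=test', 'dbuser', 'dbpassword',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$pdo->exec('CREATE TABLE IF NOT EXISTS users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50))');

$start = microtime(true);
$pdo->prepare('INSERT INTO users (name) VALUES (?)')->execute(['Simon']);             // Create
$rows = $pdo->query('SELECT * FROM users')->fetchAll(PDO::FETCH_ASSOC);               // Read
$pdo->prepare('UPDATE users SET name = ? WHERE name = ?')->execute(['Sam', 'Simon']); // Update
$pdo->prepare('DELETE FROM users WHERE name = ?')->execute(['Sam']);                  // Delete
printf('CRUD round trip took %.1f ms', (microtime(true) - $start) * 1000);
?>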

Most solutions will need a website (and a web server). Here is a good article comparing Apache and Nginx (the leading open source web servers).

Stacks and Technology – There are loads of development environments (stacks), frameworks and technologies that you can choose. Frameworks supposedly make things easier and faster, but frameworks and technologies change frequently (see the 2016 frameworks to learn guide and 2017 frameworks to learn guide) and can be abandoned, and be careful: most frameworks run ~30% slower than raw server-side and client code. I’d recommend you learn a few technologies like NGINX, NodeJS, PHP and MySQL and move up from there.

The MEAN stack is a popular web development platform (MEAN = MongoDB, ExpressJS, Angular and NodeJS).

Apps can be developed for Apple platforms by signing up here (about $150 AUD a year) and using the Xcode IDE. Apps can be developed for the Android platform using Android Studio (publishing on Google Play costs a one-off fee of about $25 USD). Microsoft has a developer portal for the Windows platform. Google also has an online scalable database as a service called Firebase. If you look hard enough you will find a service for everything, but connecting those services can be time-consuming, costly or make security and a scalable solution impossible, so beware of relying on as-a-service platforms. I used the Corona SDK to develop an app but abandoned the platform due to changes in the vendor’s communication and enforced policies.

If you are not sure, don’t be afraid to ask for help on Twitter.

Twitter is awesome for finding experts

Recent twitter replies to a problem I had.

Learning about new Technology and Stacks

To build the knowledge you need to learn stuff, build stuff, test (benchmark), get feedback and build more stuff. I like to learn about new technology and stacks by watching Udemy courses and they have a huge list of development courses (Web Development, Mobile Apps, Programming Languages, Game Development, Databases,  Software Testing,  Software Engineering etc).

I am currently watching a Practical iOS 11 course by Stephen DeStefano on Udemy to learn about unreleased/upcoming features on the Apple iPhone (learning about XCode 9, Swift 4, What’s new in iOS 11, Drag and drop, PDF and ARKit etc).

Udemy is awesome (Udemy often have courses for $15).

If you want to learn HTML go to https://www.w3schools.com/.

https://devslopes.com/ have a number of development-related courses and an active community of developers in a chat system.

You can also do formal study via an education provider (e.g. a Bachelor of Computer Science at UNE, or a Certificate IV in Programming or Diploma in Software Development at TAFE).

I would recommend you use Twitter and follow keywords (hashtags) around key topics (e.g #www, #css, #sql, #nosql, #nginx, #mongodb, #ios, #apple, #android, #swift, #objectivec, #java, #kotlin) and identify users to follow. Twitter is great for picking up new information.

I follow the following developers on YouTube (TheSwiftGuy, AppleProgrammer, AwesomeTuts, LetsBuildThatApp, CodingTech etc)

Companies like https://www.civo.com/ offer developer-friendly features with hosting, https://www.pebbled.io/ offer to develop for you and https://serverpilot.io/ help you spin up software on hosting providers.

What To Develop

First, you need to break down what you need (e.g. “I want an app for iOS and Android in 5 months that does XYZ. The app must be secure and fast. Users must be able to register an account and update their profile”).

How high you design your project to scale depends on your peak expected/active concurrent users (ratio of paying and free users). You can develop your app to scale very high but this may cost more money initially; it can be bad to pay to ensure scalability early. As long as you have a good product and robust networking/retry routines and UI, you don’t need to scale high early.

Once you know what you need you can search the open-source community for code that you can use. I use Alamofire for iOS network requests, SwiftyJSON for processing JSON data and other open-source software. The only downside of using open source software is it may be abandoned by the creators and break in the future. Saving your time early may cost you time later.

Then you can break down what you don’t want (e.g. “I don’t want a web app, a Windows Phone app or a Windows desktop app”). From here you will have a list of what you need and what you can avoid.

You will also need to choose a project management methodology (I have blogged about this here). With a list of action items and a plan, you can work through developing your app.

While you are researching, it is a good idea to develop smaller fun projects to refine your skills. There are a number of System Development Life Cycles (SDLCs); don’t worry if you get stuck, seek advice or move on. It is a good idea to get users beta testing your app early and seek feedback. Apple has the TestFlight app where you can send beta versions of apps to beta testers. Here is a good guide on Android beta testing.

If you are unsure about certain user interface options or features, divide your beta testers and perform A/B or split testing to determine the most popular user interfaces. Capturing user data and logs can also help with debugging and analysing user actions.
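
A simple way to split testers deterministically is to hash their user ID; the sketch below (a hypothetical helper, not from any particular framework) always puts the same user in the same group:

<?php
// Deterministic A/B bucketing sketch – the same user ID always lands in the
// same group, so each tester sees a consistent interface.
function abGroup($userId) {
    return (crc32($userId) % 2 === 0) ? 'A' : 'B';
}
echo abGroup('user-1001'); // prints 'A' or 'B', consistently for this user
?>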

Practice

Develop smaller proof of concept apps in new technologies or frameworks and you will build your knowledge and uncover limitations in certain frameworks and how to move forward with confidence. It is advisable to save your source code for later use and to share with others.

I have shared quite a bit of code at https://simon.fearby.com/blog/ that I refer to from time to time. I should have shared this on GitHub but I know Google will find this if people want it.

Get as much feedback as you can on what you do and choose (don’t trust the first blog post you read (me included)).

Most companies offer webinars on their products. I like the NGINX webinars. Tutorialspoint have courses on development topics. Sitepoint is a good development site that offers free books, courses, and articles. Programmable Web has good information on what APIs are.

You may want to document your application flow to better understand how the user interface works.

Useful Tools

Balsamiq Mockups and Blueprint are handy for mocking up applications.

C9.io is a great web-based IDE that can connect to a VM on AWS or Digital Ocean. I have a guide on connecting Cloud 9 to an AWS VM here.

I use the Sublime Text 3 text editor when editing websites locally.

(image courtesy of https://www.sublimetext.com/ )

I use the Mac Paw app to help test API’s I develop locally.

(image courtesy of https://paw.cloud )

Snippets is a great application for the Mac for storing code snippets.

I use the Cornerstone Subversion app for backing up my code on my Mac.

Web servers: IIS (https://www.iis.net/), NGINX, Apache.

NodeJS programming manual and tutorials.

I use Little Snitch (guide here) for simulating network-down conditions in app development.

I use the Forklift file manager on OSX.

Databases: SQL tutorials, NoSQL Tutorials, MySQL documentation.

Siege is a command-line HTTP load testing tool.
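
For example (assuming Siege is installed), the following hits a site with 25 concurrent users for one minute:

siege -c 25 -t 1M https://www.yourwebsite.com/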


http://loader.io/ is a nice web-based benchmarking tool.

Bootstrap is an essential mobile responsive framework.

Atlassian Jira is an essential project tracking tool. More on Agile Epics v Stories v Tasks on the Atlassian community website here. I have a post on developing software and staying on track here using Jira.

Jsfiddle is a good site that allows you to share code you are working on or having trouble with.

Dribbble is a “show and tell” site for designers and creatives.

Stackoverflow is the go-to place to ask for help.

Things I care about during development phases.

  • Scalability
  • Flexibility
  • Risk
  • Cost
  • Speed

Concentrating too much on one facet can risk weakening the other facets. Good programmers can recommend and deliver a solution that is strong in all areas (I hate developing apps that are slow but secure, or scalable but complex).

Platforms

You can signup for online servers like Azure, AWS (my guide here) or you can use a cheaper CPanel based hosting. Read my guide on the costs of running a cloud-based service.

Get a free Digital Ocean server for two months by using this link. Read my blog post here to help set up your VM. You can always run Ubuntu on your local machine too (read my guide here). Don’t forget to use a Git code repository like GitHub or Bitbucket.

Locally you can install Ubuntu (developers edition) and have a similar environment as cloud platforms.

Lessons Learned

  • Deploy servers close to the customers (Digital Ocean is too far away to scale in Australia).
  • Accessibility and testing (make things accessible from the start).
  • Backup regularly (Use GIT, backup your server and use Rsync to copy files to remote servers and use services like backblaze.com to backup your machine).
  • Transportability of technology (Use open technology and don’t lock yours into one platform or service).
  • Cost (expensive and convenient solutions may be costly).
  • Buy in themes and solutions (wrapbootstrap.com).
  • Do improve what you have done (make things better over time). Think progress and not perfection.

There is no shortage of online comments bagging certain frameworks or platforms so look for trends and success stories and don’t go with the first framework you find. Try candidate frameworks and services and make up your own mind.

A good plan, violently executed now, is better than a perfect plan next week. – General George S. Patton

Costs

Sometimes cost is not the deciding factor (read my blog post on Alibaba Cloud). You should estimate your app’s costs per 1,000 users. What do light v heavy users cost you? I have a blog post on the approximate cost of cloud services. I started researching a scalable NoSQL platform on IBM Cloudant and it was going to cost $4,000 USD a month, and integrating my own app logic and security was hard. I ended up testing MongoDB Cloud where I can scale to three servers for $80 a month, but for now I am developing my current project on my own AWS server with a MongoDB instance. Read my blog post here on setting up MongoDB and read my blog post on the best MongoDB GUI.

Here is a great infographic for viewing what’s involved in mobile app development.

You can choose a number of tools or technologies to achieve your goals; for me it is doing it economically, securely and in a scalable way that has predictable costs. It is quite easy to develop something that is costly, won’t scale or is not secure or flexible. Don’t get locked into expensive technologies. For example, AWS has a user-pays NodeJS service called Lambda where you get a million free hits a month and then get charged $0.0000002 per request thereafter. This sounds good but I prefer fixed pricing/DIY servers as they allow me to build my own logic into apps (this is more important than scalability).
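
To put that per-request price in perspective: 10,000,000 requests beyond the free tier would cost 10,000,000 x $0.0000002 = $2 USD (before any compute-time charges, which Lambda bills separately).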

Using open-source software or off-the-shelf solutions may speed things up initially, but will it slow you down later? Ensure free solutions are complete and supported and that frameworks are actually helping. Do you need one server or multiple servers (guide on setting up a distributed MySQL environment)? You can read about my scalability-on-a-budget journey here. You can speed up a server in two ways: scale up (add more MHz or CPU cores) or scale out (add more servers).

Start small and use free frameworks and platforms but have a tested scale-up plan, I researched cheap Digital Ocean servers and moved to AWS to improve latency and tested MongoDB on Digital Ocean and AWS but have a plan to scale up to cloud.mongodb.com if need be.

Outsource (contractors) 

Remember, outsourcing work tasks (or complete outsourcing of development) can buy you time and/or deliver software faster. Outsourcing can also introduce risks and be expensive. Ask for examples of previous work and get raw numbers on costs (now and in the future) and the concurrent users a particular bit of outsourced work will achieve.

If you are looking to outsource work, do look at work that the person or company has done before (is it fast, compliant, mobile-scalable, secure, robust and backed up, and do you have rights to edit and own the code and IP etc). I’d be cautious of companies who say they can do everything and don’t show live demos.

Also, beware of restrictions on your code set by the contractors. Can they do everything you need (compare with your list of MoSCoW must-haves)? Sometimes contractors only code or do what they are comfortable with, and that can impact your deliverables.

Do use a private Git repository (that you own) like GitHub or Bitbucket to secure your code, and use software like Trello or Atlassian JIRA to track your project. Insist the contractors use your repository so you retain control.

You can always sell equity in your idea to an investor and get feedback/development from companies like Bluechilli.

Monetization and data

Do have multiple monetization streams (initial app purchase cost, in-app purchases, subscriptions, in-app credit, advertising, selling code/components etc). Monthly revenue over a yearly subscription works best to ensure cash flow.

Capture usage data and determine trends around successful engagement; improve what works. Use A/B testing to roll out new features.

I like Backblaze’s post on getting your first 1,000 customers.

Maintenance, support risk and benefits

Building your own service can be cheaper but also riskier: if you fail to secure an app you are in trouble, if you cannot scale you are in trouble, and if you don’t update your server when vulnerabilities come out you are in trouble. Also, Google monetization strategies. Apple apps do appear to deliver more profit than Android. Developers often joke “Apple devices offer 90% of the profits and 10% of the problems and Android apps offer 90% of the problems and 10% of the profits”.

Also, Apple users tend to update to the latest operating system sooner, whereas Android devices are rather fragmented.

Do inform your users with self-service status pages and informative error messages, and don’t annoy users.

Use Free Trials and Credit

Most vendors have free trials, so use them.

AWS has a 12-month free tier: https://aws.amazon.com/free/

Use this link to get two months free with Digital Ocean.

Microsoft Azure also gives away free credit.

Google Cloud also has free credit.

Don’t be afraid to ask.

MongoDB Cloud also gives away free credit if you ask.

Security

Sites like Shodan.io will quickly reveal weaknesses in your server (and services); this will help you build robust solutions from the start before hackers find them. Read https://www.owasp.org/index.php/Main_Page to learn how to develop secure websites. Listen to the Security Now podcast to learn how the technology works and how it breaks. Following Troy Hunt is recommended to keep up to date with security in general. @0xDUDE is a good ethical hacker to follow to stay up to date on security exploits, and @GDI_FDN is a good non-profit organization that helps defend sites that use open source software.

White hat hackers exist but so do black hat ones.

Read the Open Web Application Security site here. Read my guide on setting up public key pinning in security certificates here.

I use the ASafaWeb site to test sites for common ASP.NET security flaws. If you have a secure certificate on your site you will need to ensure the certificate is secure and up to date with the SSL Labs SSL Test site.


Once your website’s IP address is known (get it from SSL Labs), run a scan over your site with https://www.shodan.io/ to find open ports or security weaknesses.

Shodan.io allows you and others to see public information about your server and services. You can read about well-known internet ports here.

Anyone can find your server if you are running older (or current) web servers and/or services.

It is a good idea to follow security researchers like Steve Gibson and Troy Hunt and stay up to date with live exploits. http://blog.talosintelligence.com is also a good site for reading technical breakdowns of exploits.

Networking

Do share and talk about what you do with other developers. You can learn a lot from other developers and this can save you loads of time and mistakes. True developers love talking about their code and solutions.

Decision Making

Quite a lot of time can be spent deciding on what technology or platform to use. I decide by factoring in cost, risk and security over flexibility, support and scalability. If I need flexibility, lower support or scalability then I’ll choose a different technology/platform. Generally, technology can help with support. Scalable solutions need effort from start to finish (it is quite easy to slow down any technology or service).

Don’t be afraid to admit you have chosen the wrong technology or platform. It is far easier to research and move on than live with poor technology.

If you have chosen the wrong technology and stick with it, you (and others) will loathe working with it (impacting productivity/velocity). Do you spend time swapping technology or platforms now, or be less productive later?

Intellectual property and Trademarks

Ensure you search international trademarks for your app terms before you start using them. The Australian ATO has a good Australian business name checker here.

https://namechk.com/ is also a good place to search for your app ideas name before you buy or register any social media accounts.

Using https://namechk.com/ you can see “mystartupidea” name is mostly free.

And the name “microsoft” is mostly taken.

Seek advice from start-up experts from https://www.bluechilli.com/ like Alan Jones.

See my guide on how to get useful feedback for your ideas here.

Tips

  1. Use Git source control systems like GitHub or Bitbucket from the start, and backup your server and environments offsite frequently. Digital Ocean charges 20% of your server’s cost to back it up. AWS has multiple backup offerings.
  2. Start small and scale up when needed.
  3. Do lots of research and test different platforms, frameworks, and technologies and you will know what you should choose to develop with.

(Image above found at http://startupquotes.startupvitamins.com/. Follow Startup Vitamins on Twitter here.)

You will know you are a developer when you have gained knowledge and experience and can automatically avoid technologies that will not fit a solution.

Share

Don’t be afraid to share what you know (read my blog post on this here). Sharing allows you to solidify your knowledge and get new information. Shane Bishop from the EWWW Image Optimizer WordPress plugin wrote Setting up a fast distributed MySQL environment with SSL for us. If you have something to share on here, please let me know on Twitter here.

It’s never too late to do

One final tip: knowledge is not everything; planning and research are key. A mind that can’t develop may be better than a mind that can, because it has no experience (or baggage) and may find faster ways to do things. Thanks to http://zachvo.com/ for teaching me this during a recent WordPress re-deployment. Sometimes the simplest solution is the best.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

DRAFT: 1.86 added short link

Short: https://fearby.com/go2/develop/

Filed Under: Advice, Android, Apple, Atlassian, Backup, BitBucket, Blog, Business, Cloud, CoronaLabs, Cost, Development, Domain, Firewall, Free, Git, GitHub, Hosting, JIRA, mobile app, MySQL, Networking, NodeJS, OS, Project Management, Scalability, Scalable, Security, Server, Software, Status, Trello, VM Tagged With: ideas

Digital disruption or digital tinkering

December 20, 2016 by Simon

The biggest buzzwords used by prime ministers, presidents and management these days have been “innovation” and “digital disruption”. As a developer or manager do you understand what goes into a new digital customer-focused service like an API or data-driven portal? How well is your business or product doing in the age of innovation and digital disruption? Do you listen to what your customers want or need?

When to Pivot

There comes a time when businesses realize they need to pivot in order to stay viable.

  • People don’t rent VHS movies, they download movies from the Internet.
  • Printing photographs, who does that anymore?
  • People learn from videos on YouTube and Khan Academy for free, or pay for courses from Pluralsight.com, Coursera.org, Udemy.com or Lynda.com.
  • Information is wanted 24/7, with a call to customer service only if information cannot be sourced online.

(Image: Kodak bankruptcy – image source.)

Chances are 90% of your customers are using mobile or tablet devices on any given day.  If you are not interacting with your customers via personalized/mobile technology prepare to be overtaken as a business.

Pivoting may require you to admit you are behind the eight ball and take a risk and set up a new customer-focused web portal, API, app or services. Make sure you know what you need before the lure of services, buzzwords and “shiny object syndrome” from innovation blog posts and consultants takes hold.

Advocates and blockers of change

Creating change in an organization that bean-counts every dollar, exterminates all risk and ignores ideas is a hard sell. How do you get support from those with power and endless rolls of red tape?

Bad reasons for saying no to innovation:

  • You can’t create a mobile app to help customers because the use of our logo won’t be approved.
  • Don’t focus on customer-focused automation, analytics and innovation because internal manual processes need attention first.
  • Possible changes in 2 years outside of our control will possibly impact anything we create.
  • Third eyelids.

Management’s support of experimentation and change is key to innovation. Harvard Business Review has a great post on this: The Why, What, and How of Management Innovation.

Does your organization value innovation? This is possibly the best video that describes how the best businesses focus on innovation and take risks. Simon Sinek: How great leaders inspire action.

Here is a great post on How To Identify The Most Dangerous Person In Your Company who blocks innovation and change.

Also a few videos on getting staff on board and motivation and productivity.

Project Perspectives

Project Focus

  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.

I said that three times (because it is important).

Before you begin coding, learn from those who have failed

Here are some of the best tips I have collected from start-ups who have failed.

  • We didn’t spend enough time talking with customers and we’re rolling out features that I thought were great, but we didn’t gather enough input from clients. We didn’t realize it until it was too late. It’s easy to get tricked into thinking your thing is cool. You have to pay attention to your customers and adapt to their needs.
  • The cloud is great. Outsourcing is great. Unreliable services aren’t. The bottom line is that no one cares about your data more than you do – there is no replacement for a robust due diligence process and robust thought about avoiding reliance on any one vendor.
  • Your heart doesn’t get satisfied with any levels of development. Ignore your heart. Listen to your brain.
  • You can always iterate and extrapolate later. Get your feet wet ASAP.
  • As the product became more and more complex, the performance degraded. In my mind, speed is a feature for all web apps so this was unacceptable, especially since it was used to run live, public websites. We spent hundreds of hours trying to speed up the app with little success. This taught me that we needed to have benchmarking tools incorporated into the development cycle from the beginning due to the nature of our product.
  • It’s not about good ideas or bad ideas: it’s about ideas that make people talk. Make some aspect of your product easy and fun to talk about, and make it unique.
  • We really didn’t test the initial product enough. The team pulled the trigger on its initial launches without a significant beta period and without spending a lot of time running QA, scenario testing, task-based testing and the like. When v1.0 launched, glitches and bugs quickly began rearing their head (as they always do), making for delays and laggy user experiences aplenty — something we even mentioned in our early coverage.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • Our biggest self-realization was that we were not users of our own product. We didn’t obsess over it and we didn’t love it. We loved the idea of it. That hurt.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.
  • It may seem surprising that a seemingly successful product could fail, but it happens all the time. Although we arguably found product/market fit, we couldn’t quite crack the business side of things. Building any business is hard, but building a business with a single app offering and half of your runway is especially hard.

Buzzwords

The Innovation landscape is full of buzzwords, here are just a few you will need to know.

  • API – Application Program Interface: a method that uses a web address (http://www.server.com/api/important/action/) to accept requests and deliver results. Learn more about APIs here: http://www.programmableweb.com/category/all/apis
  • AR – Augmented reality is where you use a screen on a mobile, tablet or PC to overlay 3D or geospatial information.
  • Big Data – Is about taking a wider view of your business data to find insights and to predict and improve products and services.
  • BYOD – Bring your own device.
  • BYOC – Bring your own cloud.
  • Caching – Using software to deliver data from memory rather than from a slower database each time.
  • Cloud – Someone else’s computer that you run software or services on.
  • CouchDB – An Apache designed Key/Value NoSQL JSON store database that focuses on eventual replication.
  • DaaS – Desktop as a service
  • DbaaS – Database as a service (hardware and database software maintained by others but your data).
  • DBMS – Database Management System (the database software and its management GUI).
  • HPC – High-Performance Computing.
  • IaaS – Cloud-based Servers and infrastructure (Google Cloud, Amazon AWS, Digital Ocean and Vultr and Rackspace).
  • IDaaS – Third Party Authorisation management
  • IOPS – Input/Output Operations Per Second – what throughput limitations apply to the interface or software in question.
  • IoT – Internet of Things: small devices that can display, sense or update information (an internet-connected fridge or a button that orders more toilet paper).
  • iPaaS– integration Platform as a Service (software to integrate multiple XaaS)
  • JSON – A better CSV file (read more here)
  • MaaS – Monitoring as a Service (e.g Keymetrics.io)
  • CaaS – Communication as a Service (e.g. http://www.twilio.com)
  • Micro-services – an existing service that is managed by another vendor (e.g Notifications, login management, email or storage), usually charged by usage.
  • MongoDB – Another Key/Value NoSQL JSON Database that has upfront Replication
  • NoSQL – A database that stores data in JSON documents instead of normalised related tables.
  • PaaS – A larger stack of SaaS that you can customise from vendors: Azure (Active Directory, Compute, Storage Blobs etc), AWS (SQS, RDS, ElastiCache, Elastic File System), Google Cloud (Compute Engine, App Engine, Datastore), Rackspace etc.
  • Rate Limiting – Ability to track and limit a user’s request to an API.
  • SaaS – A smaller software component that you can use or integrate (Google Apps, Cisco WebEx, GoToMeeting).
  • Scalable – the ability to have a website or service handle thousands to millions of hits, with a baked-in way to handle exponential growth.
  • Scale Up – Increase the CPU speed and thus workload
  • Scale Out – Adding more servers and distributing the load instead of making servers faster.
  • SQL – A traditional relational database query language.
  • VR – Virtual Reality is where you totally immerse yourself in a 3D world with a head-mounted display.
  • XaaS – Anything as a service.

External or Online Advice

A consultant once joked to our team that their main job was to “Con” and “Insult” you (CONinSULTant). Their main job is to promote what they know/sell and sow seeds of doubt about what you do. Having said that, please take my advice with a grain of salt (I am just relaying what I know/prefer).

Consultants need to rapidly convert you to their way of thinking (and services); consultants gloss over what they don’t know and lead you down a happy path to solution nirvana (often ignoring your legacy apps or processes; any roadblocks are relished as an opportunity for more money-making). This is great if you have endless buckets of money and want to rewrite things over and over.

Having consultants design and develop a solution is not all bad, but that would make this developer-focused blog post boring.

Microsoft IIS, Apache, NGINX and Lighttpd are all good web servers but each has a different memory footprint, performance profile and feature set when delivering static v dynamic content, and each platform has a maximum number of concurrent users it can handle per second for a given server configuration.

You don’t need expensive solutions; read this blog post on “How I built an app with 500,000 users in 5 days on a $100 server”.

Snip: I assume my apps will be successful. There’s no point in building an app assuming it won’t be successful. I would not be able to sleep if my app gains traction and then dies due to bad tech. I bake minimum viable scalability principles into my app. It’s the difference between happiness and total panic. It’s what I think should be part of an app MVP (Minimum Viable Product).

Blind googling to find the best platform can be misleading as it is hard to compare apples to apples. Take your time and write some code and evaluate for yourself.

  • This guide highly recommends Microsoft .NET and IIS web servers: https://www.ageofascent.com/2016/02/18/asp-net-core-exeeds-1-15-million-requests-12-6-gbps/
  • This guide says G-WAN, NGINX and Apache are good http://gwan.com/benchmark

Once you start worrying about scalability you start to plan for multiple servers, load balancing, replication and caching, so be prepared to open your wallet.

I prefer the free NGINX, and if I need more grunt down the track I can move to NGINX Plus as it has loads of advanced scalability and caching options: https://www.nginx.com/products/.
Alternatively, you can use XaaS for everything and have other people worry about the uptime/scaling and data storage, but I find it is inevitable that you will need the flexibility of a self-managed server and FULL control of the core processes.

Golden rule = prove it is cheaper/faster/more reliable and don’t just trust someone. 

Common PaaS, SaaS and Self-Managed Server Vendors

Amazon AWS and Azure are the go-to cloud vendors who offer robust and flexible offerings.

Azure: https://azure.microsoft.com/en-us/

Amazon AWS: https://aws.amazon.com/

Google Cloud has many cloud offerings but product selection is hard. Prices are high and Google tends to kill off products that don’t make money (e.g. Google Gears).

Google Cloud:

https://cloud.google.com/

Simple Self Managed Servers

If you want a server in the cloud on the cheap Linode and Digital Ocean have you covered.

  • Digital Ocean: http://www.digitalocean.com
  • Vultr: https://www.vultr.com/ (my setup guide: https://www.fearby.com/article/setting-vultr-vm-configuring/)
  • Linode: https://www.linode.com/

High-End Corporate vendors

  • Rackspace: https://www.rackspace.com/en-au/cloud
  • IBM Cloud: http://www.ibm.com/cloud-computing/au/#infrastructure

Other vendors

  • Engineyard: http://www.engineyard.com/
  • Heroku: https://www.heroku.com/
  • Cloud66: http://www.cloud66.com/
  • Parse: DEAD

Moving to Cloud Pro’s

  • Lowers Risk
  • Outsource talent
  • Scale to millions of users/hits
  • Pay for what you use
  • Granular access
  • Potential savings *
  • Lower risk *

Moving to Cloud Con’s

  • Usually billed in USD
  • Limited upload/downloads or API hits a day
  • Intentional tier pain points (Limited storage, hits, CPU, data transfers, Minimum servers).
  • Cheaper multi-tenant servers v expensive dedicated servers with dedicated support
  • Limited IOPS (e.g. 30 API hits a second, then $100 per additional 10 req/sec)
  • XaaS Price changes
  • Not fully integrated (still need code)
  • Latency between Services.
  • Limited access for developers (not granular enough).
  • Security

Vendors can change their prices whenever they want. I had a cluster of MongoDB servers running on AWS (via http://www.mongodb.com/cloud/) and one day they said they needed to increase their prices because they had underestimated the costs of the AWS servers. They gave me some credit but I was instantly paying more and was also tied to USD (not AUD). A fall in the Australian dollar will impact bills in a big way.

Vendor Uptime:

Not all vendors are stable, do your research on who are the most reliable: https://cloudharmony.com/status

Quick Status Pages of leading vendors.

  • AWS: https://status.aws.amazon.com/
  • Azure: https://azure.microsoft.com/en-us/status/
  • Vultr: https://status.vultr.com/
  • Digital Ocean: https://status.digitalocean.com/
  • Google: https://status.cloud.google.com/
  • Heroku: https://status.heroku.com/
  • LiNode: https://status.linode.com/
  • Cloud66: http://status.cloud66.com/

Some vendors have patchy uptime


Management Software and Support:

Don’t lock in a vendor(s) until you have tested their services and management interfaces and can accurately forecast your future app costs.

I found that Digital Ocean was the simplest to get started, had capped prices and had the best documentation. However, Digital Ocean do not sell advanced services or advanced support and they did not have servers in Australia.

Google Cloud left a lot to be desired with product selection, setup and documentation. It did not take me long to realize I would be paying a lot more on Google platforms.

Azure was quite clean and crisp but lacked controls I was looking for. Azure is designed to be simple with a professional appearance (I found the default security was not high enough for my unmanaged Ubuntu servers). Azure was 4x the cost of Digital Ocean servers and 2x the cost of AWS.

AWS management interfaces were very confusing at first but support was not far away online. AWS seemed to have the most accurate cost estimators and developer tools, which made it my default choice.

Free Trials

When searching for a cloud provider to test look for free trials and have a play before you decide what is best.

https://aws.amazon.com/free/ – 12 Month free trial.

https://azure.microsoft.com/en-us/free/ –  $200 credit.

Digital Ocean: 2 months free for new customers.

Cloudant offered $50 free a month for a single multi-tenant NoSQL database, but after the IBM acquisition the costs seem steep (financing is available, so it must be expensive). I walked away from IBM because it was going to cost me $4,000 a month for 1 dedicated Cloudant CouchDB node.

Costs

It is hard to forecast your costs if you do not know what components you will use, what the CPU activity will be and what data will be delivered.

Google and AWS have a confusing mix of base rates, CPU credits, and data costs. You can boost your credits and usage but it will cost you compared to a flat rate server cost.

Digital Ocean and Linode offer great low rates for unmanaged servers and reasonable extra charges (other vendors will scalp you from the get-go), but they lack a global presence.

Azure is a tad more expensive than AWS and a lot more expensive than Digital Ocean.

At some point you need to spin up some servers, play around and, if need be, change to another vendor. I was tempted by the IBM Cloudant CouchDB DBaaS but it would have been $4,000 USD a month (it did come with 24/7 techs that monitored the service for me).

Databases

Relational databases like MySQL and SQL Server are solid choices but replication can be tricky. See my guide here.

  • NoSQL databases are easier to scale up and out, but more care has to be given to the software controlling the data and collisions; relational databases are harder to scale but are designed to enforce referential integrity.

Design what you need and then choose a relational database, a NoSQL database or a mix of databases. A good API can join a mix of databases and deliver the best of both worlds.

E.g. geographic data may best be served from MongoDB but related customer data from MySQL or MS SQL Server.

Database cost will also impact your database decisions. E.g. why set up a SQL Server when MySQL will do; why set up a MongoDB cluster when a single MongoDB instance will do?

Also, when you scale out, database capabilities vary:

  • Availability – Each client can always read and write data.
  • Consistency – All clients have the same view of the data
  • Partition Tolerance – The System works well despite physical network partitions.

(Image: the NoSQL CAP triangle – you can pick at most two of Consistency, Availability and Partition Tolerance.)

Database decisions will impact the code and complexity of your application.

Website and API Endpoint

The website will be the glue that sticks all the pieces together. An API on a web server (e.g. https://www.myserver.com/api/v1/do/something/important) may trigger these actions:

  1. Check the request origin (IP ban) – check the IP cache or request a new IP lookup.
  2. Validate SSL status.
  3. Check the user’s login tokens (are they logged in) – log output.
  4. Check a database (MySQL).
  5. Check for permissions – is this action allowed to happen?
  6. Check for rate limiting – has a threshold been exceeded?
  7. Check another database (MongoDB).
  8. Prepare data.
  9. Resolve the API request – return the data.
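
As a rough PHP sketch of such a pipeline (the banned list, token and rate-limit figures below are hypothetical and hardcoded purely for illustration; a real app would use a database or cache):

<?php
// Rough API pipeline sketch – hardcoded example data, not production code.
$bannedIps      = ['203.0.113.9'];
$validTokens    = ['secret-token' => ['can_do_action' => true]];
$hitsThisMinute = 3; // pretend this came from a cache like Redis/memcached

function apiDeny($code, $message) {
    http_response_code($code);
    return json_encode(['error' => $message]);
}

$ip    = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '127.0.0.1';
$token = isset($_GET['token']) ? $_GET['token'] : '';

if (in_array($ip, $bannedIps))              { echo apiDeny(403, 'IP banned'); exit; }           // 1. origin
if (!isset($validTokens[$token]))           { echo apiDeny(401, 'Invalid token'); exit; }       // 3. login
if (!$validTokens[$token]['can_do_action']) { echo apiDeny(403, 'Not allowed'); exit; }         // 5. permissions
if ($hitsThisMinute > 60)                   { echo apiDeny(429, 'Rate limit exceeded'); exit; } // 6. rate limit

// 4/7. query MySQL and/or MongoDB here, then 8/9. prepare and return the data
http_response_code(200);
echo json_encode(['ok' => true, 'data' => ['example' => 'payload']]);
?>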

A web server then becomes very important as it is managing a lot. If you decided to use a remote “as a service” ID management API or application endpoint, would each of these steps happen in a reasonable time-frame? StormPath may be a great service for ID auth but I had issues with reliability early on and costs were unpredictable; Google Firebase is great at application endpoints but can be expensive.

Carefully evaluate the pros and cons of going DIY/self-managed versus a mix of “as a service” and full “as a service”.

I find that NGINX and NodeJS is the perfect balance between cost, flexibility, scalability and risk [ link to my scalability guide ]. NodeJS is great for integrating MySQL, API or MongoDB calls into the back end in a non-blocking way. NodeJS can easily integrate caching and connection pooling to enhance throughput.

Mulesoft is a good (but expensive) API development suite https://www.mulesoft.com/platform/api

Location, Latency and Local Networks.

You will want to try and keep all of your servers and services as close as possible; don’t spin up a Digital Ocean server in Singapore if your customers are in Australia (the Netflix effect will see latency drop off a cliff at night). Also, having a database on one vendor and a web server on another vendor may add extra latency; try and use the same vendor in the same data centre.

Don’t forget SSL will add about 40ms to any request locally (and up to 200ms for overseas servers); that does impact maximum concurrent users (but you need strong SSL).

  • Application scalability on a budget (my journey)
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
  • The quickest way to setup a scalable development ide and web server
  • More here: https://fearby.com/

Also, remember that servers may have performance limitations (maximum IOPS); sometimes you need to pay for higher IOPS or performance to get better throughput.

Security

Ensure that everything is secure and logged, and that you have some sort of IP banning or rate limiting, session tokens/expiry and/or auto log-out.

Your servers need to be patched and potential exploits monitored; don’t delay updating software like MySQL and OpenSSL when exploits are known.

Consider getting advice from a company like https://www.whitehack.com.au/ where they can review your code and perform penetration testing.

  • Beyond SSL with Content Security Policy, Public Key Pinning etc
  • Update OpenSSL on a Digital Ocean VM
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

You may want to limit the work you do on authorization management and get a third party to do it; https://www.okta.com/ or http://www.stormpath.com can help here.

You will certainly need to implement two-factor authentication, OAuth 2, session tokens, forward secrecy, rate limiting, IP logging and polymorphic data return via the API.  Security is a big one.
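If you are using NGINX in front of your API, a minimal rate-limiting and IP-banning sketch looks like the below (the zone name, 10r/s rate, example IP and backend port are my placeholder assumptions, not settings from this article):

# File: /etc/nginx/nginx.conf (inside the http block)
limit_req_zone $binary_remote_addr zone=apilimit:10m rate=10r/s;

# File: /etc/nginx/sites-available/default (inside the server block)
location /api/ {
        deny 203.0.113.7;                  # example banned IP (placeholder)
        limit_req zone=apilimit burst=20;  # queue short bursts, reject floods
        proxy_pass http://127.0.0.1:3000;  # assumes a local NodeJS backend
}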

Here is a benchmark for an API hit overseas with and without SSL

consultant_002-1

Moving my Digital Ocean Server from Singapore to AWS in Australia dropped my API requests to under 200ms (SSL, complete authorization, logging and payload delivery).
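If you want to see where the time goes on your own endpoint, curl can break a request down into DNS, TCP connect, TLS handshake and total time (swap in your own URL):

curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' https://www.myserver.com/api/v1/do/something/important

The gap between connect and tls is roughly what the SSL handshake is costing you.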

Monitoring and Benchmarking

Monitoring your website’s health (CPU, RAM and disk) along with software and database monitoring is very important to maintain a service.

https://keymetrics.io/ is a great NodeJS service and API monitoring application.

consultant_004

PM2 is a great node module that integrates Keymetrics with NodeJS.

CPU Busy

Siege is a good command-line benchmark tool, check out my guide here.

http://www.loader.io is a great service for hitting your website from across the world.

AWS MongoDB Test

End to End Analytics

You should be capturing analytics from end to end (failed logins, invalid packets, user usage etc).  Caching content and blocking bad users can then be implemented to improve performance.

Developer access

All platforms have varied access to allow developers in to change things.  I prefer the awesome http://www.c9.io for connecting to my servers.

C9 IDE

If you go with high-level SaaS (Microsoft CRM, Sitecore CRM etc) you may be locked into outdated software that is hard for developers to modify and support.

Don’t forget your customers.

At this point, you will have a million thoughts on possible solutions and problems, but don’t forget to concentrate on what you are developing and whether it is viable.  Have you validated customer needs, and will you be working to solve those problems?

Project Pre-Mortem

Don’t be afraid to research what could go wrong. Are you about to spend money on adding another layer of software to improve something while not solving the problem at hand?

It is a good idea to quickly guess what could go wrong before deciding on a way forward.

  • Server scalability
  • Features not polished
  • Does not meet customer needs
  • Monetization Issues
  • Unknown usage costs
  • Bad advice from consultants
  • Vendors collapsing or being bought out.

Long game

Make sure you choose a vendor that won’t go broke. Smaller vendors like Parse were gobbled up by Facebook, which then closed Parse’s doors leaving customers in the lurch.  Even C9.io has been purchased by AWS and its future is uncertain.  Will Linode and Digital Ocean be able to compete against AWS and Azure? Don’t lock yourself into one solution and always have a backup plan.

Do

  • Do know what your goal is.
  • Make a start.
  • Iterate in public.
  • Test everything.

Don’t

  • Don’t trust what you have been told.
  • Don’t develop without a goal.
  • Don’t be attracted to buzzwords, new tech and shiny objects.

Good luck and happy coding.


Edit v1.11

Filed Under: Backup, Business, Cloud, Development, Hosting, Linux, MySQL, NodeJS, Scalability, Scalable, Security, ssl, Uncategorized Tagged With: digital disruption, Innovation

Connecting to an AWS EC2 Ubuntu instance with Cloud 9 IDE as user ubuntu and root

September 1, 2016 by Simon Fearby

Recently I set up an Amazon EC2 Ubuntu Server instance and wanted to connect it to the awesome Cloud 9 IDE. I was sick of interacting with a server through terminal windows.

Use this link and get $19 free credit with Cloud 9: https://c9.io/c/DLtakOtNcba

c9io15-004

Cloud 9 IDE (sample screenshot)

C9 IDE

Previously I was using Digital Ocean (my Digital Ocean setup guide here) and this was simple: you get a VM, you have a root account and you do what you want.  Amazon AWS, however, has extra layers of security that prevent logging in as root via SSH, and that can be a pain with Cloud 9 as your workspace tree is restricted to the ~/ (home) folder.

Below are the steps you need to connect to an AWS instance with user “ubuntu” and “root” with Cloud 9.

Connecting to an AWS instance with Cloud 9 as user “ubuntu”

1. Purchase and set up your AWS instance (my guide here).

2. You need to be able to log in to your AWS server from a terminal prompt (from OSX).  This may include opening port 22 in the AWS Security Group panel. Info on SSH logins here.

ssh -i ~/.ssh/yourawsicskeypair.pem ubuntu@your-aws-server-hostname

3. On your AWS server (from step 2) install NodeJS.

You will know node is installed if you get a version returned when typing the following bash command.

node -v

tip: If node is not installed you can run the Cloud 9 pre-requisites script (that includes node).

curl -L https://raw.githubusercontent.com/c9/install/master/install.sh | bash

4. Ensure you have created SSH key on Cloud 9 (guide here).

5. Copy your Cloud 9 SSH key to the clipboard.

6. On your AWS server (in step 2) edit the ~/.ssh/authorized_keys file and paste the Cloud 9 SSH key (after your AWS key pair that was added during the AWS setup) on a new line and save the file.
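If you would rather not edit the file by hand, you can append the key in one command (this assumes you saved the Cloud 9 key to a local file called c9key.pub; the file name and hostname are placeholders):

cat c9key.pub | ssh -i ~/.ssh/yourawsicskeypair.pem ubuntu@your-aws-server-hostname 'cat >> ~/.ssh/authorized_keys'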

7. Log in to Cloud 9 and click create Workspace then Remote SSH Workspace.

  • Name your workspace (all lowercase and no spaces).
  • Username: ubuntu
  • Hostname: Add your AWS ec2 server hostname.
  • Initial Path: This has to be ~/

c9io15-000

8. Click Create Workspace

c9io15-002

9. If all goes well you will have a prompt to install the prerequisites.

c9io15-001

If this fails check out the Cloud 9 guide here.

Troubleshooting: I had errors like “Project directory does not exist or is not writable” and “Unable to change File System Path in SSH Workspace” because I was trying to set the workspace path as “/” (this is not possible on AWS with the “ubuntu” account).

10. Now you should have a web-based IDE that allows you to browse your server, create and edit files, and run terminal instances that will reconnect if your net connection or browser tab drops out (you can even go to a different machine and continue with your session).

c9io15-003

Connecting to an AWS instance with Cloud 9 as user “root”

Connecting to your server as the “ubuntu” user is fine if you just need to work in your “ubuntu” home folder.  As soon as you want to start changing other settings outside of your home folder you are stuck.  Granting “ubuntu” higher privileges server-wide is a bad idea, so here is how you can enable “root” login via SSH.

WARNING: Logging in as ROOT IS BAD. You should only allow root login for short periods, and it is advisable to remove root login abilities as soon as you do not need them, especially in production.

Having root access while developing or building a new server saves me bucketloads of time, so let’s allow it.

1. Follow step 1 to 5 in the steps above (setup AWS, ssh access via terminal, install node, create cloud 9 ssh key, copy the cloud 9 ssh key to the clipboard).

2. SSH to your AWS server and edit the following file:

sudo nano /etc/ssh/sshd_config
# -- Make the following change
# PermitRootLogin without-password
PermitRootLogin yes

Save.
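The change will not take effect until the SSH service is restarted:

sudo service ssh restart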

3. Back up your root authorized_keys file

sudo cp /root/.ssh/authorized_keys /root/.ssh/authorized_keys.bak

4. Edit the root authorized_keys file and paste in your Cloud 9 SSH Key.

c9io15-005
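Because you are still logged in as “ubuntu”, you will need sudo to edit root’s file, e.g.:

sudo nano /root/.ssh/authorized_keys
# paste the Cloud 9 SSH key on a new line, then save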

5. Now you can create a Cloud 9 Connection to your server with root

  • Name your workspace (all lowercase and no spaces).
  • Username: root
  • Hostname: Add your AWS ec2 server hostname.
  • Initial Path: This has to be /

c9io15-007

tip:  If you have not added your SSH key correctly you will receive this error when connecting.

c9io15-006

6. You should now be able to connect to AWS ec2 instances with Cloud 9 as root and configure/do anything you want without switching to shell windows.

c9io15-009

Security

As a precaution, do check your website often with https://www.shodan.io to see if it has open software or is known to hackers.
Enjoy

If this guide has helped please consider donating a few dollars.


V1.6 security

Filed Under: Cloud, Domain, Hosting, Linux, NodeJS, Security, ssl Tagged With: AWS, c9, cloid, ssh, terminal

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

August 16, 2016 by Simon Fearby

This blog lists the actions I went through to setup an AWS EC2 Ubuntu Server and add the usual applications. This follows on from my guide to setup a Digital Ocean server server guide and Vultr setup guide.

Variable Pricing

Amazon Web Services give developers 12 months of free access to a range of products. Amazon don’t have flat-rate fees like Digital Ocean; instead, AWS grant you a minimum level of CPU credits for each server type. A “t2.micro” (1 CPU/1GB ram, $0.013/hour) gets 6 CPU credits an hour.  That is enough for a CPU to run at 10% all the time, and you can bank up to 144 CPU credits to spend when the application spikes.  The “t2.micro” is the free tier server (costs nothing for 12 months), but when the trial runs out that’s $9.50 a month.  The next server up is a “t2.small” (1 CPU, 2GB ram); you get 12 CPU credits an hour and can bank 288, enough for 20% CPU usage all the time.

The “t2.medium” server (2 CPUs, 4GB ram) allows 40% CPU usage, earning 24 CPU credits an hour with 576 bankable.  That server costs $0.052 an hour and is $38 a month. Don’t forget to cache content.

I used about 40 CPU credits generating a 4096-bit secure prime Diffie-Hellman key for an SSL certificate on my t2.micro server. More information on AWS instance pricing here and here.

Creating an AWS Account (Free Trial)

The signup process is simple.

  1. Create a free AWS account.
  2. Enter your CC details (for any non-free services) and submit your account.

It will take 2 days or more for Amazon to manually approve your account.  When you have been approved, navigate to https://console.aws.amazon.com, log in and set your region in the top right next to your name (in my case I will go with Australia ‘ap-southeast-2‘).

My console home is now: https://ap-southeast-2.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-2#LaunchInstanceWizard

Create a Server

You can follow the prompts to set up a free tier EC2 Ubuntu server here.

1. Choose Ubuntu EC2

2. Choose Instance Type: t2-micro (1x CPU, 1GB Ram)

3. Configure Instance: 1

4. Add Storage: /dev/sda1, 8GB+, 10-3000 IOPS

5. Tag Instance: Your own role specific tags

6. Configure Security Group: Default firewall rules.

7. Review

Tip: Create a 25GB volume (instead of 8GB) or you will need to add an extra volume and mount it with the following commands (format the volume and create the mount point before adding it to /etc/fstab).

sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /mydata
sudo nano /etc/fstab
(append the following line)
/dev/xvdf    /mydata    ext4    defaults,nofail    0    2
sudo mount -a
cd /mydata
ls -al

Part of the EC2 server setup was to save a .PEM file to your SSH folder on your local PC ( ~/.ssh/myserverkeypair.pem).

You will need to secure the file:

chmod 400 ~/.ssh/myserverkeypair.pem

Before we connect to the server we need to configure the firewall here in the Amazon Console.

Type Protocol Port Range Source Comment
HTTP TCP 80 0.0.0.0/0 Opens a web server port for later
All ICMP ALL N/A 0.0.0.0/0 Allows you to ping
All traffic ALL All 0.0.0.0/0 Not advisable long term but OK for testing today.
SSH TCP 22 0.0.0.0/0 Not advisable, try and limit this to known IP’s only.
HTTPS TCP 443 0.0.0.0/0 Opens a secure web server port for later

DNS

You will need to assign a static IP to your server (the public IP assigned at launch is not static). Here is a good read on connecting a domain name to the IP and assigning an Elastic IP to your server.  Once you have assigned an Elastic IP you can point your domain to your instance.

AWS IP
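If you prefer the command line over the console, the AWS CLI (installed below) can allocate and attach an Elastic IP; the instance ID and allocation ID here are placeholders:

aws ec2 allocate-address --domain vpc
# note the AllocationId that is returned (e.g. eipalloc-xxxxxxxx), then:
aws ec2 associate-address --instance-id i-123456789 --allocation-id eipalloc-xxxxxxxx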

Installing the Amazon Command Line Interface utils on your local PC

This is required to see your server’s console screen and then connect via SSH.

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
/usr/local/bin/aws --version

You now need to configure your AWS CLI; first generate Access Keys here. While you are there, set up Multi-Factor Authentication with Google Authenticator.

aws configure
> AWS Access Key ID: {Your AWS Access Key ID}
> AWS Secret Access Key: {Your AWS Secret Access Key}
> Default region name: ap-southeast-2 
> Default output format: text

Once you have configured your CLI you can connect and review your Ubuntu console output (the instance ID can be found in your EC2 server list).

aws ec2 get-console-output --instance-id i-123456789

Now you can hopefully connect to your server and accept any certificates to finish the connection.

ssh -i ~/.ssh/myserverkeypair.pem ubuntu@your-ec2-hostname

AWS Console

Success, I can now access my AWS Instance.

Setting the Time and Daylight Savings.

Check your time.

sudo hwclock --show
> Fri 21 Oct 2016 11:58:44 PM AEDT  -0.814403 seconds

My Daylight savings have not kicked in.

Install ntp service

sudo apt install ntp

Set your Timezone

sudo dpkg-reconfigure tzdata

Go to http://www.pool.ntp.org/zone/au to find your local NTP servers (or go here if you are outside Australia):

server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org
server 3.au.pool.ntp.org

Add the NTP servers to “/etc/ntp.conf” and restart the NTP service.
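For example:

sudo nano /etc/ntp.conf
# replace the default server/pool lines with the au pool servers listed above, then save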

sudo service ntp restart

Now check your time again and you should have the right time.

sudo hwclock --show
> Fri 21 Oct 2016 11:07:38 PM AEDT  -0.966273 seconds

🙂

Installing NGINX

I am going to be installing the latest v1.11.1 mainline development (non-legacy version). Beware of bugs and breaking changes here.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

NGINX is now installed. Try and get to your domain via port 80 (if it fails to load, check your firewall).

Installing NodeJS

Here is how you can install the latest NodeJS (development build); beware of bugs and frequent changes. Read the API docs here.

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

NodeJS is installed.

Installing MySQL

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
sudo mysql_install_db
sudo mysql_secure_installation
service mysql status

Install PHP 7.x and PHP7.0-FPM

I am going to install PHP 7 due to the speed improvements over 5.x.  Below were the commands I entered to install PHP (thanks to this guide)

sudo add-apt-repository ppa:ondrej/php
sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm
sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

NGINX Configuration

NGINX can be a bit tricky to set up for newbies and your configuration will certainly be different, but here is mine (so far):

File: /etc/nginx/nginx.conf

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /var/run/nginx.pid;
events {
        worker_connections 1024;
        multi_accept on;
}
http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;
        client_body_buffer_size      128k;
        client_max_body_size         10m;
        client_header_buffer_size    1k;
        large_client_header_buffers  4 4k;
        output_buffers               1 32k;
        postpone_output              1460;

        proxy_headers_hash_max_size 2048;
        proxy_headers_hash_bucket_size 512;

        client_header_timeout  1m;
        client_body_timeout    1m;
        send_timeout           1m;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        # ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        gzip on;
	gzip_disable "msie6";
	gzip_vary on;
	gzip_proxied any;
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_min_length 256;
	gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6

        access_log /var/log/nginx/myservername.com.log;

        root /usr/share/nginx/www;
        index index.php index.html index.htm;

        server_name www.myservername.com myservername.com localhost;

        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking

        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;


        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;

        # This is handled with the header above.
        #rewrite ^/(.*) https://myservername.com/$1 permanent;

        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }

        fastcgi_param PHP_VALUE "memory_limit = 512M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;

                # include snippets/fastcgi-php.conf;

                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;

                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }

        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

Test and Reload NGINX Config

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Don’t forget to test PHP with a script that calls ‘phpinfo()’.
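A quick way to do that (assuming the /usr/share/nginx/www web root from the config above; info.php is a throwaway file name) is:

echo '<?php phpinfo();' | sudo tee /usr/share/nginx/www/info.php
curl -s http://localhost/info.php | head
sudo rm /usr/share/nginx/www/info.php    # remove it when done, it leaks server details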

Install PhpMyAdmin

Here is how you can install the latest branch of phpmyadmin into NGINX (no apache)

cd /usr/share/nginx/www
mkdir -p my/secure/secure/folder
cd my/secure/secure/folder
sudo apt-get install git
git clone --depth=1 --branch=STABLE https://github.com/phpmyadmin/phpmyadmin.git

If you need to import databases into MySQL you will need to enable file uploads in PHP and set file upload limits.  Review this guide to enable uploads in phpMyAdmin.  If your database is large you may also need to change the “client_max_body_size” setting in nginx.conf (see guide here). Don’t forget to disable uploads and reduce the size limits in NGINX and PHP when you have finished uploading databases.
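As a sketch, allowing a 64MB import means raising the limits in both PHP and NGINX and restarting (64M is an assumption; size it to your dump file):

sudo nano /etc/php/7.0/fpm/php.ini
# set: upload_max_filesize = 64M
# set: post_max_size = 64M
sudo nano /etc/nginx/nginx.conf
# set (in the http block): client_max_body_size 64m;
sudo service php7.0-fpm restart
sudo nginx -s reload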

Note: phpMyAdmin can be a pain to install, so don’t be afraid of using an alternative management GUI.  Here is a good list of MySQL management interfaces. Also check your OS app store for native MySQL database management clients.

Install an FTP Server

Follow this guide here, then:

sudo nano /etc/vsftpd.conf
write_enable=YES
sudo service vsftpd restart

Install: oracle-java8 (using this guide)

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
cd /usr/lib/jvm/java-8-oracle
ls -al
sudo nano /etc/environment
> append JAVA_HOME="/usr/lib/jvm/java-8-oracle"
echo $JAVA_HOME
sudo update-alternatives --config java

Install: ncdu – Interactive tree based folder usage utility

sudo apt-get install ncdu
sudo ncdu /

Install: pydf – better quick disk check tool

sudo apt-get install pydf
sudo pydf

Install: rcconf – display startup processes (handy when confirming pm2 was running).

sudo apt-get install rcconf
sudo rcconf

I started “php7.0-fpm” as it was not starting on boot.

I had an issue where PM2 was not starting up at server reboot and reporting to https://app.keymetrics.io.  I ended up repairing the /etc/init.d/pm2-init.sh as mentioned here.

sudo nano /etc/init.d/pm2-init.sh
# edit start function to look like this
...
start() {
    echo "Starting $NAME"
    export PM2_HOME="/home/ubuntu/.pm2" # <== add this line
    super $PM2 resurrect
}
...

Install IpTraf – Network Packet Monitor

sudo apt-get install iptraf
sudo iptraf

Install JQ – JSON Command Line Utility

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Install Ruby – the commands below ended up in a slightly odd order because some did not work the first time.

sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
git clone git://github.com/sstephenson/rbenv.git ~/.rbenv
git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
rbenv install 2.2.3
rbenv global 2.2.3
rbenv rehash
ruby -v
sudo apt-get install ruby-dev
sudo gem install bundler

Install Twitter CLI – https://github.com/sferik/t

sudo gem install t
t authorize  # follow the prompts

Mutt (send mail by command line utility)

Help site: https://wiki.ubuntu.com/Mutt

sudo nano /etc/hosts
127.0.0.1 localhost localhost.localdomain xxx.xxx.xxx.xxx yourdomain.com yourdomain

Configuration:

sudo apt-get install mutt
[configuration none]
sudo nano /etc/postfix/main.cf
[setup a]

Configure postfix guide here

Extend the history command’s history

I love the history command and here is how you can expand its history and ignore duplicates (add these lines to your ~/.bashrc).

HISTSIZE=10000
HISTCONTROL=ignoredups

Don’t forget to check your server’s IP with www.shodan.io to ensure there are no back doors.

Cont…

Next: I will add an SSL cert, lock down the server and set up Node proxies.

If this guide was helpful please consider donating a few dollars to keep me caffeinated.


v1.61 added vultr guide link

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Security Tagged With: AWS, server

Application scalability on a budget (my journey)

August 12, 2016 by Simon Fearby

If you have read my other guides on https://www.fearby.com you may be able to tell I like the self-managed Ubuntu servers you can buy from Digital Ocean for as low as $5 a month (click here to get $10 in free credit and start your server in 5 minutes). Vultr has servers as low as $2.50 a month. Digital Ocean is a great place to start up your own server in the cloud, install some software and deploy some web apps or the backend (API/databases/content) for mobile apps or services.  If you need more memory, processor cores or hard drive storage, simply shut down your Digital Ocean server, click a few options to increase your server resources and you are good to go (this is called “scaling up”). Don’t forget to cache content to limit usage.

This scalability guide is a work in progress (watch this space). My aim is to get 2000 concurrent users a second serving geo queries (like Pokémon Go) for under $80 a month (1x server and 1x MongoDB cluster).  Currently serving 600~1200/sec.

Buying a Domain

Buy a domain name from Namecheap here.


Estimating Costs

If you don’t estimate costs you are planning to fail.

"By failing to prepare you are preparing to fail." - Benjamin Frankin

Estimate the minimum users you need to remain viable and then the expected maximum users you need to handle. What will this cost?

Planning for success

Anyone who has researched application scalability has come across articles on apps that have launched and crashed under load.  Even governments can spend tens of millions on developing a scalable solution, plan for years and fail dismally at launch (check out the Australian Census disaster).  The Australian government contracted IBM to develop a solution to receive up to 15 million census submissions between the 28th of July and the 5th of September. IBM designed a system and a third party performance-tested it to 400 submissions a second, but the maximum rate received on census night before the system crashed was only 154 submissions a second. Predicting application usage can be hard; in the case of the Australian census, the bulk of people logged on to submit census data on the recommended night of the 9th of August 2016.

Sticking to a budget

This guide is not for people with deep pockets wanting to deploy a service to 15 million people but for solo app developers or small non-funded startups on a serious budget.  If you want a very reliable scalable solution or service provider you may want to skip this article and check out services by the following vendors.

  • Firebase
  • Azure (good guides by Troy Hunt: here, here and here).
  • Amazon Web Services
  • Google Cloud
  • NGINX Plus

The above vendors have what seems like an infinite array of products and services that can form part of your solution, but beware: the more products you use the more complex it will be and the higher the costs.  A popular app can be an expensive app. That’s why I like Digital Ocean, as you don’t need a degree to predict and plan your server’s average usage and buy extra resource credits if you go over predicted limits. With Digital Ocean you buy a virtual server and you get known memory, storage and data transfer limits.

Let’s go over topics that you will need to consider when designing or building a scalable app on a budget.

Application Design

Your application needs will ultimately decide the technology and servers you require.

  • A simple business app that shares events, products and contacts would require a basic server and MySQL database.
  • A turn-based multiplayer app for a few hundred people would require more server resources and endpoints (NGINX, NodeJS and an optimized MySQL database would be OK).
  • A larger augmented reality app for thousands of people would require a mix of databases and servers to separate the workload (an NGINX web server and NodeJS-powered API talking to a MySQL database to handle logins, and a single-server NoSQL database for the bulk of the shared data).
  • An augmented reality app with tens of thousands of users (an NGINX web server and NodeJS-powered API talking to a MySQL database to handle logins, and a NoSQL cluster for the bulk of the shared data).
  • A business-critical multi-user application with real-time chat – are you sure you are on a budget? This will require a full solution from Azure, Firebase or Amazon Web Services.

A native app, hybrid app or full web app can drastically change how your application works ( learn the difference here ).

Location, location, location.

You want your server and resources to be as close to your customers as possible, this is one rule that cannot be broken. If you need to spend more money to buy a server in a location closer to your customers do it.

My Setup

I have a Digital Ocean server with 2 cores and 2GB of ram in Singapore that I use to test and develop apps. That one server has MySQL, NGINX, NodeJS, PHP and many scripts running on it in the background.  I also have a MongoDB cluster (3 servers) running on AWS in Sydney via MongoDB.com.  I looked into CouchDB via Cloudant but needed the GeoJSON features with fair dedicated pricing. I am considering moving my Ubuntu server off Digital Ocean (in Singapore) and onto an AWS server (in Sydney). I am using promise-based NodeJS calls where possible so that calls to the operating system, database or web do not block.  Update: I moved to Vultr (article here).

Here is a benchmark for HTTP and HTTPS requests from rural NSW travelling via Sydney, Melbourne, Adelaide and Perth to a Node server on NGINX in Singapore, which does a call back to Sydney, Australia to get a GeoQuery from a large database and returns it to the customer via Singapore.

SSL

SSL will add processing overhead and latency.

Here is a breakdown of the hops from my desktop in Regional NSW making a network call to my Digital Ocean Server in Singapore (with private parts redacted or masked).

traceroute to destination-server-redacted.com (###.###.###.##), 64 hops max, 52 byte packets
 1  192-168-1-1 (192.168.1.1)  11.034 ms  6.180 ms  2.169 ms
 2  xx.xx.xx.xxx.isp.com.au (xx.xx.xx.xxx)  32.396 ms  37.118 ms  33.749 ms
 3  xxx-xxx-xxx-xxx (xxx.xxx.xxx.xxx)  40.676 ms  63.648 ms  28.446 ms
 4  syd-gls-har-wgw1-be-100 (203.221.3.7)  38.736 ms  38.549 ms  29.584 ms
 5  203-219-107-198.static.tpgi.com.au (203.219.107.198)  27.980 ms  38.568 ms  43.879 ms
 6  tengige0-3-0-19.chw-edge901.sydney.telstra.net (139.130.209.229)  30.304 ms  35.090 ms  43.836 ms
 7  bundle-ether13.chw-core10.sydney.telstra.net (203.50.11.98)  29.477 ms  28.705 ms  40.764 ms
 8  bundle-ether8.exi-core10.melbourne.telstra.net (203.50.11.125)  41.885 ms  50.211 ms  45.917 ms
 9  bundle-ether5.way-core4.adelaide.telstra.net (203.50.11.92)  66.795 ms  59.570 ms  59.084 ms
10  bundle-ether5.pie-core1.perth.telstra.net (203.50.11.17)  90.671 ms  91.315 ms  89.123 ms
11  203.50.9.2 (203.50.9.2) 80.295 ms  82.578 ms  85.224 ms
12  i-0-0-1-0.skdi-core01.bx.telstraglobal.net (Singapore) (202.84.143.2)  132.445 ms  129.205 ms  147.320 ms
13  i-0-1-0-0.istt04.bi.telstraglobal.net (202.84.243.2)  156.488 ms
    202.84.244.42 (202.84.244.42)  161.982 ms
    i-0-0-0-4.istt04.bi.telstraglobal.net (202.84.243.110)  160.952 ms
14  unknown.telstraglobal.net (202.127.73.138)  155.392 ms  152.938 ms  197.915 ms
15  * * *
16  destination-server-redacted.com (xx.xx.xx.xxx)  177.883 ms  158.938 ms  153.433 ms

160ms to send a request to the server.  This is on a good day when the Netflix Effect is not killing links across Australia.

Here is the route for a call from the server above to the MongoDB Cluster on an Amazon Web Services in Sydney from the Digital Ocean Server in Singapore.

traceroute to redactedname-shard-00-00-nvjmn.mongodb.net (##.##.##.##), 30 hops max, 60 byte packets
 1  ###.###.###.### (###.###.###.###)  0.475 ms ###.###.###.### (###.###.###.###)  0.494 ms ###.###.###.### (###.###.###.###)  0.405 ms
 2  138.197.250.212 (138.197.250.212)  0.367 ms 138.197.250.216 (138.197.250.216)  0.392 ms  0.377 ms
 3  unknown.telstraglobal.net (202.127.73.137)  1.460 ms 138.197.250.201 (138.197.250.201)  0.283 ms unknown.telstraglobal.net (202.127.73.137)  1.456 ms
 4  i-0-2-0-10.istt-core02.bi.telstraglobal.net (202.84.225.222)  1.338 ms i-0-4-0-0.istt-core02.bi.telstraglobal.net (202.84.225.233)  3.817 ms unknown.telstraglobal.net (202.127.73.137)  1.443 ms
 5  i-0-2-0-9.istt-core02.bi.telstraglobal.net (202.84.225.218)  1.270 ms i-0-1-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.157)  50.869 ms i-0-0-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.153)  49.789 ms
 6  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  107.395 ms  108.350 ms  105.924 ms
 7  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  105.911 ms 21459.tauc01.cu.telstraglobal.net (134.159.124.85)  108.258 ms  107.337 ms
 8  21459.tauc01.cu.telstraglobal.net (134.159.124.85)  107.330 ms unknown.telstraglobal.net (134.159.124.86)  101.459 ms  102.337 ms
 9  * unknown.telstraglobal.net (134.159.124.86)  102.324 ms  102.314 ms
10  * * *
11  54.240.192.107 (54.240.192.107)  103.016 ms  103.892 ms  105.157 ms
12  * * 54.240.192.107 (54.240.192.107)  103.843 ms
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

It appears Telstra Global or AWS block traceroute closer to the destination, so I will ping to see how long the trip takes.

64 bytes from ec2-##-##-##-##.ap-southeast-2.compute.amazonaws.com (##.##.##.##): icmp_seq=1 ttl=50 time=103 ms

It is obvious the longest part of the response to the client is not the GeoQuery on the MongoDB cluster or the processing in NodeJS, but the travel time for the packet and securing the packet.

My server locations are not optimal; I cannot move the AWS-hosted MongoDB cluster to Singapore because MongoDB.com doesn’t have servers in Singapore, and Digital Ocean don’t have servers in Sydney.  I should move my services on Digital Ocean to Sydney, but for now let’s see how far this Digital Ocean server in Singapore and MongoDB cluster in Sydney can go. I wish I had known about Vultr, as they are like Digital Ocean but have a location in Sydney.

Security

Secure (SSL) communication is now mandatory for Apple and Android apps talking over the internet, so we can’t eliminate that to speed up the connection, but we can move the server. I am using more modern SSL ciphers in my SSL certificate, which may also slow down the process. Here is a speed test of my server’s ciphers; if you use stronger security, expect a small CPU hit.

cipherspeed

fyi: I have a few guides on adding a commercial SSL certificate to a Digital Ocean VM, updating OpenSSL on a Digital Ocean VM, configuring NGINX and SSL, and limiting ssh connection rates to prevent brute force attacks.

Server Limitations and Benchmarking

If you are running your website on a shared server (e.g. a CPanel host) you may encounter resource limit warnings, as web hosts and some providers want to charge you more for moderate to heavy use.

Resource Limit Is Reached 508
The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.

I have never received a resource limit reached warning with Digital Ocean.

Most hosts (AWS/Digital Ocean/Azure etc) have limitations on your server, and when you exceed a magical limit they restrict your server or start charging excess fees (they are not running a charity).  AWS and Azure have different terminology for CPU credits, and you really need to predict your application’s CPU usage to factor in the scalability and monthly costs. Servers and databases generally have limited IOPS (input/output operations per second) and lower-tier plans offer lower IOPS.  MongoDB Atlas lower tiers have < 120 IOPS, middle tiers have 240~2400 IOPS and higher tiers have 3,000 to 20,000 IOPS.

Know your bottlenecks

The siege HTTP stress testing tool is good, the below command will throw 400 local HTTP connections to your website.

#!/bin/bash
siege -t1m -c400 'http://your.server.com/page'

The results seem a bit low: 47.3 trans/sec.  No failed transactions though 🙂

** SIEGE 3.0.5
** Preparing 400 concurrent users for battle.
The server is now under siege...
Lifting the server siege.. done.

Transactions: 2803 hits
Availability: 100.00 %
Elapsed time: 59.26 secs
Data transferred: 79.71 MB
Response time: 7.87 secs
Transaction rate: 47.30 trans/sec
Throughput: 1.35 MB/sec
Concurrency: 372.02
Successful transactions: 2803
Failed transactions: 0
Longest transaction: 8.56
Shortest transaction: 2.37

Sites like http://loader.io/ allow you to hit your web server or web page with many hits a second from outside of your server.  Below I threw 50 concurrent users at a node API endpoint that was hitting a geo query on my MongoDB cluster.

nodebench50c

The server can easily handle 50 concurrent users a second. Latency is an issue though.

nodebench50b

I can see the two secondary MongoDB servers being queried 🙂

nodebench50a

Node has decided to only use one CPU under this light load.

I tried 100 concurrent users over 30 seconds. CPU activity was about 40% of one core.

nodebench50d

I tried again with a 100-200 concurrent user limit (passed). CPU activity was about 50% using two cores.

nodebench50e

I tried again with a 200-400 concurrent user limit over 1 minute (passed). CPU activity was about 80% using two cores.

nodebench50f

nodebench50g

It is nice to know my promise-based NodeJS code can handle 400 concurrent users requesting a large dataset from GeoJSON without timeouts. The result is about the same as siege (47.6 trans/sec). The issue now is the delay in the data getting back to the user.

I checked the MongoDB cluster and I was only reaching 0.17 IOPS (maximum 100) and 16% CPU usage so the database cluster is not the bottleneck here.

nodebench50h

Out of curiosity, I ran a 400 connection benchmark to the node server over HTTP instead of HTTPS and the results were near identical (400 concurrent connections with 8000ms delay).

I really need to move my servers closer together to avoid the delays in responding. 47.6 geo queries served a second (4,112,640 a day) with a large payload is OK, but it is not good enough for my application yet.

Limiting Access

I may limit access to my API based on geo lookups (http://ipinfo.io is a good site that allows you to programmatically limit access to your app services) and auth tokens, but this will slow down uncached requests.
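For example, a quick lookup of a caller’s country before deciding to serve or block them (8.8.8.8 is just an example address; jq was installed in my AWS guide above):

curl -s https://ipinfo.io/8.8.8.8/json | jq .country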

Scale Up

I can always add more cores or memory to my server in minutes, but that requires a shutdown. 400 concurrent users do max out my CPU and push memory above 80%, so adding more cores and memory would be beneficial.

Digital Ocean does allow me to permanently or temporarily raise the resources of the virtual machine. To obtain 2 more cores (4 total) and 4x the memory (8GB) I would need to jump to the $80/month plan and adjust the NGINX and Node configuration to use the extra cores/ram.

nodebench50i

If my app is profitable I can certainly reinvest.

Scale Out

With MongoDB clusters, I can easily shard and gain extra throughput if I need it, but with only 0.17% of my existing cluster being utilised I should focus on moving servers closer together.

NGINX does have commercial-level products that handle scalability, but they cost thousands. I could scale out manually by setting up Node proxies that point to multiple servers receiving the parent calls. This may be more beneficial as Digital Ocean servers start at $5 a month, but it would add a whole lot of complexity.

Cache Solutions

  • Nginx Caching (see the sketch after this list)
  • OpCache if you are using PHP.
  • Node-cache – In memory caching.
  • Redis – In memory caching.
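As a sketch, NGINX caching builds on a proxy_cache_path line like the one in my NGINX config above (the one-minute validity and backend port are assumptions; tune them to how fresh your data needs to be):

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

location /api/ {
        proxy_cache one;                   # use the keys_zone defined above
        proxy_cache_valid 200 1m;          # cache successful responses for one minute
        proxy_pass http://127.0.0.1:3000;  # placeholder NodeJS backend
}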

Monitoring

Monitoring your server and resources is essential for detecting memory leaks and spikes in activity. htop is a great monitoring tool from the command line in Linux.

http://pm2.keymetrics.io/ is a good node package monitoring app but it does go a bit crazy with processes on your box.

CPU Busy

Communication

It is a good idea to inform users of server status and issues with delayed queries, and when things go down, inform people early. Update: Article here on self-service status pages.

censisfail

The Future

UPDATE: 17th August 2016

I set up an Amazon Web Services EC2 server (read my AWS setup guide here) with only 1 CPU and 1GB ram and have easily achieved 700 concurrent connections.  That’s 41,869 geo queries served a minute.

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

AWS MongoDB Test

The MongoDB Cluster CPU was 25% usage with  200 query opcounters on each secondary server.

I think I will optimize the OS ‘swappiness’ and performance settings on AWS and aim for 2000 queries.
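Lowering swappiness tells the kernel to avoid swapping processes out until memory is nearly exhausted; 10 is a common server value (the Ubuntu default is 60):

sudo sysctl vm.swappiness=10
# make it permanent across reboots
echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf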

This is how many hits I can get with the CPU remaining under 95% (794 geo serves a second). AMAZING.

AWS MongoDB Test

Another recent benchmark:

AWS Benchmark

UPDATE: 3rd Jan 2017

I decided to ditch the cluster of three AWS servers running MongoDB and instead set up a single MongoDB instance on an Amazon t2.medium (2 CPU/4GB ram) server for about $50 a month. I can always upgrade to the AWS MongoDB cluster later if I need it.

Ok, I just threw 2000 concurrent users at the new AWS single-server MongoDB and the server was able to handle the delivery with no dropped connections, but the average response time was 4,027 ms. This is not ideal, but this is 2000 users a second, and that is after the API handles the IP (banned list), user account validity, the last-5-minute query limit check (from MySQL), payload validation on every field and then the MongoDB geo query.

scalability_budget_2017_001

The two cores on the server were hitting about 95% usage. The benchmark here uses the same dataset as above, but the API also performs a whole series of payload validation, user limiting and logging checks.

Benchmarking with 1000 sustained users a second, the average response time is a much lower 1,022 ms. Honestly, if I had 1000-2000 user queries a second I would upgrade the server or add in a series of lower-spec AWS t2.micro servers and create my own cluster.

Security

Cheap may not be good (hosted or DIY); do check your website often with https://www.shodan.io to see if it has open software or is known to hackers.
If this guide has helped please consider donating a few dollars.


v1.7 added self-service status page info and Vultr info

Short: https://fearby.com/go2/scalability/

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Scalability, Scalable, Security, ssl, VM Tagged With: api, digital ocean, mongodb, scalability, server
