



Setting up a Digital Ocean Droplet as a sub domain on an AWS Domain

July 15, 2017 by Simon

This guide shows how to set up a Digital Ocean Droplet (server) as a subdomain of an existing AWS domain. I am setting up a Digital Ocean server as a subdomain (both the domain and the Droplet already exist) and using the subdomain as a self-hosted status page. I have set up both domains with SSL certificates, strong Content Security Policies and Public Key Pinning.

Read this newer post on setting up Subdomains.

DO: Obtain the IP addresses for your Digital Ocean Droplet (that will be the subdomain). If you don’t already have a Digital Ocean Droplet click here (and get 2 months free).

Login to your AWS Console for the parent domain. I have a guide on setting up an AWS domain here and Digital Ocean Domain here.

This AWS guide was a handy start: Creating a Subdomain That Uses Amazon Route 53 as the DNS Service without Migrating the Parent Domain.

From the AWS Route 53 screen, I clicked Get started now.

From here you can Create a Hosted Zone for the subdomain (a subdomain gets its own hosted zone on AWS Route 53).

I created a Route 53 A record and pointed it at a known Digital Ocean Droplet IP address.

I created an A record on Digital Ocean for the Droplet (e.g. status.______.com).

I also created an IPv6 (AAAA) record on Digital Ocean for the Droplet.

I could not ping the server, so out of desperation I added the Digital Ocean name servers to the Route 53 record set.

Final Route information on AWS.

Hmm, nothing works as of yet.

https://www.whatsmydns.net is not showing movement yet.
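To narrow down whether the delegation itself is wired up, you can compare the NS records the world sees with the ones Route 53 assigned to the subdomain's hosted zone. A minimal sketch, assuming dig is installed; the name servers and domain below are placeholders, so substitute the values shown in your own Route 53 hosted zone:

```shell
# Placeholder name servers - replace with the NS values from your
# Route 53 hosted zone for the subdomain.
expected_ns="ns-123.awsdns-45.com ns-678.awsdns-90.net"

# Compare a set of NS records (one per line) against the expected list.
check_delegation() {
  got="$1"
  for ns in $expected_ns; do
    echo "$got" | grep -qF "$ns" || { echo "missing $ns"; return 1; }
  done
  echo "delegation OK"
}

# On a real system you would feed it live DNS data:
#   check_delegation "$(dig +short NS status.example.com)"
```

If the check reports missing name servers, the parent zone is not delegating to the Route 53 hosted zone yet, which would explain the lack of movement on whatsmydns.net.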

Time to contact AWS for advice.

I tried to post for help on the AWS forums, but apparently a user who has been paying AWS for six months does not have the right to start a new forum thread.

I posted a few questions on Twitter and received helpful replies. Thanks, guys, I'll try these out tonight and update this post.

I created a record set for the parent domain on AWS and an A record for the Digital Ocean subdomain, with no luck.

This post will be updated soon.

Read my guide on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

v1.9 added info on Let's Encrypt (10:38pm 29th July 2017 AEST)


Filed Under: AWS, Digital Ocean, DNS, Route53 Tagged With: AWS, digital ocean, DNS

How to upgrade an AWS free tier EC2 t2.micro instance to an on demand t2.medium server

July 9, 2017 by Simon

Amazon Web Services has a great free tier where you can develop at very little cost. The free tier Linux server is a t2.micro (1 CPU, low to moderate IO, 1GB memory, 750 hours of usage a month). The free tier limits are listed here. Before you upgrade, ask whether you can optimize or cache content to limit usage.

When you are ready to upgrade resources you can use this cost calculator to set your region (I am in the Sydney region) and estimate your new costs.


You can also check out the ec2instances.info website for regional prices.

Current Server Usage

I have an NGINX server with a NodeJS back end powering an API that talks to a local MySQL database and a remote MongoDB cluster (on AWS via http://www.mongodb.com/cloud/). The MongoDB cluster was going to cost about $120 a month (too high for testing an app before launch). The free tier AWS instance is running below the 20% usage limit so it costs nothing (on the AWS free tier).

You can monitor your instance usage and credit in the Amazon Management Console; keep an eye on CPU Usage, CPU Credit Usage and CPU Credit Balance. If CPU usage grows and the balance drops, you may need to upgrade your server to prevent usage charges.
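As a sketch of that rule of thumb, the check below flags a low credit balance. The CloudWatch command in the comment is how you would fetch the number from the CLI; the instance ID, times and threshold are placeholders of my choosing, not values from this post:

```shell
# The balance itself comes from CloudWatch, e.g. (placeholder ID and times):
#   aws cloudwatch get-metric-statistics --namespace AWS/EC2 \
#     --metric-name CPUCreditBalance \
#     --dimensions Name=InstanceId,Value=i-123456789 \
#     --statistics Average --period 300 \
#     --start-time 2017-07-09T00:00:00Z --end-time 2017-07-09T01:00:00Z

# Warn when the credit balance drops below a chosen threshold.
check_credits() {
  balance="$1"; threshold="${2:-20}"
  intbal=$(awk -v b="$balance" 'BEGIN { printf "%d", b }')
  if [ "$intbal" -lt "$threshold" ]; then
    echo "LOW: balance $balance is below $threshold - consider upgrading"
  else
    echo "OK: balance $balance"
  fi
}

check_credits 144   # a full t2.micro credit bank
```

A cron job wrapping this with a mail command would give you a cheap early warning before usage charges kick in.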


I prefer the optional htop command in Ubuntu to keep track of memory and processes usage.

Basic information from an idle AWS t2.micro:


Older htop screenshot of a dual-CPU VM being stressed (CPU busy).

Future Requirements

I plan on moving a non-clustered MongoDB database onto an upgraded AWS free tier instance to develop and test there; if and when a scalable cluster is needed I can move the database back to http://www.mongodb.com/cloud/.

First I will need to upgrade my free tier EC2 instance to be able to install MongoDB 3.2 and power more local processing. Upgrading will also give me more processing power to run local scripts (instead of hosting them in Singapore on Digital Ocean).

How to upgrade a t2.micro to t2.medium

The t2.medium server has 2 CPUs, 4 GB of memory and low to moderate IO.

  1. Back up your server in AWS (and take a manual backup).
  2. Shut down the server with the command sudo shutdown now
  3. Log in to your AWS Management Console and click Instances (note: your instance may still say it is running even though it has shut down; an SSH connection will be refused).
  4. Click Actions, Image, then Create Image.
  5. Name the image (select any volumes you have created) and click Create Image.
  6. You can follow the progress of the snapshot creation under the Snapshots menu.
  7. When the volumes have been snapshotted you can stop the instance.
  8. Now you can change the instance type by right-clicking the stopped instance and selecting Instance Settings then Change Instance Type.
  9. Choose the new desired instance type and click Apply (double-check your cost estimates).
  10. You can now start your instance.
  11. After a few minutes, you can log in to your server and confirm the cores/memory have increased.
  12. Note: your server's CPU credits will be reset and your server will start with a lower CPU credit balance. Depending on your instance type you will receive credits every hour up to a maximum cap.
  13. Note: don't forget to configure your software to use more RAM or cores if required.
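The console steps above can also be driven from the AWS CLI. A sketch, assuming the CLI is configured; the instance ID is a placeholder, and DRY_RUN defaults to printing the commands rather than running them (set DRY_RUN=0 to really execute). Note this skips the image/snapshot backup from steps 4 to 6, so take that first:

```shell
# Resize a t2.micro to t2.medium from the AWS CLI.
# DRY_RUN=1 (the default here) prints the commands instead of executing them.
resize_instance() {
  instance_id="$1"; new_type="$2"
  run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }
  run aws ec2 stop-instances --instance-ids "$instance_id"
  run aws ec2 wait instance-stopped --instance-ids "$instance_id"
  run aws ec2 modify-instance-attribute --instance-id "$instance_id" \
    --instance-type "{\"Value\": \"$new_type\"}"
  run aws ec2 start-instances --instance-ids "$instance_id"
}

resize_instance i-123456789 t2.medium
```

The same dry-run pattern works for scripting any console workflow you repeat often.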

Misc

Confirm processors via the command line:

grep ^processor /proc/cpuinfo | wc -l 
>2

My NGINX configuration changes.

worker_processes 2;
worker_cpu_affinity auto;
...
events {
	worker_connections 4096;
	...
}
...

Find the open file limit of your system (for NGINX worker_connections maximum)

ulimit -n
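
A quick sanity check ties the two together; this sketch assumes the worker_connections value of 4096 from the events block above, and each NGINX connection consumes at least one file descriptor:

```shell
# Compare NGINX worker_connections against the shell's open-file limit.
limit=$(ulimit -n)
workers=4096   # the worker_connections value used above

case "$limit" in
  *[!0-9]*) limit=1048576 ;;  # some shells report "unlimited"
esac

if [ "$workers" -gt "$limit" ]; then
  echo "worker_connections ($workers) exceeds the open-file limit ($limit)"
else
  echo "open-file limit ($limit) can accommodate $workers connections"
fi
```

If the limit is too low you can raise it in /etc/security/limits.conf (or with worker_rlimit_nofile in nginx.conf).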

That’s all for now.


v1.0 DRAFT Initial Post

Filed Under: Advice, AWS, Cloud, Computer, Server, Ubuntu, VM Tagged With: AWS, instance, upgrade

Connecting to an AWS EC2 Ubuntu instance with Cloud 9 IDE as user ubuntu and root

September 1, 2016 by Simon Fearby

Recently I set up an Amazon EC2 Ubuntu Server instance and wanted to connect it to the awesome Cloud 9 IDE. I was sick of interacting with a server through terminal windows.

Use this link and get $19 free credit with Cloud 9: https://c9.io/c/DLtakOtNcba

Cloud 9 IDE (sample screenshot)

Previously I was using Digital Ocean (my Digital Ocean setup guide is here) and this was simple: you get a VM, you have a root account, and you do what you want. Amazon AWS, however, has extra layers of security that prevent logging in as root via SSH, and that can be a pain with Cloud 9 because your workspace tree is restricted to the ~/ (home) folder.

Below are the steps you need to connect to an AWS instance with user “ubuntu” and “root” with Cloud 9.

Connecting to an AWS instance with Cloud 9 as user “ubuntu”

1. Purchase and set-up your AWS instance (my guide here).

2. You need to be able to log in to your AWS server from a terminal prompt (from OSX). This may include opening port 22 in the AWS Security Group panel. Info on SSH logins here.

ssh -i ~/.ssh/yourawsicskeypair.pem [email protected]

3. On your AWS server (from step 2), install NodeJS.

You will know node is installed if a version number is returned by the following bash command.

node -v

tip: If node is not installed you can run the Cloud 9 prerequisites script (which includes node).

curl -L https://raw.githubusercontent.com/c9/install/master/install.sh | bash

4. Ensure you have created SSH key on Cloud 9 (guide here).

5. Copy your Cloud 9 SSH key to the clipboard.

6. On your AWS server (from step 2), edit the ~/.ssh/authorized_keys file and paste the Cloud 9 SSH key on a new line (after the AWS key pair that was added during the AWS setup), then save the file.
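Step 6 can also be done non-interactively. A sketch; the key string is a placeholder for your real Cloud 9 public key, and AUTH_KEYS defaults to the usual path:

```shell
# Append the Cloud 9 public key to authorized_keys on its own line,
# leaving the existing AWS key pair entry intact.
AUTH_KEYS="${AUTH_KEYS:-$HOME/.ssh/authorized_keys}"
C9_KEY="ssh-rsa AAAAB3...placeholder... cloud9"   # paste your real key here

mkdir -p "$(dirname "$AUTH_KEYS")"
touch "$AUTH_KEYS"
# Only append if the key is not already present.
grep -qF "$C9_KEY" "$AUTH_KEYS" || printf '%s\n' "$C9_KEY" >> "$AUTH_KEYS"
chmod 600 "$AUTH_KEYS"
```

The grep guard makes the snippet safe to re-run without duplicating the key.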

7. Log in to Cloud 9 and click create Workspace then Remote SSH Workspace.

  • Name your workspace (all lowercase and no spaces).
  • Username: ubuntu
  • Hostname: Add your AWS ec2 server hostname.
  • Initial Path: This has to be ~/


8. Click Create Workspace


9. If all goes well you will have a prompt to install the prerequisites.


If this fails check out the Cloud 9 guide here.

Troubleshooting: I had errors like “Project directory does not exist or is not writable” and “Unable to change File System Path in SSH Workspace” because I was trying to set the workspace path to “/” (this is not possible on AWS with the “ubuntu” account).

10. Now you should have a web-based IDE that lets you browse your server, create and edit files, and run terminal instances that reconnect if your net connection or browser tab drops out (you can even move to a different machine and continue your session).


Connecting to an AWS instance with Cloud 9 as user “root”

Connecting to your server as the “ubuntu” user is fine if you just need to work in the “ubuntu” home folder. As soon as you want to change settings outside your home folder you are stuck. Granting “ubuntu” higher privileges server-wide is a bad idea, so here is how you can enable “root” login via SSH access.

WARNING: logging in as root is bad practice. Only allow root login for short periods, and remove root login abilities as soon as you no longer need them, and certainly in production.

Having root access while developing or building a new server saves me bucketloads of time, so let's allow it.

1. Follow step 1 to 5 in the steps above (setup AWS, ssh access via terminal, install node, create cloud 9 ssh key, copy the cloud 9 ssh key to the clipboard).

2. SSH to your AWS server and edit the following file:

sudo nano /etc/ssh/sshd_config
# -- Change the following line:
# PermitRootLogin without-password
# -- to:
PermitRootLogin yes

Save the file, then restart the SSH service so the change takes effect:

sudo service ssh restart

3. Back up your root authorized_keys file:

sudo cp /root/.ssh/authorized_keys /root/.ssh/authorized_keys.bak

4. Edit the root authorized_keys file and paste in your Cloud 9 SSH Key.


5. Now you can create a Cloud 9 connection to your server as root:

  • Name your workspace (all lowercase and no spaces).
  • Username: root
  • Hostname: Add your AWS ec2 server hostname.
  • Initial Path: This has to be /


tip: If you have not added your SSH key correctly you will receive this error when connecting.


6. You should now be able to connect to AWS EC2 instances with Cloud 9 as root and configure/do anything you want without switching to shell windows.


Security

As a precaution, check your website often on https://www.shodan.io to see whether it exposes open services or is known to hackers.

Enjoy


V1.6 security

Filed Under: Cloud, Domain, Hosting, Linux, NodeJS, Security, ssl Tagged With: AWS, c9, cloid, ssh, terminal

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

August 16, 2016 by Simon Fearby

This blog lists the actions I went through to set up an AWS EC2 Ubuntu server and add the usual applications. It follows on from my Digital Ocean server setup guide and Vultr setup guide.

Variable Pricing

Amazon Web Services gives developers 12 months of free access to a range of products. Amazon doesn't have flat-rate fees like Digital Ocean; instead, AWS grants you a minimum level of CPU credits for each server type. A “t2.micro” (1 CPU/1GB RAM, $0.013/hour) earns 6 CPU credits an hour. That is enough for the CPU to run at 10% all the time, and you can bank up to 144 CPU credits to spend when the application spikes. The “t2.micro” is the free tier server (costs nothing for 12 months), but when the trial runs out it is about $9.50 a month. The next server up is a “t2.small” (1 CPU, 2GB RAM): you get 12 CPU credits an hour and can bank 288, enough for 20% CPU usage all the time.

The “t2.medium” server (2 CPUs, 4GB RAM) allows 40% CPU usage: 24 CPU credits an hour with 576 bankable. That server costs $0.052 an hour, about $38 a month. Don't forget to cache content.
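The hourly figures convert to monthly estimates straightforwardly; a quick back-of-envelope check using the Sydney on-demand rates quoted in this post (always confirm against the AWS calculator, as prices change):

```shell
# Convert an hourly on-demand rate to an approximate 30-day monthly cost.
monthly_cost() {
  awk -v h="$1" 'BEGIN { printf "%.2f", h * 24 * 30 }'
}

echo "t2.micro  ~ \$$(monthly_cost 0.013) per month"
echo "t2.medium ~ \$$(monthly_cost 0.052) per month"
```

These land close to the $9.50 and $38 figures above; the small difference comes from rounding and month length.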

I used about 40 CPU credits generating a 4096-bit secure-prime Diffie-Hellman key for an SSL certificate on my t2.micro server. More information on AWS instance pricing here and here.

Creating an AWS Account (Free Trial)

The signup process is simple.

  1. Create a free AWS account.
  2. Enter your CC details (for any non-free services) and submit your account.

It will take 2 days or more for Amazon to manually approve your account. When you have been approved, navigate to https://console.aws.amazon.com, log in and set your region in the top right next to your name (in my case I will go with Australia, ‘ap-southeast-2’).

My console home is now: https://ap-southeast-2.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-2#LaunchInstanceWizard

Create a Server

You can follow the prompts to set up a free tier EC2 Ubuntu server here.

1. Choose Ubuntu EC2

2. Choose Instance Type: t2-micro (1x CPU, 1GB Ram)

3. Configure Instance: 1

4. Add Storage: /dev/sda1, 8GB+, 10-3000 IOPS

5. Tag Instance: Your own role specific tags

6. Configure Security Group: Default firewall rules.

7. Review

Tip: Create a 25GB volume (instead of 8GB), or you will need to add an extra volume and mount it with the following commands.

sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /mydata
sudo nano /etc/fstab
(append the following line)
/dev/xvdf    /mydata    ext4    defaults,nofail    0    2
sudo mount -a
cd /mydata
ls -al

Part of the EC2 server setup was to save a .PEM file to your SSH folder on your local PC (~/.ssh/mysererkeypair.pem).

You will need to secure the file:

chmod 400 ~/.ssh/mysererkeypair.pem

Before we connect to the server we need to configure the firewall here in the Amazon Console.

Type         Protocol  Port Range  Source     Comment
HTTP         TCP       80          0.0.0.0/0  Opens a web server port for later
All ICMP     ALL       N/A         0.0.0.0/0  Allows you to ping
All traffic  ALL       All         0.0.0.0/0  Not advisable long term but OK for testing today
SSH          TCP       22          0.0.0.0/0  Not advisable; try to limit this to known IPs only
HTTPS        TCP       443         0.0.0.0/0  Opens a secure web server port for later

DNS

You will need to assign a static IP to your server (the default public IP is not static). Here is a good read on connecting a domain name to the IP and assigning an Elastic IP to your server. Once you have assigned an Elastic IP you can point your domain at your instance.


Installing the Amazon Command Line Interface utils on your local PC

This is required to see your server's console screen and then connect via SSH.

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
/usr/local/bin/aws --version

You now need to configure the AWS CLI. First generate Access Keys here; while you are there, set up Multi-Factor Authentication with Google Authenticator.

aws configure
> AWS Access Key ID: {Your AWS API Key}
> AWS Secret Access Key: {Your AWS Secret Access Key}
> Default region name: ap-southeast-2 
> Default output format: text

Once you have configured your CLI you can connect and review your Ubuntu console output (the instance ID can be found in your EC2 server list).

aws ec2 get-console-output --instance-id i-123456789

Now you can hopefully connect to your server and accept any certificates to finish the connection.

ssh -i ~/.ssh/myserverpair.pem [email protected]


Success, I can now access my AWS Instance.

Setting the Time and Daylight Savings.

Check your time.

sudo hwclock --show
> Fri 21 Oct 2016 11:58:44 PM AEDT  -0.814403 seconds

Daylight saving has not kicked in.

Install the NTP service

sudo apt install ntp

Set your Timezone

sudo dpkg-reconfigure tzdata

Go to http://www.pool.ntp.org/zone/au to find your NTP servers (or go here if you are outside Australia):

server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org
server 3.au.pool.ntp.org

Add the NTP servers to "/etc/ntp.conf" and restart the NTP service.

sudo service ntp restart

Now check your time again and you should have the right time.

sudo hwclock --show
> Fri 21 Oct 2016 11:07:38 PM AEDT  -0.966273 seconds

🙂

Installing NGINX

I am going to install the latest v1.11.1 mainline development (non-legacy) version. Beware of bugs and breaking changes.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

NGINX is now installed. Try and get to your domain via port 80 (if it fails to load, check your firewall).

Installing NodeJS

Here is how you can install the latest NodeJS (development builds move fast, so beware of bugs and frequent changes). Read the API docs here.

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

NodeJS is installed.

Installing MySQL

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
sudo mysql_install_db
sudo mysql_secure_installation
service mysql status

Install PHP 7.x and PHP7.0-FPM

I am going to install PHP 7 due to the speed improvements over 5.x. Below are the commands I entered to install PHP (thanks to this guide).

sudo add-apt-repository ppa:ondrej/php
sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm
sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

NGINX Configuration

NGINX can be a bit tricky to set up for newbies, and your configuration will certainly differ, but here is mine (so far):

File: /etc/nginx/nginx.conf

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /var/run/nginx.pid;
events {
        worker_connections 1024;
        multi_accept on;
}
http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;
        client_body_buffer_size      128k;
        client_max_body_size         10m;
        client_header_buffer_size    1k;
        large_client_header_buffers  4 4k;
        output_buffers               1 32k;
        postpone_output              1460;

        proxy_headers_hash_max_size 2048;
        proxy_headers_hash_bucket_size 512;

        client_header_timeout  1m;
        client_body_timeout    1m;
        send_timeout           1m;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        # ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        gzip on;
        gzip_disable "msie6";
        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_min_length 256;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6

        access_log /var/log/nginx/myservername.com.log;

        root /usr/share/nginx/www;
        index index.php index.html index.htm;

        server_name www.myservername.com myservername.com localhost;

        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking

        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;


        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;

        # This is handled with the header above.
        #rewrite ^/(.*) https://myservername.com/$1 permanent;

        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }

        fastcgi_param PHP_VALUE "memory_limit = 512M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;

                # include snippets/fastcgi-php.conf;

                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;

                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }

        # deny access to .htaccess files, if Apache's document root
        #location ~ /.ht {
        #       deny all;
        #}
}

Test and Reload NGINX Config

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Don’t forget to test PHP with a script that calls ‘phpinfo()’.
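One way to do that, sketched with a safe demo path; on the server you would set WEB_ROOT to the NGINX root from the config above (/usr/share/nginx/www) and remove the file as soon as you have looked at it, since phpinfo() leaks server details:

```shell
# Drop a throwaway phpinfo() page into the web root.
WEB_ROOT="${WEB_ROOT:-/tmp/demo-webroot}"   # use /usr/share/nginx/www on the server
mkdir -p "$WEB_ROOT"
echo '<?php phpinfo(); ?>' > "$WEB_ROOT/info.php"

# Browse to http://yourserver/info.php to confirm PHP-FPM answers, then:
#   rm "$WEB_ROOT/info.php"
```

A blank page here usually means the fastcgi_pass socket path in the NGINX config does not match the PHP-FPM listen socket.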

Install PhpMyAdmin

Here is how you can install the latest stable branch of phpMyAdmin under NGINX (no Apache).

cd /usr/share/nginx/www
mkdir -p my/secure/secure/folder
cd my/secure/secure/folder
sudo apt-get install git
git clone --depth=1 --branch=STABLE https://github.com/phpmyadmin/phpmyadmin.git

If you need to import databases into MySQL you will need to enable file uploads in PHP and set file upload limits. Review this guide to enable uploads in phpMyAdmin. If your database is large you may also need to change the "client_max_body_size" setting in nginx.conf (see guide here). Don't forget to disable uploads and reduce the size limits in NGINX and PHP when you have finished uploading databases.

Note: phpMyAdmin can be a pain to install, so don't be afraid of using an alternative management GUI. Here is a good list of MySQL management interfaces. Also check your OS app store for native MySQL database management clients.

Install an FTP Server

Follow this guide here, then:

sudo nano /etc/vsftpd.conf
write_enable=YES
sudo service vsftpd restart

Install: oracle-java8 (using this guide)

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
cd /usr/lib/jvm/java-8-oracle
ls -al
sudo nano /etc/environment
> append: JAVA_HOME="/usr/lib/jvm/java-8-oracle"
echo $JAVA_HOME
sudo update-alternatives --config java

Install: ncdu – Interactive tree based folder usage utility

sudo apt-get install ncdu
sudo ncdu /

Install: pydf – better quick disk check tool

sudo apt-get install pydf
sudo pydf

Install: rcconf – display startup processes (handy when confirming pm2 is set to run at boot).

sudo apt-get install rcconf
sudo rcconf

I started “php7.0-fpm” as it was not starting on boot.

I had an issue where PM2 was not starting at server reboot and reporting to https://app.keymetrics.io. I ended up repairing /etc/init.d/pm2-init.sh as mentioned here.

sudo nano /etc/init.d/pm2-init.sh
# edit start function to look like this
...
start() {
    echo "Starting $NAME"
    export PM2_HOME="/home/ubuntu/.pm2" # <== add this line
    super $PM2 resurrect
}
...

Install IpTraf – Network Packet Monitor

sudo apt-get install iptraf
sudo iptraf

Install jq – JSON Command Line Utility

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Install Ruby – the commands below were a bit out of order because some did not work for unknown reasons; here is a cleaned-up sequence.

sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
git clone git://github.com/sstephenson/rbenv.git ~/.rbenv
git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
rbenv install 2.2.3
rbenv global 2.2.3
rbenv rehash
ruby -v
sudo apt-get install ruby-dev
sudo gem install bundler

Install Twitter CLI – https://github.com/sferik/t

sudo gem install t
t authorize
# follow the prompts

Mutt (send mail by command line utility)

Help site: https://wiki.ubuntu.com/Mutt

sudo nano /etc/hosts
127.0.0.1 localhost localhost.localdomain xxx.xxx.xxx.xxx yourdomain.com yourdomain

Configuration:

sudo apt-get install mutt
[configuration none]
sudo nano /etc/postfix/main.cf
[setup a]

Configure postfix guide here

Extend the History commands history

I love the history command; here is how you can expand its history and ignore duplicates (add these lines to your ~/.bashrc).

HISTSIZE=10000
HISTCONTROL=ignoredups

Don’t forget to check your server's IP with www.shodan.io to ensure there are no back doors.

Cont…

Next: I will add an SSL cert, lock down the server and setup Node Proxies.


v1.61 added vultr guide link

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Security Tagged With: AWS, server

Copyright © 2023 · News Pro on Genesis Framework · WordPress · Log in