DevOps

Yarn

Yarn is a package management tool developed by Facebook.

Advantages of Yarn include:

  • It can install packages from both the npm and Bower registries.

  • Security:
    Yarn uses checksums to verify the integrity of every installed package before its code is executed.

  • Offline Mode:
    If you've installed a package before, you can install it again without an internet connection. Packages installed using Yarn with yarn add packagename are cached on your disk, so during the next install the cached copy is used instead of sending an HTTP request to download the package again from the registry.

  • Parallel Installation
    Whenever npm or Yarn needs to install a package, it carries out a series of tasks. In npm, these tasks are executed per package and sequentially, meaning it will wait for a package to be fully installed before moving on to the next. Yarn executes these tasks in parallel, increasing performance.

  • Lockfiles
    The same dependencies will be installed the same exact way across every machine regardless of install order. Lockfiles lock the installed dependencies to a specific version, and ensure that every install results in the exact same file structure in node_modules across all machines.
    After every install, upgrade or removal, Yarn updates a yarn.lock file which keeps track of the exact package versions installed in the node_modules directory. This lockfile should be added to version control.
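
As a rough illustration, a yarn.lock (v1) entry looks something like the following (the package, version, URL hash and checksum are placeholders):

lodash@^4.17.0:
  version "4.17.11"
  resolved "https://registry.yarnpkg.com/lodash/-/lodash-4.17.11.tgz#<hash>"
  integrity sha512-<checksum>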

npm vs yarn commands

#Starting a new project
npm init === yarn init

#Installing all the dependencies of project
npm install === yarn or yarn install

#Adding a dependency
npm install [package] === yarn add [package] #The package is saved to your package.json immediately.
npm install [package]@[version] === yarn add [package]@[version]
npm install [package]@[tag] === yarn add [package]@[tag]

#Add a dev dependency
npm install [package] --save-dev === yarn add [package] --dev

#Upgrading a dependency
npm update [package] === yarn upgrade [package]
npm update [package]@[version] === yarn upgrade [package]@[version]
npm update [package]@[tag] === yarn upgrade [package]@[tag]

#Removing a dependency
npm uninstall [package] === yarn remove [package]

#View registry information
npm view [package] === yarn info [package]

#List installed packages
npm list === yarn list
npm list --depth=0 === yarn list --depth=0

#Install packages globally
npm install -g [package] === yarn global add [package]


Gulp

Gulp is a streaming build system, which allows it to pipe and pass around data being manipulated or used by its plugins. The Gulp API contains four top-level functions:

gulp.task: defines your tasks.
gulp.src: points to the files that are to be used. It uses .pipe for chaining output into other plugins.
gulp.dest: points to the output folder we want to write files to.
gulp.watch: tells Gulp to watch for changes to any of the defined files and re-runs the task specified.

Gulp Plumber attaches a global error listener to a task and displays meaningful error messages. With tasks that have multiple pipes, we only need to call plumber once. Below is an example of a Gulp file:

var gulp = require('gulp');
var sass = require('gulp-sass');
var autoprefixer = require('gulp-autoprefixer');
var plumber = require('gulp-plumber');
var notify = require('gulp-notify');


// Styles
gulp.task('styles', function() {
	return gulp.src('sass/styles.scss')
		.pipe(plumber({ errorHandler: function(err) {
			notify.onError({
				title: "Gulp error in " + err.plugin,
				message: err.toString()
			})(err);
		}}))
		.pipe(sass())
		.pipe(autoprefixer('last 1 version', '> 1%', 'ie 8', 'ie 7'))
		.pipe(gulp.dest('css'));
});

// Watch the sass files
gulp.task('watch', function() {
	gulp.watch('sass/*.scss', ['styles']);
});

gulp.task('default', ['styles', 'watch']);


Below is an example of one I have used for this website:

'use strict';

var gulp            = require('gulp');
var autoprefixer    = require('gulp-autoprefixer');
var csso            = require('gulp-csso');
var rename          = require('gulp-rename');
var watch           = require('gulp-watch');
var runSequence     = require('run-sequence');

// Set the browser that you want to support
const AUTOPREFIXER_BROWSERS = [
  'ie >= 10',
  'ie_mob >= 10',
  'ff >= 30',
  'chrome >= 34',
  'safari >= 7',
  'opera >= 23',
  'ios >= 7',
  'android >= 4.4',
  'bb >= 10'
];


// Copy the html files to dist
gulp.task('copy_html', function () {
   return gulp.src('./*html')
    .pipe(gulp.dest('./dist'))
});

// Copy images to dist
gulp.task('copy_imgs', function () {
   return gulp.src('images/*')
    .pipe(gulp.dest('./dist/images'))
});


// Auto prefix and minify the CSS
gulp.task('minify_css', function () {
   return gulp.src('./style/main.css')
    // Auto-prefix css styles for cross browser compatibility
    .pipe(autoprefixer({browsers: AUTOPREFIXER_BROWSERS}))
    // Minify the file
    .pipe(csso())
    .pipe(rename('main.min.css'))
    .pipe(gulp.dest('./dist/style'))
});

gulp.task('watch', function() {
    gulp.watch('style/*.css', ['minify_css']);
    gulp.watch('*html', ['copy_html']);
});

gulp.task('default',  function () {
  runSequence(
      'minify_css',
      'copy_html',
      'copy_imgs',
      'watch'
  );
});


Some of the most common tasks Gulp is used for are the following:

  • Compressing image files
  • Eliminating debugger and console statements from scripts
  • Minifying, concatenating, and cleaning up CSS and JavaScript
  • Linting code for errors
  • Compiling LESS files
  • Running unit tests
  • Sending updates to a production server
  • Updating databases

Resources

Automate Your Tasks Easily with Gulp.js
BrowserSync for Faster Development
Minifying your CSS, JS & HTML files using Gulp

Git Workflows

Git offers a lot of flexibility and there is no standardized process on how to interact with Git. When working with a team on a Git managed project, it’s important to make sure the team is on the “same page” with regards to which Git workflow should be used. Below are several publicized Git workflows that may be a good fit for your team.

Git Feature Branch Workflow

The core idea with this workflow is that all development should take place in a dedicated branch instead of the master branch.

It allows the usage of pull requests which allows developers to comment on each other’s work before it gets integrated into the main project.

How it works

Instead of working and committing directly on their local master branch, developers create a new branch every time they start work on a new item. Branches should have descriptive names to allow easy identification, e.g. issue#865.

Once work has been completed on the branch, the branch can be pushed to the central repository where the code can be reviewed by other developers without it touching the 'official' code in the master branch.

Simple Walk through example:

Create a new branch
Use a separate branch for each work item. Run the below to create and switch to (checkout to) the newly created branch:

git checkout -b new-feature

Update, add, commit
On this branch, edit, stage and commit changes as you would usually do:

git add <file>
git commit

Push feature branch to remote
The below command pushes new-feature to the central repository (origin), and the -u flag adds it as a remote tracking branch. After setting up the tracking branch, git push can be invoked without any parameters to automatically push the new-feature branch to the central repository.

git push -u origin new-feature

Feedback
To get feedback on the new feature branch, create a pull request in Bitbucket or GitHub. This allows teammates to comment on the pushed commits. Resolve their comments locally, then commit and push the suggested changes. Your updates appear in the pull request.

Merge Pull request
When your pull request is approved and conflict-free, you can add your code to the master branch. Merge from the pull request in Bitbucket or GitHub, or do it manually by running the following:

git checkout master
git pull
git pull origin new-feature
git push


Gitflow Workflow

The Gitflow Workflow defines a strict branching model designed around the project release. It assigns very specific roles to different branches and defines how and when they should interact.

There is a git-flow toolset available which integrates with Git to provide specialized Gitflow Git command line tool extensions.

Gitflow Branches

Instead of a single master branch, two branches are used to record the history of a project. The master branch stores the official release history, and the develop branch serves as an integration branch.

Each new feature of a project should have its own branch and use develop as their parent branch; and should never interact directly with master. When you’ve finished with the development work on the feature, the next step is to merge the feature_branch into develop. Once the develop branch has acquired enough features for a release, or a predetermined release date is approaching, you fork a release branch off of develop.

Once created, no new features can be added to the release branch; only bug fixes, documentation and other release-related tasks. Once it is ready to ship, the release branch is merged into master and tagged with a version number, and it is also merged back into develop, which may have progressed since the release was initiated. Once this has been done, the release branch is deleted.

Maintenance Branches

Maintenance or “hotfix” branches branch off of the master branch rather than develop. As soon as a fix is complete it should be merged into both master and develop (or the current release branch). You can think of maintenance branches as ad hoc release branches that work directly with master.

Below is the overall flow of Gitflow:

  • A develop branch is created from master
  • A release branch is created from develop
  • Feature branches are created from develop
  • When a feature is complete it is merged into the develop branch
  • When the release branch is done it is merged into develop and master
  • If an issue in master is detected a hotfix branch is created from master
  • Once the hotfix is complete it is merged to both develop and master
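
The same flow can be sketched in plain Git commands (the branch names and version number below are illustrative; the git-flow extensions wrap these steps into single commands):

# One-off setup: create the integration branch
git checkout -b develop master
git push -u origin develop

# Start a feature, then merge it back into develop when it is finished
git checkout -b feature/new-feature develop
git checkout develop
git merge feature/new-feature

# Cut a release from develop, then ship it
git checkout -b release/1.0.0 develop
git checkout master
git merge release/1.0.0
git tag 1.0.0
git checkout develop
git merge release/1.0.0
git branch -d release/1.0.0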


(Diagram: the overall Gitflow workflow)

Forking Workflow

This workflow is most often seen in public open source projects. It gives the developer their own server-side (remote) repository which means two Git repositories: a private local one and a public server side one. While you can call these remotes anything you want, a common convention is to use origin as the remote for your forked repository (this will be created automatically when you run git clone) and upstream for the official repository.

Developers push to their own server-side repositories, and only the project maintainer can push to the official repository. This allows the maintainer to accept commits from any developer without giving them write access to the official codebase.

When a developer wants to start working on a project, they fork the official repository (essentially just a git clone executed on a server-side copy of the project's repo) to create a copy of it on the server. Popular Git hosting services like GitHub and Bitbucket offer repo-forking features that automate this step. This new copy serves as their personal public repository; no other developers are allowed to push to it, but they can pull changes from it.

The developer performs a git clone to get a copy of it onto their local machine. When they're ready to publish a local commit, they push the commit to their own public repository—not the official one. Then, they file a pull request with the main repository, which lets the project maintainer know that an update is ready to be integrated. The pull request also serves as a convenient discussion thread if there are issues with the contributed code.

To integrate the feature into the official codebase, the maintainer pulls the contributor’s changes into their local repository, checks to make sure it doesn’t break the project, merges it into their local master branch, then pushes the master branch to the official repository on the server. The contribution is now part of the project, and other developers should pull from the official repository to synchronize their local repositories.

The Forking Workflow helps a maintainer of a project open up the repository to contributions from any developer without having to manually manage authorization settings for each individual contributor. This gives the maintainer more of a "pull" style workflow.

Most commonly used in open-source projects, the Forking Workflow can also be applied to private business workflows to give more authoritative control over what is merged into a release. This can be useful in teams that have Deploy Managers or strict release cycles.

Resources

Comparing Workflows

Bitbucket and Jenkins Pipeline Integration

A Pipeline in Jenkins is a way of defining Jenkins steps as code and automating the process of deploying software. The jobs to be performed in the pipeline are configured in the Jenkinsfile, a text file that contains the definition of a Jenkins Pipeline and is checked into source control. The file can be located on the Jenkins server itself or at the root of a linked Git/Bitbucket repository.
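
For illustration, a minimal declarative Jenkinsfile might look like the following (the stage names and shell commands are assumptions for a typical Node.js project):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
}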

The Bitbucket plugin Webhook to Jenkins for Bitbucket can be used to integrate Bitbucket Server and Jenkins, and allows you to configure a hook at the project or repository level. This allows Bitbucket to notify Jenkins of any changes in the code base and trigger the associated Jenkins Pipeline.

Resources

Webhook to Jenkins for Bitbucket
Continuous integration workflow

Docker

Docker is an open-source project based on Linux containers. You can think of Docker as an operating system (or runtime) for Docker containers. It allows anyone to package an application on their laptop, which in turn can run unmodified on any public cloud, private cloud, or even bare metal. The mantra is: "build once, run anywhere." The Docker Engine is installed on each server you want to run containers on and provides a simple set of commands you can use to build, start, or stop Docker containers.

Docker Images

Docker images are read-only templates that describe a Docker Container. They include specific instructions written in a Dockerfile that defines the application and its dependencies. You can think of Docker images as a snapshot of an application at a certain time. You will get images when you run the docker build command.

Docker Containers

Docker containers are built from Docker images and wrap an application's software in an invisible box with everything the application needs to run. They include the operating system, application code, runtime, system tools, system libraries, etc. Since images are read-only, Docker adds a read-write file system over the read-only file system of the image to create a container.

Docker containers are very lightweight and fast. Since containers are just sandboxed environments running on the kernel, they take up fewer resources. You can create and run a Docker container in seconds, compared to VMs which might take longer because they have to boot up a full virtual operating system every time.

You are able to connect multiple Docker Containers together. For example, you might have your Postgres database running in one container and your Redis server in another while your Node.js app is in another. With Docker, it’s become easier to link these containers together to create your application, making it easy to scale or update components independently in the future.

You can create and start a container from an image with docker run; docker start starts an existing (stopped) container again.
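
For example (image and container names are illustrative):

docker build -t my-app .              # build an image from the Dockerfile in the current directory
docker run --name my-app-1 my-app     # create and start a container from that image
docker stop my-app-1                  # stop the running container
docker start my-app-1                 # start the existing (stopped) container again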

Docker Registries

A Docker Registry is a place for you to store and distribute Docker images.

Docker's own registry is called Docker Hub, which is a sort of 'app store for Docker images'. It has tens of thousands of public images created by the community that are readily available for use. It's incredibly easy to search for images that meet your needs, ready to pull down and use with little-to-no modification.

Docker Compose

Docker Compose is a tool that allows you to build and start multiple containers at once. Instead of running the same set of commands every time you want to start your application, you can bring everything up with a single command once you have provided a configuration file.
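
As a sketch, a docker-compose.yml for the kind of multi-container setup described above might look like this (service names, images and ports are assumptions):

version: '3'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
  db:
    image: postgres
  redis:
    image: redis

With this file in place, docker-compose up builds and starts all three containers with a single command.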

How to get code into containers

Using Shared Volumes

Docker allows for mounting local directories into containers using the shared volumes feature. Just use the -v switch to specify the local directory path that you wish to mount, along with the location where it should be mounted within the running container.

This is particularly useful when developing locally, as you can use your favorite editor to work locally, commit code to Git, and pull the latest code from remote branches. Your application will run inside a container, isolating it away from any processes you have running on your development laptop.
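
For example, to mount the current directory into a container (the image name, container path and port are assumptions):

docker run -v $(pwd):/usr/src/app -p 3000:3000 my-node-app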

Using COPY command

You can use the COPY command within a Dockerfile to copy files from the local filesystem into a specific directory within the image. This technique is recommended for building production-ready Docker images.
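
As a sketch, a Dockerfile for a Node.js app using COPY might look like this (the base image, paths and start command are assumptions):

FROM node:10

WORKDIR /usr/src/app

# Copy and install dependencies first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm install

# Copy the application code into the image
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]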

Resources

A Beginner-Friendly Introduction to Containers, VMs and Docker
Docker Development WorkFlow — a guide with Flask and Postgres
An Introduction to Docker Through Story
What is Docker?
How to Get Code into a Docker Container

Cloud Computing

Computing Service Models

IaaS (Infrastructure as a Service)

With IaaS, pre-configured hardware resources are provided to users through a virtual interface. Implementation of applications, and even the operating system, is left up to the customer; IaaS simply provides access to the infrastructure needed to power or support that software.

PaaS (Platform as a Service)

PaaS provides a platform and environment to allow developers to build applications and services over the internet. PaaS offerings typically include a base operating system and a suite of applications and development tools. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage.

PaaS solutions provide a platform that allows customers to develop, launch, and manage apps in a way that is much simpler than having to build and maintain the infrastructure.

SaaS (Software as a Service)

Sometimes referred to as ‘on-demand software’, SaaS is a software licensing and delivery model where a fully functional and complete software product is delivered to users over the web on a subscription basis.

SaaS offerings are typically accessed by end users through a web browser (making the user’s operating system largely irrelevant) and can be billed based on consumption or, more simply, with a flat monthly charge.

Popular products like Office365 and Salesforce have thrust SaaS offerings to the forefront of the workplace and are used by thousands of businesses every day.

Amazon Web Services

AWS provides IT infrastructure and other services over the internet. It provides on-demand computing resources and services in the cloud, with pay-as-you-go pricing.

AWS Products

AWS provides building blocks that you can assemble quickly to support any workload. With AWS, you’ll find a complete set of highly available services that are designed to work together to build scalable applications. The following categories represent the core products of AWS.

Compute and Networking Services

  • Amazon EC2 (Provides virtual servers in the AWS cloud)
  • Amazon VPC (Provides an isolated virtual network for your virtual servers)
  • Elastic Load Balancing (Distributes network traffic across your set of virtual servers)
  • Auto Scaling (Automatically scales your set of virtual servers based on changes in demand)
  • Amazon Route 53 (Routes traffic to your domain name to a resource, such as a virtual server or a load balancer)
  • AWS Lambda (Runs your code on virtual servers from Amazon EC2 in response to events)
  • Amazon ECS (Provides Docker containers on virtual servers from Amazon EC2)

Storage and Content Delivery Services

  • Amazon S3 (Scalable storage in the AWS cloud)
  • CloudFront (A global content delivery network (CDN))
  • Amazon EBS (Network attached storage volumes for your virtual servers)
  • Amazon Glacier (Low-cost archival storage)

Security and Identity Services

  • AWS Identity and Access Management (Manage user access to AWS resources through policies)
  • AWS Directory Service (Manage user access to AWS through your existing Microsoft Active Directory, or a directory you create in the AWS cloud)

Database Services

  • Amazon RDS (Provides managed relational databases)
  • Amazon Redshift (A fast, fully managed, petabyte-scale data warehouse)
  • Amazon DynamoDB (Provides managed NoSQL databases)
  • Amazon ElastiCache (An in-memory caching service)

Analytics Services

Amazon EMR (Elastic MapReduce) uses Hadoop, an open source framework, to manage and process data. Hadoop uses the MapReduce engine to distribute processing across a cluster.

  • Amazon EMR (You identify the data source, specify the number and type of EC2 instances for the cluster and what software should be on them, and provide a MapReduce program or run interactive queries)
  • AWS Data Pipeline (Regularly move and process data)
  • Amazon Kinesis (Real-time processing of streaming data at a massive scale)
  • Amazon ML (Use machine learning technology to obtain predictions for your applications using simple APIs. Amazon ML finds patterns in your existing data, creates machine learning models, and then uses those models to process new data and generate predictions)

Application Services

  • Amazon AppStream (Host your streaming application in the AWS cloud and stream the input and output to your users’ devices)
  • Amazon CloudSearch (Add search to your website)
  • Amazon Elastic Transcoder (Convert digital media into the formats required by your users’ devices)
  • Amazon SES (Send email from the cloud)
  • Amazon SNS (Send or receive notifications from the cloud)
  • Amazon SQS (Enable components in your application to store data in a queue to be retrieved by other components)
  • Amazon SWF (Coordinate tasks across the components of your application)

Management Tools

  • Amazon CloudWatch (Monitor resources and applications)
  • AWS CloudFormation (Provision your AWS resources using templates)
  • AWS CloudTrail (Track the usage history for your AWS resources by logging AWS API calls)
  • AWS Config (View the current and previous configuration of your AWS resources, and monitor changes to your AWS resources)
  • AWS OpsWorks (Configure and manage the environment for your application, whether in the AWS cloud or your own data center)
  • AWS Service Catalog (Distribute servers, databases, websites, and applications to users using AWS resources)

AWS can be accessed through:

  • AWS Management Console
  • AWS Command Line Interface (AWS CLI)
  • Command Line Tools
  • AWS Software Development Kits (SDK)
  • Query APIs

There is a detailed guide on how to install and use each of these options in the documentation. As you can see it takes a while to get familiar with each tool to get into some sort of workflow.

Key points on AWS

  • Elastic pay-per-use infrastructure
  • On demand resources
  • Scalability
  • Global infrastructure
  • Reduced time to market
  • Increased opportunities for innovation
  • Enhanced security

Comparing cloud services and traditional on-premises infrastructure

Imagine a scenario where a development team is tasked to develop a new service for a global enterprise company that currently serves millions of consumers worldwide. The new service must:

  • scale to meet peak data demands
  • provide resiliency in the event of a datacenter failure
  • ensure data is secure and protected
  • provide in-depth debugging for troubleshooting
  • be delivered quickly
  • be cost-efficient to build and maintain

Below is a comparison of building it on on-premises infrastructure and building it using AWS cloud infrastructure.

Scalability

With on-premises infrastructure, the compute capacity needs to be sized to match peak data demands. If the service has a large variable workload, this leaves you with a lot of excess and expensive compute capacity in times of low utilization, which can be seen as wasteful. There are also the operational costs of supporting a rack of bare-metal servers for a single service.

With cloud-based infrastructure and an auto scaling solution, you can define auto scaling groups that seamlessly spin up more instances of the application based on demand. This means that you're only paying for the compute resources that you use.

Resiliency

To meet basic resiliency criteria, hosting a service within a single datacenter is not an option. For an on-premises solution, the team would need a minimum of two servers for local resiliency, replicated in a second datacenter for geographic redundancy. A load balancing solution also needs to be implemented that automatically redirects traffic between sites in the event of saturation or failure, and the mirror sites need to be kept continually synchronised with each other.

AWS provides multiple Availability Zones within each of its regions worldwide, with automated failover capabilities that can seamlessly transition AWS services to other zones within the region. The AWS load balancer requires minimal effort to set up and manage the traffic flow.

Security and protection of data

For the on-premises solution, there will be an ongoing cost of monitoring, identifying and patching security threats. For the cloud-based solution, the development team still has to be vigilant, but doesn't have to worry about patching the underlying infrastructure.

Monitoring and logging

Infrastructure and application services need to be monitored, ideally with a dashboard that provides alerts when thresholds are exceeded and the ability to easily access and search logs for troubleshooting. This can be very difficult to set up with an on-premises solution, and developers may find themselves searching through log files located on different servers.

Native AWS services such as CloudWatch and CloudTrail make monitoring cloud applications easy with the cloud based solution. Without much setup, the development team can monitor a wide variety of different metrics for each of the deployed services.

Disadvantages of Cloud Computing

Downtime

As the cloud requires an internet connection to work, if your connection is slow or down, it will affect your work as you will not be able to access any of your applications, servers or data. If your internet service suffers from frequent outages or slow speeds, cloud computing may not be suitable for your business.

As cloud service providers take care of a number of clients each day, they can become overwhelmed and may even come up against technical outages themselves. Even the most reliable cloud computing service providers suffer server outages now and again.

This is why it is important to choose a reliable cloud services provider, because even if an outage occurs, you can be sure your provider will try and resolve the problem as soon as possible. With the right provider, cloud computing is still much more reliable and consistent than an in-house IT infrastructure.

Security and Privacy

Cloud-based solutions are exposed on the public internet and so are a more vulnerable target for malicious users and hackers. Nothing on the internet is completely secure, and even the biggest organisations can suffer serious attacks and security breaches.

Also, by using cloud-based services, a company is essentially 'outsourcing' its data and trusting the provider to effectively manage and safeguard it. It is therefore essential to choose a trusted, experienced and reliable provider with a proven track record.

Many cloud experts, however, believe that trusted cloud data centres, such as Amazon Web Services, have better security than an in-house data centre, so security is really dependent upon the provider.

Vendor Lock-in

Differences between provider systems can make it difficult, and sometimes impossible, to migrate from one provider to another. Not only can it be complex and expensive to reconfigure your applications to meet the requirements of a new host, it can also be painful and cumbersome to transfer huge amounts of data from the old provider to the new one, and you could expose your data to additional security and privacy vulnerabilities.

Be careful when you're choosing a cloud computing vendor that you're not going to become a "forever" customer because their applications and/or data formats do not allow easy transfer/conversion of information into other systems. Some vendors deliberately attempt to "lock-in" customers by using proprietary software/hardware, so that it is impossible or very expensive to switch to another cloud vendor.

Costs

For a small to medium size business, cloud computing can be a lot cheaper than having in-house servers. However, if your business is very large, such as at the corporate level, the benefits of cloud computing dwindle as the costs skyrocket. Be sure to analyse the cost of both an in-house server and the cloud before making a decision for your business.

You should look closely at the pricing plans and details for each application, taking into account possible future expansion. For example, the president of a non-profit organization that switched to a cloud-based membership application found that when their membership numbers exceeded the limits on their contract, the price to go to the next tier was nearly double.

If you don't need the most up-to-date versions of software every year, desktop software can be cheaper in the long run. For instance, if you purchase the desktop version of Microsoft Office 2016 and use it for several years, you pay a one-time fee and own the software forever versus having to pay an annual fee for using the cloud-based version, Office 365.

If your business involves transferring large amounts of data, be aware that while transferring data to the cloud (inbound) is free, outbound data transfers over the basic monthly allowance are charged on a per GB basis. If your business requirements will include regularly downloading large amounts of data from your cloud applications or data storage, the additional costs can add up. (See, for example, Microsoft Azure data transfer pricing.)

Resources

SaaS, PaaS, and IaaS: Understanding the Three Cloud Computing Service Models
The Cost of Cloud Computing
Introducing Amazon Web Services (AWS)
Disadvantages of cloud computing

Website / Application Loading Performance

Introduction

Slow running websites and applications result in higher bounce rates (percentage of visitors who enter the site and then leave), fewer return visits and frustrated users.

In a DoubleClick by Google study, it was found that 53% of mobile site visits were abandoned if a page took longer than 3 seconds to load. The same study found that sites loading within 5 seconds had 70% longer sessions and 35% lower bounce rates than sites taking nearly four times longer at 19 seconds, and that publishers whose sites loaded within 5 seconds earned up to twice as much ad revenue as sites loading in 19 seconds.

Google has also implemented site speed as a ranking signal in its mobile searches so slow loading is detrimental for search engine optimization (SEO).

With web site and application size and functionality becoming more demanding of network and device resources, coupled with the fact that mobile users now make up the largest portion of internet users (many of whom access the web over LTE, 4G, 3G and even 2G networks), optimizing performance should be at the top of the list of priorities when developing web sites and applications.

Below are outlined some techniques we can use to 'grab the low-hanging fruit' and increase the performance of our web sites and applications.

Minifying Your JavaScript, HTML and CSS Code

Minification removes unnecessary characters from code without changing its validity or functionality (things like comments, whitespace, newlines, and extra parentheses). There are quite a few plugins and apps that can be used. Below are some examples:

gulp-html-minifier is a Gulp plugin that can be used to minify HTML.

gulp-babel-minify is a Gulp plugin that can be used to minify JavaScript.

gulp-clean-css and gulp-cssnano are Gulp plugins that can be used to minify CSS.
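
For example, a minimal CSS minification task using gulp-clean-css might look like this (the source and destination paths are assumptions):

var gulp = require('gulp');
var cleanCSS = require('gulp-clean-css');

// Minify all stylesheets and write them to the dist folder
gulp.task('minify-css', function () {
  return gulp.src('style/*.css')
    .pipe(cleanCSS())
    .pipe(gulp.dest('dist/style'));
});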

Reduce Library Use

Although CSS and JavaScript libraries do their best to minify and compress their download files, they can still consume serious bandwidth. If you only need one or two specific features, you can save a lot of download time by replacing them with single-use functions or CSS rules.

You might not need jQuery is a site where you can find alternatives to jQuery code to get the same effects on your site without the download overhead of jQuery.

CSS is a render blocking resource which means that the browser won't render any processed content until the CSS Object Model (CSSOM) is constructed. Therefore it is important to keep your CSS as 'lean' as possible.

If you are using Bootstrap, it may be worth looking at Flexbox and Grid as alternatives so you don't have the overhead of a CSS framework.

Use Gzip

Gzipping text resources can achieve up to 70% compression (or more for larger files). All modern browsers support Gzip compression for HTTP requests but the server must be configured to deliver the compressed resource when requested.

If this configuration is not possible, for example because your hosting company does not provide that option, then look into ways to add Gzipping within your code.

In this article, the author explains how he used the expressjs/compression middleware to shrink his JavaScript and CSS bundles.
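
A minimal sketch of that approach with Express (the port and static directory are assumptions):

var express = require('express');
var compression = require('compression');

var app = express();

// Gzip responses for clients that advertise support via the Accept-Encoding header
app.use(compression());

// Serve the built site
app.use(express.static('dist'));

app.listen(3000);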

Graphical Content

Graphical content can easily account for 60%-85% of a typical website's total bandwidth, so reducing the amount of time images take to load can lead to a significant performance boost.

Remove Unnecessary Images

Consider whether each image is really needed or whether it is needed straight away. Every image removed speeds up your page load time.

Choose Appropriate Image Types

As a rule of thumb, use PNGs for clip art, line drawings or wherever you need transparency, JPGs for photographs and GIFs for animation.

Remove Image Metadata

For most website images, metadata is unimportant so can be stripped out. Some image editors have functionality to view and edit metadata. There are also online tools such as VerExif.

Resize Images

Always size images based on their intended use; you should not rely on the browser to resize them for rendering. Another effective technique to reduce file size for images of all kinds is simple cropping to show only what's important.

Reduce image quality

In most cases, you can reduce the quality of a JPG, and thus the file size, without suffering any visible quality difference. Experiment with lower-quality JPGs to see how low you can go before seeing a difference, and then use the smallest one that retains the photo's clarity.

Compress Images

PNG and JPG images can be squashed down even more using a compression tool, which reduces file size without affecting either image dimensions or visual quality. One such tool is TinyPNG, a free online compression tool.

Reducing HTTP Requests

In addition to reducing download size, we can also reduce download frequency by combining resources.

Combine Text Resources

Web pages may have multiple JavaScript and stylesheet files. Each file requires its own HTTP request, so by combining files we can speed up page loading.

To combine JavaScript and Stylesheet files, things you could use include Gulp with the plugin gulp-concat, Browserify, and Webpack.

Please note that CSS doesn't throw an error when a previously-defined property is reset by a more recent rule, so combining CSS files could cause issues. To overcome this, before concatenating, look for conflicting rules and determine whether one should always supersede the other, or whether one should use more specific selectors to be applied properly.
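
For example, a gulp-concat task that bundles all JavaScript files into one (the paths and bundle name are assumptions):

var gulp = require('gulp');
var concat = require('gulp-concat');

// Concatenate all scripts into a single bundle to reduce HTTP requests
gulp.task('scripts', function () {
  return gulp.src('js/*.js')
    .pipe(concat('bundle.js'))
    .pipe(gulp.dest('dist/js'));
});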

Combine Graphical Resources

This technique is most commonly used with small images such as icons.

You can combine small images into one file then use CSS background positioning to display the desired part of the image (sprite) on the desired place on the page.
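
A sketch of the CSS side, assuming a sprite sheet images/icons.png made up of 32x32 icons laid out side by side:

.icon {
  width: 32px;
  height: 32px;
  background-image: url('images/icons.png');
  background-repeat: no-repeat;
}

/* Shift the background so only the desired icon shows */
.icon-search { background-position: 0 0; }
.icon-user   { background-position: -32px 0; }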

HTTP Caching

When a browser first loads a web page, it stores the resources in the HTTP cache. On the next visit to the page, the browser can look in the cache for resources that were stored during the previous visit and retrieve them from disk, which is often faster than downloading them from the network.

Browsers may have multiple caches that differ in how they acquire, store, and retain content. You can read about how these caches vary in this excellent article, A Tale of Four Caches.

Caching works by categorizing certain page resources in terms of how frequently or infrequently they change. It is important to determine which types of content are more static or more dynamic.
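
As a sketch, with Express you might give fingerprinted static assets a long cache lifetime while forcing the HTML shell to be revalidated on every visit (the paths and durations are assumptions):

var express = require('express');
var path = require('path');

var app = express();

// Fingerprinted assets (e.g. main.abc123.css) rarely change, so cache them for a year
app.use('/static', express.static('dist/static', { maxAge: '365d', immutable: true }));

// The HTML shell is more dynamic, so ask the browser to revalidate it each time
app.get('/', function (req, res) {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});

app.listen(3000);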

You can find a great discussion of caching patterns, options, and potential pitfalls in Caching Best Practices and Max-age Gotchas.

Tools To Measure Performance

WebPageTest is an excellent tool for testing on real mobile devices and envisioning a more real-world setup. You give it a URL, it loads the page on a real Moto G4 device with a slow 3G connection, and then gives you a detailed report on the page's load performance.

Chrome DevTools (built into Google Chrome) provides in-depth analysis on everything that happens while your page loads or runs.

Lighthouse is available in Chrome DevTools, as a Chrome Extension, as a Node.js module, and within WebPageTest. You give it a URL, it simulates a mid-range device with a slow 3G connection, runs a series of audits on the page, and then gives you a report on load performance, as well as suggestions on how to improve.

Lighthouse and Chrome DevTools are primarily for your local iteration as you build your site.

Resources

Google Website optimization

REST

REST (Representational State Transfer) is an architectural style that uses simple HTTP calls for inter-machine (Client and Server) communication. Clients and servers can interact in complex ways without the client knowing anything beforehand about the server and the resources it hosts. The key constraint is that the server and client must both agree on the media used.

RESTful API Design

The API (Application Programming Interface) is an interface through which developers interact with an application's data. If an API is not well designed, it is confusing and hard to use, and developers may stop using it. Even if you are not writing APIs for other developers and products, it is always good for your application to have well designed APIs.

The following are the most important terms related to REST APIs:

  • Resource: An object or representation of something which has some data associated with it, and there can be a set of methods to operate on it. E.g. companies, animals, schools and employees are resources, and delete, add and update are operations to be performed on these resources.
  • Collections: A set of resources, e.g. Companies is the collection of the Company resource.
  • URL (Uniform Resource Locator): A path through which a resource can be located and some actions can be performed on it.

The following are the most important HTTP methods (verbs):

  • GET: Requests data from the resource and should not produce any side effects.
  • POST: Requests the server to create a resource in the database. It is non-idempotent, which means multiple identical requests will have different effects.
  • PUT: Requests the server to update a resource, or create it if it doesn't exist. It is idempotent, which means multiple identical requests will have the same effect.
  • DELETE: Requests that the resource, or an instance of it, be removed from the database.

When designing an API, URL paths should contain the plural form of resources (nouns, not actions or verbs), and the HTTP method should define the kind of action to be performed on the resource. If we want to access one instance of the resource, we can always pass the id in the URL.

For example, say we had an application that has the resource Company:

  • method GET, path /companies: should get the list of all companies
  • method GET, path /companies/34: should get the detail of company 34
  • method DELETE, path /companies/34: should delete company 34

If we have resources under a resource, e.g. the Employees of a Company, then a few sample API endpoints would be:

  • method GET, path /companies/3/employees: should get the list of all employees of company 3
  • method GET, path /companies/3/employees/45: should get the details of employee 45, who belongs to company 3
  • method DELETE, path /companies/3/employees/45: should delete employee 45, who belongs to company 3
  • method POST, path /companies: should create a new company and return the details of the new company created
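
The sketch below shows how these conventions might map onto Express routes (the in-memory store and port are assumptions, used purely for illustration):

var express = require('express');
var app = express();
app.use(express.json());

// An assumed in-memory store, for illustration only
var companies = [{ id: 1, name: 'Acme' }];

// GET /companies -> list all companies
app.get('/companies', function (req, res) {
  res.json(companies);
});

// GET /companies/:id -> details of one company (404 if it does not exist)
app.get('/companies/:id', function (req, res) {
  var company = companies.find(function (c) { return c.id === Number(req.params.id); });
  if (!company) return res.status(404).end();
  res.json(company);
});

// POST /companies -> create a new company and return 201 Created
app.post('/companies', function (req, res) {
  var company = { id: companies.length + 1, name: req.body.name };
  companies.push(company);
  res.status(201).json(company);
});

// DELETE /companies/:id -> remove a company and return 204 No Content
app.delete('/companies/:id', function (req, res) {
  companies = companies.filter(function (c) { return c.id !== Number(req.params.id); });
  res.status(204).end();
});

app.listen(3000);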

When a client makes a request to the server through an API, it should receive feedback on whether the request failed, passed or was wrong. HTTP status codes are standardized codes with various meanings, and the server should always return the right status code.

2xx (Success category):
200 OK: The standard HTTP response representing success for GET, PUT or POST.
201 Created: This status code should be returned whenever a new instance is created.
204 No Content: Indicates the request was successfully processed but has not returned any content. DELETE is a good example of this.

3xx (Redirection category):
304 Not Modified: Indicates that the client already has the response in its cache, so there is no need to transfer the same data again.

4xx (Client Error category):
400 Bad Request: Indicates that the request from the client was not processed, as the server could not understand what the client is asking for.
401 Unauthorized: Indicates that the client is not allowed to access the resource, and should re-request with the required credentials.
403 Forbidden: Indicates that the request is valid and the client is authenticated, but the client is not allowed to access the page or resource for any reason.
404 Not Found: Indicates that the requested resource is not currently available.
410 Gone: Indicates that the requested resource is no longer available and has been intentionally removed.

5xx (Server Error category):
500 Internal Server Error: Indicates that the request is valid, but the server encountered an unexpected error while processing it.
503 Service Unavailable: Indicates that the server is down or unavailable to receive and process the request.

Resources

RESTful API Designing guidelines — The best practices