


React with TypeScript: Best Practices

Jan 16, 2020



React and TypeScript are two awesome technologies used by a lot of developers these days. Knowing how to do things can get tricky, and sometimes it's hard to find the right answer. Not to worry. We've put together the best practices along with examples to clarify any doubts you may have.

Let's dive in!

How React and TypeScript Work Together

Before we begin, let's revisit how React and TypeScript work together. React is a "JavaScript library for building user interfaces", while TypeScript is a "typed superset of JavaScript that compiles to plain JavaScript." By using them together, we essentially build our UIs using a typed version of JavaScript.

The reason you might use them together would be to get the benefits of a statically typed language (TypeScript) for your UI. This means more safety and fewer bugs shipping to the front end.

Does TypeScript Compile My React Code?

A common question that’s always good to review is whether TypeScript compiles your React code. The way TypeScript works is similar to this interaction:

TS: "Hey, is this all your UI code?"
React: "Yup!"
TS: "Cool! I'm going to compile it and make sure you didn't miss anything."
React: "Sounds good to me!"

So the answer is yes, it does! But later, when we cover the tsconfig.json settings, most of the time you'll want to use "noEmit": true. This means TypeScript will not emit JavaScript output after compilation. That's because typically, we're only using TypeScript to type-check our code.

The output is handled, in a CRA setting, by react-scripts. We run yarn build and react-scripts bundles the output for production.

To recap, TypeScript compiles your React code to type-check your code. It doesn’t emit any JavaScript output (in most scenarios). The output is still similar to a non-TypeScript React project.

Can TypeScript Work with React and webpack?

Yes, TypeScript can work with React and webpack. Lucky for you, the official TypeScript Handbook has a guide on that.

Hopefully, that gives you a gentle refresher on how the two work together. Now, on to best practices!

Best Practices

We've researched the most common questions and put together this handy list of the most common use cases for React with TypeScript. This way, you can follow best practices in your projects by using this article as a reference.


One of the least fun, yet most important, parts of development is configuration. How can we set things up in the shortest amount of time to gain maximum efficiency and productivity? We'll discuss project setup, including:

- tsconfig.json
- ESLint
- Prettier
- VS Code extensions and settings

Project Setup

The quickest way to start a React/TypeScript app is by using create-react-app with the TypeScript template. You can do this by running:

npx create-react-app my-app --template typescript

This will get you the bare minimum to start writing React with TypeScript. A few noticeable differences are:

- the .tsx file extension
- the tsconfig.json
- the react-app-env.d.ts

The .tsx extension is for "TypeScript JSX". The tsconfig.json is the TypeScript configuration file, which has some defaults set. The react-app-env.d.ts references the types of react-scripts, and helps with things like allowing for SVG imports.
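For reference, the generated react-app-env.d.ts contains a single triple-slash directive, which pulls in the type declarations that ship with react-scripts (this is what enables things like typed SVG imports):

```typescript
/// <reference types="react-scripts" />
```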


Lucky for us, the latest React/TypeScript template generates tsconfig.json for us. However, they add the bare minimum to get started. We suggest you modify yours to match the one below. We've added comments to explain the purpose of each option as well:

{
  "compilerOptions": {
    "target": "es5", // Specify ECMAScript target version
    "lib": ["dom", "dom.iterable", "esnext"], // List of library files to be included in the compilation
    "allowJs": true, // Allow JavaScript files to be compiled
    "skipLibCheck": true, // Skip type checking of all declaration files
    "esModuleInterop": true, // Disables namespace imports (import * as fs from "fs") and enables CJS/AMD/UMD style imports (import fs from "fs")
    "allowSyntheticDefaultImports": true, // Allow default imports from modules with no default export
    "strict": true, // Enable all strict type checking options
    "forceConsistentCasingInFileNames": true, // Disallow inconsistently-cased references to the same file
    "module": "esnext", // Specify module code generation
    "moduleResolution": "node", // Resolve modules using Node.js style
    "resolveJsonModule": true, // Include modules imported with .json extension
    "isolatedModules": true, // Transpile each file as a separate module
    "noEmit": true, // Do not emit output (meaning do not compile code, only perform type checking)
    "jsx": "react", // Support JSX in .tsx files
    "sourceMap": true, // Generate corresponding .map file
    "declaration": true, // Generate corresponding .d.ts file
    "noUnusedLocals": true, // Report errors on unused locals
    "noUnusedParameters": true, // Report errors on unused parameters
    "experimentalDecorators": true, // Enables experimental support for ES decorators
    "incremental": true, // Enable incremental compilation by reading/writing information from prior compilations to a file on disk
    "noFallthroughCasesInSwitch": true // Report errors for fallthrough cases in switch statements
  },
  "include": [
    "src/**/*" // The files TypeScript should type check
  ],
  "exclude": ["node_modules", "build"] // The files to not type check
}

The additional recommendations come from the react-typescript-cheatsheet community, and the explanations come from the Compiler Options docs in the Official TypeScript Handbook. This is a wonderful resource if you want to learn about other options and what they do.


To ensure that your code follows the rules of the project or your team, and that its style is consistent, it's recommended you set up ESLint and Prettier. To get them to play nicely, follow these steps to set them up.

Install the required dev dependencies:

yarn add eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin eslint-plugin-react --dev

Create a .eslintrc.js file at the root and add the following:

module.exports = {
  parser: '@typescript-eslint/parser', // Specifies the ESLint parser
  extends: [
    'plugin:react/recommended', // Uses the recommended rules from @eslint-plugin-react
    'plugin:@typescript-eslint/recommended', // Uses the recommended rules from @typescript-eslint/eslint-plugin
  ],
  parserOptions: {
    ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
    sourceType: 'module', // Allows for the use of imports
    ecmaFeatures: {
      jsx: true, // Allows for the parsing of JSX
    },
  },
  rules: {
    // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
    // e.g. "@typescript-eslint/explicit-function-return-type": "off",
  },
  settings: {
    react: {
      version: 'detect', // Tells eslint-plugin-react to automatically detect the version of React to use
    },
  },
};

Add Prettier dependencies:

yarn add prettier eslint-config-prettier eslint-plugin-prettier --dev

Create a .prettierrc.js file at the root and add the following:

module.exports = {
  semi: true,
  trailingComma: 'all',
  singleQuote: true,
  printWidth: 120,
  tabWidth: 4,
};

Update the .eslintrc.js file:

module.exports = {
  parser: '@typescript-eslint/parser', // Specifies the ESLint parser
  extends: [
    'plugin:react/recommended', // Uses the recommended rules from @eslint-plugin-react
    'plugin:@typescript-eslint/recommended', // Uses the recommended rules from the @typescript-eslint/eslint-plugin
+   'prettier/@typescript-eslint', // Uses eslint-config-prettier to disable ESLint rules from @typescript-eslint/eslint-plugin that would conflict with prettier
+   'plugin:prettier/recommended', // Enables eslint-plugin-prettier and displays prettier errors as ESLint errors. Make sure this is always the last configuration in the extends array.
  ],
  parserOptions: {
    ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
    sourceType: 'module', // Allows for the use of imports
    ecmaFeatures: {
      jsx: true, // Allows for the parsing of JSX
    },
  },
  rules: {
    // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
    // e.g. "@typescript-eslint/explicit-function-return-type": "off",
  },
  settings: {
    react: {
      version: 'detect', // Tells eslint-plugin-react to automatically detect the version of React to use
    },
  },
};

These recommendations come from a community resource called "Using ESLint and Prettier in a TypeScript Project", written by Robert Cooper. If you visit his blog, you can read more about the "why" behind these rules and configurations.

VSCode Extensions and Settings

We've added ESLint and Prettier, and the next step to improve our developer experience (DX) is to automatically fix and prettify our code on save.

First, install the ESLint extension for VSCode. This will allow ESLint to integrate with your editor seamlessly.

Next, update your Workspace settings by adding the following to your .vscode/settings.json:

code block
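The original snippet isn't preserved here, but a typical configuration for the VS Code ESLint extension (assuming a recent version of the extension, which supports the editor.codeActionsOnSave setting) looks like this:

```json
{
  // Run ESLint's auto-fix (including Prettier, via eslint-plugin-prettier) on save
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": true
  }
}
```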

This will allow VS Code to work its magic and fix your code when you save. It's beautiful!

These suggestions also come from the previously linked article "Using ESLint and Prettier in a TypeScript Project" by Robert Cooper.

The post React with TypeScript: Best Practices appeared first on SitePoint.

How Do Developers See Themselves? A Quantified Look

Jan 16, 2020


This article was originally published by SlashData. Thank you for supporting the partners who make SitePoint possible.

For the first time in our Q2 2019 Developer Economics survey, we tried to introduce developers in their own words by asking them about how they see themselves.

We provided a set of 21 words and asked them to choose up to five to form a word sketch of their personality. We also gave them the opportunity to provide their own text description.

Here’s what we got:


Over half of the developers say they are logical

Perhaps unsurprisingly, nearly six out of ten developers say they are logical. And as it turns out this is the most popular choice of description across all software development sectors, except in games development. Next in line, but some way behind, are the descriptors team player and introvert at 37% each. By comparison, just 10% label themselves as an extrovert. But can you guess which programmers consider themselves less introverted? Those involved in the AR/VR and IoT sector. Interesting, right?

Moving on to a slightly more unusual pair of labels: there are slightly more dog lovers than cat people in the developer population, although the numbers are close at 15% and 13% respectively. A much greater difference seems to exist though between developers working at night (night owls, 29%) and those who prefer the fresh morning breeze (early birds, 14%).


What about hobbies and spare time?

A third (33%) of developers say they are a reader, which makes it the most popular choice among spare-time activities. It is closely followed by 31% who say they are a gamer. Our data shows that developers tend to perceive themselves differently as they grow older. More than one in three developers up to the age of 34 consider themselves to be a gamer, compared to fewer than one in four in the 35-44 age group, and fewer than one in five in the 45-54 age group. Older programmers are more likely to describe themselves as readers.

“What’s this “real life” you’re talking about like? Is it similar to WoW? Does it run on a 64 bit OS?”

Other activities such as music and sport score lower, at 20% and 17%. A low 7% make LEGO models, although the popularity of LEGO seems to be very much dependent upon age. A respectable 12% of developers under 18 make LEGO models, but the proportion halves to 6% within the age group 18-24.

What about the artistic ones?

Even though a developer’s work demands a high level of creativity, just 14% use “artistic” to describe themselves. Those involved in games or in augmented reality and virtual reality development are far more likely than others to use this word to describe themselves. 21% of game developers and about 25% of AR/VR developers see themselves as artistic, as compared to 16% or less of desktop, web and backend developers.

Lastly, in our Q2 2019 Developer Economics survey, a few programmers were confused as to why we were asking the question and pondered if we were trying to set up a dating site. Well, we weren’t! We were collecting the data to create the State of the Developer Nation Report, 17th Edition.

Interested in joining forces with 40,000 developers worldwide in shaping the future of the developer ecosystem? Take our survey.

The post How Do Developers See Themselves? A Quantified Look appeared first on SitePoint.

Learn End-to-end Testing with Puppeteer

Jan 16, 2020



Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.

In this tutorial, we’ll learn what testing is, the different types of testing, and then we’ll use Puppeteer to perform end-to-end testing on our application. By the end of this tutorial, you should be able to end-to-end test your apps easily with Puppeteer.


For this tutorial, you need a basic knowledge of JavaScript, ES6+ and Node.js.

You must also have installed the latest version of Node.js.

We’ll be using yarn throughout this tutorial. If you don’t have yarn already installed, install it from here.

You should also know the basics of Puppeteer. To understand the basics of Puppeteer, check out this simple tutorial.

To make sure we’re on the same page, these are the versions used in this tutorial:

- Node 13.3.0
- npm 6.13.2
- yarn 1.21.1
- puppeteer 2.0.0
- create-react-app 3.3.0

Introduction to Testing

In simple terms, testing is a process to evaluate whether an application works as expected. It helps in catching bugs before your application gets deployed.

There are four different types of testing:

- Static Testing: uses a static type system like TypeScript, ReasonML, or Flow, or a linter like ESLint. This helps in catching basic errors like typos and syntax errors.
- Unit Testing: the smallest part of an application, also known as a unit, is tested.
- Integration Testing: multiple related units are tested together to see if the application works perfectly in combination.
- End-to-end Testing: the entire application is tested from start to finish, just like a regular user would use it, to see if it behaves as expected.

The testing trophy by Kent C Dodds is a great visualization of the different types of testing:

Testing Trophy - Kent C Dodds

The testing trophy should be read bottom-to-top. If you perform these four levels of testing, you can be confident enough with the code you ship.

Now let’s perform end-to-end testing with Puppeteer.

End-to-end Testing with Puppeteer

Let's bootstrap a new React project with create-react-app, also known as CRA. Go ahead and type the following in the terminal:

$ npx create-react-app e2e-puppeteer

This will bootstrap a new React project in an e2e-puppeteer folder. Thanks to the latest create-react-app version, this will also install testing-library by default so we can test our applications easily.

Go inside the e2e-puppeteer directory and start the server by typing the following in the terminal:

$ cd e2e-puppeteer
$ yarn start

It should look like this:

React Init

Our App.js looks like this:

import React from 'react';
import logo from './logo.svg';
import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>
          Edit <code>src/App.js</code> and save to reload.
        </p>
        <a
          className="App-link"
          href=""
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
      </header>
    </div>
  );
}

export default App;

We’ll be testing the App.js function and the code will be written in App.test.js. So go ahead and open up App.test.js. It should have the following content:

import React from 'react';
import { render } from '@testing-library/react'; // 1
import App from './App';

test('renders learn react link', () => { // 2
  const { getByText } = render(<App />); // 3
  const linkElement = getByText(/learn react/i); // 4
  expect(linkElement).toBeInTheDocument(); // 5
});

Here's what's happening in the code above:

1. We import the render function from the @testing-library/react package.
2. We then use the global test function from Jest, which is our test runner installed by default through CRA. The first parameter is a string which describes our test, and the second parameter is a function where we write the code we want to test.
3. Next up, we render the App component and destructure a method called getByText, which searches for all elements that have a text node with textContent.
4. Then, we call the getByText function with the text we want to check. In this case, we check for learn react with the case-insensitive flag.
5. Finally, we make the assertion with the expect function to check if the text exists in the DOM.

This comes by default when we bootstrap with CRA. Go ahead and open up another terminal and type the following:

$ yarn test

When it shows a prompt, type a to run all the tests. You should now see this:

React Init Test

Now let's test this application with end-to-end testing.

Testing the Boilerplate with Puppeteer

Go ahead and install puppeteer as a dev dependency by typing the following in the terminal:

$ yarn add -D puppeteer

Now open up App.test.js and paste the following:

import puppeteer from "puppeteer";

// 1
let browser;
let page;

// 2
beforeAll(async () => {
  browser = await puppeteer.launch({ headless: false });
  page = await browser.newPage();
  await page.goto("http://localhost:3000/");
});

// 3
test("renders learn react link", async () => {
  await page.waitForSelector(".App");

  const header = await page.$eval(".App-header>p", e => e.innerHTML);
  expect(header).toBe(`Edit <code>src/App.js</code> and save to reload.`);

  const link = await page.$eval(".App-header>a", e => {
    return { innerHTML: e.innerHTML, href: e.href };
  });
  expect(link.innerHTML).toBe(`Learn React`);
  expect(link.href).toBe("");
});

// 4
afterAll(() => {
  browser.close();
});

This is what we're doing in the code above:

1. Firstly, we import the puppeteer package and declare some global variables, browser and page.
2. Then we have the beforeAll function provided by Jest. This runs before all tests are run. Here, we launch a new Chromium browser by calling puppeteer.launch(), while setting headless mode to false so we see what's happening. Then, we create a new page by calling browser.newPage() and then go to our React application's URL http://localhost:3000/ by calling the page.goto() function.
3. Next up, we wait for the .App selector to load. When it loads, we get the innerHTML of the .App-header>p selector by using the page.$eval() method and compare it with Edit src/App.js and save to reload.. We do the same thing with the .App-header>a selector. We get back innerHTML and href and then we compare them with Learn React and respectively to test our assertion with Jest's expect() function.
4. Finally, we call the afterAll function provided by Jest. This runs after all tests are run. Here, we close the browser.

This test should automatically run and give you the following result:

E2E Test Puppeteer Basic

Let's go ahead and make a counter app.

The post Learn End-to-end Testing with Puppeteer appeared first on SitePoint.

15 Top WordPress Themes to Use in 2020

Jan 14, 2020



This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Overworked, overstressed, and flat out fed up with starting every website design from scratch? Here are some WordPress theme solutions you’ll appreciate.

Maybe you need to switch to an easy-to-use theme — a WordPress theme that’s crazy-fast and gives you reliable performance may be your cup of tea.

Tired of having to build your websites from scratch? It’s totally unnecessary unless for some reason you absolutely want to.

Before you blame yourself for the situation you find yourself in, consider this: maybe it’s the tools you’re using. You may be trying to build a house without the use of power tools, scaffolding, or helpful aids.

One of the following 15 top WordPress themes should prove to be the solution to your problem. In fact, more than one of them could probably serve quite nicely.

Grab a cup of coffee and let’s get started.

1. BeTheme: Responsive, Multi-purpose WordPress Theme

BeTheme: Responsive, Multi-purpose WordPress Theme

This biggest-of-them-all multipurpose WordPress theme can’t be beaten in terms of the huge array of “power” tools and design elements it places at your disposal. BeTheme is fast and flexible. It’s easy for beginners to work with. If trying to satisfy multiple clients has become more stressful than rewarding, BeTheme has a solution for that as well.

Be’s selection of 500+ customizable, responsive pre-built websites is the highlight and a proven stress reducer. These professionally crafted, pre-built websites cover 30 industry sectors, all the common websites, and an impressive range of business niches.

They also have UX features and functionalities built into them, potentially saving you a ton of design time.

BeTheme uses the popular Muffin Builder 3 page builder, with WPBakery as an option. There’s a Layouts Configurator if you really want to, or absolutely have to, build a page from scratch. It has a Shortcode Generator and a large selection of shortcodes that, together with Be’s drag and drop features, eliminates the need for coding. Be’s powerful Admin Panel provides unmatched flexibility.

I have purchased 4 of these themes at this point. Love the speed and build of them. Only wish list item would be a way to categorize and tag pages like you can with posts. — sharkyh2o

Click here and browse Be’s impressive collection of pre-built websites.

2. Total Theme

Total Theme

Total is another stress-reducing theme. This flexible and easy-to-use WordPress theme has been around for a while and has amassed a user base of 41,000 happy customers.

Total is drag and drop, and it doesn't require coding to build exactly the type of website you have in mind. Total is also developer friendly thanks to its system of hooks, filters, and snippets. There are more than 500 advanced customizing options available, plus 100+ page-builder elements and design modules to work with and 40+ pre-built demos to get any project off to a solid start. You won't be burdened by third-party plugins either, since this WooCommerce-ready theme is compatible with all WordPress plugins. Reviewers highlight:

- Very friendly
- Very simple
- Clean code
- Good flexibility
- Cool elements
- Excellent custom panel
- Good integration with WooCommerce

Love this theme, it can do everything I need including shops, in a very good and easy way. — soswebdesign

Click here to discover if Total is the solution you’ve been looking for.

3. Avada


If you choose a best-selling theme, chances are it’s going to relieve rather than add to any stress you may be encountering. Avada is such a theme.

Its Dynamic Content System provides unmatched flexibility. Avada integrates totally with WooCommerce and includes product design drag and drop capabilities. 55+ pre-built websites are included to get you off to a fast start.

Great theme! As my first WordPress theme, it offers many options and continues to improve! — nwilger

Click here to find out more about this best-seller.

4. TheGem: Creative, Multi-Purpose, High-Performance WordPress Theme


Many web designers will tell you that TheGem features some of the most beautiful designs for WordPress. What really gets them excited, however, are the tools that come with the package.

Those same designers will tell you that TheGem is the ultimate WordPress toolbox. To name just a few of the goodies, you'll find:

- plenty of pre-built, one-click installable websites
- over 400 modern and trendy design templates
- a ready-to-go fashion store

Great theme and great service. — bepreoo

Your very own ultimate toolbox is just a click or two away.

5. Uncode: Creative, Multiuse WordPress Theme


Bloggers, freelancers, and creatives of all types, plus small businesses and agencies, will benefit from making this ThemeForest bestseller, with its 60K+ sales, their theme of choice. This is doubly true if you need to create a portfolio or magazine-style website, or any type or style of page.

Features include:

- a powerful front-end editor
- adaptive image and advanced grid systems
- WooCommerce compatibility and single product design and display features

The star of the show is Uncode’s showcase of user-created websites. They tell a story of what Uncode could do for you, plus they are a source of inspiration.

Nice code, good support, design possibilities are endless. — zoutmedia

Visit Uncode and browse its showcase of user-built websites.

6. Houzez: Highly Customizable Real Estate WordPress Theme


There are some website types that a multi-purpose theme simply can’t help you with — usually because of unique and special features that are required. For the real estate sector, as an example, using a theme like Houzez is a must. Houzez's unique functionalities include:

- advanced property searching
- flexible property listings formatting
- a property management system

In addition, this drag and drop theme can easily be customized to match a realtor’s business model.

I really love the function and the appearance of the theme. — stuffmartusa2

If you happen to have a realtor for a client, look no further.

The post 15 Top WordPress Themes to Use in 2020 appeared first on SitePoint.

How Four Programmers Got Their First Python Jobs

Jan 13, 2020



No one really knows how to do a job before they do it. Most people land a coveted position through a strange alchemy of related experience, networking, and hard work. The real experience is the job itself. That’s when you get the opportunity to apply what you know to real-world problems and see it pay off.

The following four programmers earned their first Python jobs in different ways. Some had prior Python experience, some didn’t. Some knew what they were getting into, others found out later. Understanding how they landed their first Python job might help you land yours. Here’s how they did it.

Nathan Grieve

First Python job: Data Scientist

How Nathan Got the Job

While completing my Physics degree, I applied for a data science job with a small tech startup that primarily used Python (and SQL). The thing is, I didn’t have experience with Python at the time. When the interview came around, I answered the programming questions by using pseudocode to demonstrate I understood the concepts.

Pseudocode uses coding logic without using coding syntax. So by using the same logic that Python does, I could show an understanding of the concepts without being specific to any language.

For example, any computer scientist can understand the simple pseudocode below, but they may not understand the Python function unless they've worked with it before.


loop_index = 0
while loop_index < 5:
    print(loop_index)
    loop_index += 1


Set loop index to 0
Loop while loop index is less than 5
    Print loop index
    Increase loop index by 1

Pseudocode is more readable to humans, too. It’s not actually much different from code; it just avoids using language-specific syntax. And using it worked! They gave me the job. But of course, before I arrived I had to actually learn the language.

Nathan's Advice

My advice for those wanting to enter the field is to tackle real-world problems as soon as you can. At Project Hatch, a company I cofounded that analyzes startups and provides them with analytics to grow their businesses, we do hire people who are self-taught, but there's a huge skill gap between those who only do Codecademy-style courses and those who actually apply their knowledge. I would say keep working through Codewars challenges until you’re at a point where you don’t have to repeatedly look up what arguments you should be using and what order they should be used in.

If you’re looking for real-world problems to solve, go on Kaggle, which has a huge number of data sets to play with, and practice pulling useful information out of them. For example, if you’re looking at a data set for food recipes, align the data set with local food prices to find all of the recipes that create meals for under $5. When you’re ready for a real challenge, try Kaggle competitions. You'll find problems to solve and companies willing to pay. These challenges will be incredibly difficult to begin with, but you'll learn a lot discussing solutions with other computer scientists on the forum.
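To make Nathan's recipe example concrete, here is a sketch of that kind of data-set join. It uses pandas with made-up, in-memory data and hypothetical column names rather than a real Kaggle data set:

```python
import pandas as pd

# Hypothetical recipe data: each row pairs a recipe with one of its ingredients
recipes = pd.DataFrame({
    "recipe": ["lentil soup", "lentil soup", "steak frites", "steak frites"],
    "ingredient": ["lentils", "carrots", "steak", "potatoes"],
})

# Hypothetical local price data, in dollars per ingredient
prices = pd.DataFrame({
    "ingredient": ["lentils", "carrots", "steak", "potatoes"],
    "price": [1.50, 0.80, 12.00, 1.20],
})

# Align the two data sets on the ingredient column, then total the cost per recipe
cost = recipes.merge(prices, on="ingredient").groupby("recipe")["price"].sum()

# Keep only the recipes that create meals for under $5
cheap_meals = cost[cost < 5].index.tolist()
print(cheap_meals)  # ['lentil soup']
```

The same merge-then-aggregate pattern scales to real Kaggle data sets; only the file loading and column names change.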

Bill Price

First Python job: Cyber Security Architect

How Bill Got the Job

I had supported Python developers for a number of years as a NASA network administrator and security engineer, so I was aware of the power and flexibility of the language before a new opportunity presented itself.

In 2017, I was approached by a major financial institution to join a team charged with developing a new assessment program to identify monitoring gaps in a particular business process and its supporting applications. I believe they came to me because of my:

- network and security experience
- lack of experience in the financial sector, as they wanted a fresh set of technical eyes on their problem
- ability to tease out what the actual requirements are
- ability to approach a new project with an open mind and no preconceived notions

Funnily enough, and unbeknownst to me, this turned out to be my first Python job.

Our team was expected to triage the gaps, identify possible mitigations, and report our findings to leadership. We began by mapping applications to each business process, but quickly realized that the sizes of the different data sets we needed to review (application and hardware inventories, Qualys vulnerability scans, daily BladeLogic reports, Splunk logs, etc.) were too large to import into Excel spreadsheets. Furthermore, we didn't have access to traditional UNIX text processing resources or administrative access to our workstations, where we might have installed any new data management tools. And we didn’t have the budget to purchase new tools.

We did, however, have access to Python, a full set of Python libraries, and the ability to install Python using existing enterprise support software.

I didn’t know Python going in. I had to learn on the job, and good thing I did. Python was critical in our being able to parse hardware inventories based on applications used by the business process, isolate vulnerabilities associated with the appropriate hardware, and identify unauthorized services running on any device that supported one (or more) applications.
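The kind of cross-referencing Bill describes can be sketched with Python's built-in csv module. This is an illustrative reconstruction with made-up file contents and field names, not his actual scripts:

```python
import csv
import io

# Made-up stand-ins for the real inventory and vulnerability-scan exports
inventory_csv = """hostname,application
web01,billing
db01,billing
mail01,email
"""

scan_csv = """hostname,vulnerability
web01,CVE-2017-0001
mail01,CVE-2017-0002
"""

# Parse the hardware inventory: hosts supporting the business process under review
billing_hosts = {
    row["hostname"]
    for row in csv.DictReader(io.StringIO(inventory_csv))
    if row["application"] == "billing"
}

# Isolate vulnerabilities associated with those hosts only
findings = [
    (row["hostname"], row["vulnerability"])
    for row in csv.DictReader(io.StringIO(scan_csv))
    if row["hostname"] in billing_hosts
]
print(findings)  # [('web01', 'CVE-2017-0001')]
```

With real files, the io.StringIO wrappers would simply be replaced by open() calls; the filtering logic stays the same.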

Bill’s Advice

My advice to aspiring Python developers is threefold.

First, familiarize yourself with the different libraries available in Python that might assist you in a potential job. Our team used mechanize, cookielib, urllib, urllib2, and csv extensively. If you're looking at a machine-learning project, pay attention to libraries like TensorFlow, NumPy, and Keras.

Next, be on the lookout for processes that need to be automated, or where existing automation can be improved. There's likely an opportunity for applying Python.

Lastly, have a good Python reference book to supplement all of the online resources that are available. I recommend T.J. O'Connor's Violent Python.

The post How Four Programmers Got Their First Python Jobs appeared first on SitePoint.

4 Reasons to Use Image Processing to Optimize Website Media

Jan 10, 2020



This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Image optimization is a big deal when it comes to website performance. You might be wondering if you’re covering all the bases by simply keeping file size in check. In fact, there’s a lot to consider if you truly want to optimize your site’s images.

Fortunately, there are image processing tools and content delivery networks (CDNs) available that can handle all the complexities of image optimization. Ultimately, these services can save you time and resources, while also covering more than one aspect of optimization.

In this article, we’ll take a look at the impact image optimization can have on site performance. We’ll also go over some standard approaches to the problem, and explore some more advanced image processing options. Let’s get started!

Why Skimping on Image Optimization Can Be a Performance Killer

If you decide not to optimize your images, you’re essentially tying a very heavy weight to all of your media elements. All that extra weight can drag your site down a lot. Fortunately, optimizing your images trims away the unnecessary data your images might be carrying around.

If you’re not sure how your website is currently performing, you can use an online tool to get an overview.

Results of a website speed test

Once you have a better picture of what elements on your website are lagging or dragging you down, there are a number of ways you can tackle image optimization specifically, including:

- Choosing appropriate image formats. There are a number of image formats to choose from, and each has its strengths and weaknesses. In general, it's best to stick with JPEGs for photographic images. For graphic design elements, on the other hand, PNGs are typically superior to GIFs. Additionally, new image formats such as Google's WebP have promising applications, which we'll discuss in more detail later on.
- Maximizing compression. When it comes to compression, the goal is to get each image to its smallest "weight" without losing too much quality. There are two kinds of compression that can do that: "lossy" and "lossless". A lossy image will look similar to the original but with some decrease in quality, whereas a lossless image is nearly indistinguishable from the original but also heavier.
- Designing with image size in mind. If you're working with images that need to display in a variety of sizes, it's best to provide all the sizes you'll need. If your site has to resize them on the fly, that can negatively impact speeds.
- Exploring delivery networks. CDNs can be an alternative to more resource-heavy approaches for managing media files. A CDN can handle all of your image content and respond to a variety of situations to deliver the best and most optimized files.

As with any technical solution, you’ll have to weigh the pros and cons of each approach. However, it’s also worth noting that these more traditional approaches aren’t the only options you have available to you.

4 Reasons to Use Image Processing for Optimizing Your Website’s Media

As we mentioned above, CDNs are one possible way to solve image performance conundrums on your website. One example of the services a CDN can provide is found in KeyCDN’s image processing.

This particular service is a real-time image processing and delivery option. This means it can detect how a user is viewing your site, and provide the optimal image type for that use case. Let’s look at four reasons this can be a very effective feature.

1. You Can Convert Your Images to Advanced Formats

We’ve already discussed how PNG and JPEG are the most common and recommended formats for graphic and photographic elements respectively. You might not know, however, that there’s a new file format available that might be beneficial when you’re looking to boost your site’s performance.

We’re talking about WebP, which is Google’s new, modern image file format.

The WebP logo. Source: Wikimedia Commons

The WebP format can work with both lossy and lossless compression, and supports transparency. Plus, the files themselves hold a lot of potential when it comes to optimization and performance.

This is because WebP lossless files are up to 26% smaller than PNGs of equivalent quality. In fact, KeyCDN ran a study to compare just how much of an impact the WebP format can have, and found an overall 77% decrease in page size when converting from JPEG to WebP.

Consequently, KeyCDN offers conversion to WebP. This feature uses lossless compression, and the most appropriate image can then be served up to each user based on browser specifications and compatibility.

In addition to conversion, there’s also a WebP Caching feature that offers a one-click solution for existing users. Without changing anything else, KeyCDN users can easily take advantage of WebP images via this option.


JavaScript’s New Private Class Fields, and How to Use Them

Jan 10, 2020



ES6 introduced classes to JavaScript, but they can be too simplistic for complex applications. Class fields (also referred to as class properties) aim to deliver simpler constructors with private and static members. The proposal is currently at TC39 stage 3 (candidate) and is likely to be added to ES2019 (ES10). Private fields are currently supported in Node.js 12, Chrome 74, and Babel.
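As a quick taste of the syntax the proposal introduces (a minimal sketch of our own, using a hypothetical Counter class, not an example from the article): a private field is declared with a # prefix and can only be accessed from inside the class body:

```javascript
class Counter {
  // private class field: invisible outside the class body
  #count = 0;

  increment() {
    this.#count += 1;
  }

  // public getter exposing a read-only view of the private field
  get value() {
    return this.#count;
  }
}

const counter = new Counter();
counter.increment();
console.log(counter.value); // 1
console.log(counter.count); // undefined: #count is not a normal property
// counter.#count here, outside the class body, would be a SyntaxError
```

Unlike the underscore-prefix convention (this._count), the privacy is enforced by the language rather than by politeness.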

A quick recap of ES6 classes is useful before we look at how class fields are implemented.

ES6 Class Basics

JavaScript's object-oriented inheritance model can confuse developers coming from languages such as C++, C#, Java, and PHP. For this reason, ES6 introduced classes. They are primarily syntactical sugar but offer more familiar object-oriented programming concepts.

A class is an object template which defines how objects of that type behave. The following Animal class defines generic animals (classes are normally denoted with an initial capital to distinguish them from objects and other types):

class Animal {

  constructor(name = 'anonymous', legs = 4, noise = 'nothing') {
    this.type = 'animal';
    this.name = name;
    this.legs = legs;
    this.noise = noise;
  }

  speak() {
    console.log(`${this.name} says "${this.noise}"`);
  }

  walk() {
    console.log(`${this.name} walks on ${this.legs} legs`);
  }

}

Class declarations always execute in strict mode. There's no need to add 'use strict'.

The constructor method is run when an object of the Animal type is created. It typically sets initial properties and handles other initializations. speak() and walk() are instance methods which add further functionality.

An object can now be created from this class with the new keyword:

let rex = new Animal('Rex', 4, 'woof');
rex.speak(); // Rex says "woof"

rex.noise = 'growl';
rex.speak(); // Rex says "growl"

Getters and Setters

Setters are special methods used only to set values. Similarly, getters are special methods used only to return a value. For example:

class Animal {

  constructor(name = 'anonymous', legs = 4, noise = 'nothing') {
    this.type = 'animal';
    this.name = name;
    this.legs = legs;
    this.noise = noise;
  }

  speak() {
    console.log(`${this.name} says "${this.noise}"`);
  }

  walk() {
    console.log(`${this.name} walks on ${this.legs} legs`);
  }

  // setter
  set eats(food) {
    this.food = food;
  }

  // getter
  get dinner() {
    return `${this.name} eats ${this.food || 'nothing'} for dinner.`;
  }

}

let rex = new Animal('Rex', 4, 'woof');
rex.eats = 'anything';

console.log( rex.dinner ); // Rex eats anything for dinner.

Child or Sub-classes

It's often practical to use one class as the base for another. A Human class could inherit all the properties and methods from the Animal class using the extends keyword. Properties and methods can be added, removed, or changed as necessary so human object creation becomes easier and more readable:

class Human extends Animal {

  constructor(name) {
    // call the Animal constructor
    super(name, 2, 'nothing of interest');
    this.type = 'human';
  }

  // override Animal.speak
  speak(to) {
    super.speak();
    if (to) console.log(`to ${to}`);
  }

}

super refers to the parent class, so it’s usually the first call made in the constructor. In this example, the Human speak() method overrides that defined in Animal.

Object instances of Human can now be created:

let don = new Human('Don');
don.speak('anyone'); // Don says "nothing of interest" to anyone

don.eats = 'burgers';
console.log( don.dinner ); // Don eats burgers for dinner.


The Top 10 SitePoint Guides & Tutorials of 2019

Jan 9, 2020


In 2019, we published hundreds of new guides, tutorials, and articles. Whether we showed you how to use new technologies and tools, or published career advice from people at the top of their game, our aim was always to help you level up as a web developer.

Though tech moves fast, all of those articles are still relevant at the start of 2020. To celebrate the year just concluded, we wanted to look back at the 10 pieces our readers enjoyed and shared the most in 2019. Hopefully, there's something here that's useful to you going into the new year.

What Is Functional Programming?

As a programmer, you probably want to write elegant, maintainable, scalable, predictable code. The principles of functional programming, or FP, can significantly aid in these goals. Ali Spittel walks you through these principles, using JavaScript to demonstrate them.

➤ Read What Is Functional Programming?

10 Must-have VS Code Extensions for JavaScript Developers

Visual Studio Code is undoubtedly the most popular lightweight code editor today. It does borrow heavily from other popular code editors, mostly Sublime Text and Atom. However, its success mainly comes from its ability to provide better performance and stability. It also provides much-needed features like IntelliSense, which were previously only available in full-sized IDEs like Eclipse or Visual Studio 2017.

The power of VS Code no doubt comes from the marketplace. Thanks to the wonderful open-source community, the editor is now capable of supporting almost every programming language, framework, and development technology. Support for a library or framework comes in various ways, which mainly includes snippets, syntax highlighting, Emmet and IntelliSense features for that specific technology.

➤ Read 10 Must-have VS Code Extensions for JavaScript Developers

Why the Highest Paid Developers "Fight" Their Co-workers

Most employees want to keep their jobs and their clients. They don’t have the leverage or control they want over their own careers. They need their job. In fact, most people are terrified of losing their jobs.

Research shows the fear of losing your job creates job dissatisfaction and a lack of commitment at work. This, in turn, hurts job performance, increasing the likelihood that you actually will lose your job. It's a vicious cycle that seems to repeat itself over and over.

But there’s something worse than the fear of a job loss.

➤ Read Why the Highest Paid Developers "Fight" Their Co-workers

How to Tell If Vue.js Is the Right Framework for Your Next Project

Vue.js grew from a one-man project to a JavaScript framework everyone’s talking about. You’ve heard about it from your front-end colleagues and during conferences. You’ve probably read multiple comparisons between Vue, React, and Angular. And you’ve probably also noticed that Vue outranks React in terms of GitHub stars.

All of that may have you wondering whether Vue.js is the right framework for your next project. Well, let's explore the possibilities and limitations of Vue to give you a high-level look at the framework and make your decision a little easier.

➤ Read How to Tell If Vue.js Is the Right Framework for Your Next Project

JavaScript Web Workers: A Beginner's Guide

Today's mobile devices normally come with 8+ CPU cores and 12+ GPU cores. Desktop and server CPUs have 16 cores and 32 threads, or more. In this environment, having a dominant programming or scripting environment that is single-threaded is a bottleneck.

JavaScript is single-threaded. This means that by design, JavaScript engines — originally browsers — have one main thread of execution, and, to put it simply, process or function B cannot be executed until process or function A is finished. A web page’s UI is unresponsive to any other JavaScript processing while it is occupied with executing something — this is known as DOM blocking.
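To see the problem in miniature (a small sketch of our own, not taken from the original guide): a long synchronous task monopolizes the single thread, so even a zero-delay timer callback must wait until it finishes:

```javascript
// Record the order in which pieces of work actually run.
const order = [];

// Queue a callback with zero delay. It still cannot run until the
// currently executing synchronous code has completely finished.
setTimeout(() => order.push('timer callback'), 0);

// Simulate a heavy synchronous task by busy-waiting for 50 ms.
// In a browser, the page's UI would be frozen for this entire time.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* the single thread is stuck here */ }
}

order.push('sync start');
blockFor(50);
order.push('sync end');

// The timer callback has still not run at this point:
console.log(order.join(' -> ')); // sync start -> sync end
```

A web worker moves the blockFor-style computation onto a separate thread, so the main thread stays free to handle the UI and timers.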

The solution: web workers.

➤ Read JavaScript Web Workers: A Beginner's Guide

React vs Angular: An In-depth Comparison

Should I choose Angular or React? Each framework has a lot to offer and it’s not easy to choose between them. Whether you’re a newcomer trying to figure out where to start, a freelancer picking a framework for your next project, or an enterprise-grade architect planning a strategic vision for your company, you’re likely to benefit from having an educated view on this topic.

➤ Read React vs Angular: An In-depth Comparison

Fetching Data from a Third-party API with Vue.js and Axios

More often than not, when building your JavaScript application, you’ll want to fetch data from a remote source or consume an API. I recently looked into some publicly available APIs and found that there’s lots of cool stuff that can be done with data from these sources.

With Vue.js, you can literally build an app around one of these services and start serving content to users in minutes.

I'll demonstrate how to build a simple news app that shows the top news articles of the day and allows users to filter by their category of interest, fetching data from the New York Times API.

➤ Read Fetching Data from a Third-party API with Vue.js and Axios

How to Install Docker on Windows 10 Home

If you’ve ever tried to install Docker for Windows, you’ve probably come to realize that the installer won’t run on Windows 10 Home. Only Windows Pro, Enterprise or Education support Docker. Upgrading your Windows license is pricey, and also pointless since you can still run Linux Containers on Windows without relying on Hyper-V technology, a requirement for Docker for Windows.

In this tutorial, I'll show you how to quickly set up a Linux VM on Windows Home running Docker Engine with the help of Docker Machine.

➤ Read How to Install Docker on Windows 10 Home

How to Use Windows Subsystem for Linux 2 and Windows Terminal

In this article, you'll learn how you can use Windows Subsystem for Linux 2 to set up and run a local Linux shell interface in Windows without using a virtual machine. This is not like using terminals such as Git Bash or cmder, which have a subset of UNIX tools added to $PATH. This is actually like running a full Linux kernel on Windows that can execute native Linux applications. That's pretty awesome, isn't it?

➤ Read How to Use Windows Subsystem for Linux 2 and Windows Terminal

How to Migrate to Gulp.js 4.0

Despite competition from webpack and Parcel, Gulp.js remains one of the most popular JavaScript task runners. Gulp.js is configured using code which makes it a versatile, general-purpose option. As well as the usual transpiling, bundling and live reloading, Gulp.js could analyze a database, render a static site, push a Git commit, and post a Slack message with a single command.

➤ Read How to Migrate to Gulp.js 4.0

Happy New Year from SitePoint

We hope you all had a restful break and have come back recharged and ready to tackle your goals for this new year. We'll continue to collaborate with working developers to help you improve your skills this year, and we'll explore new areas that we hope you'll find both useful and exciting. And we'll continue our work on leveling SitePoint Premium up into a next-generation learning platform and comprehensive reference library. Happy New Year from SitePoint!


How to Edit Source Files Directly in Chrome

Jan 7, 2020



A web developer's typical day involves creating HTML web pages with associated CSS and JavaScript in their favorite editor. The workflow:

1. Open the locally hosted page in a browser.
2. Swear.
3. Open DevTools to investigate the layout and functionality problems.
4. Tweak the HTML elements, CSS properties, and JavaScript code to fix the issues.
5. Copy those changes back into the editor and return to step #1.

While tools such as live reloading have made this process easier, many developers continue to tweak code in both DevTools and their editor.

However, it's possible to open and edit source files directly in Chrome. Any changes you make are saved to the file system and updated within the editor (presuming it refreshes when file changes occur).

Step 1: Launch Developer Tools

Open Chrome, load a page from your local file system/server and open Developer Tools from the More tools menu or press F12 or Ctrl/Cmd + Shift + I depending on your system. Navigate to the Sources tab to examine the file explorer:

Chrome DevTools Sources

You can open and edit CSS and JavaScript files in this view, but any changes will be lost as soon as you refresh the page.

Step 2: Associate a Folder with the Workspace

Click the Filesystem tab, then click + Add folder to workspace. You’ll be prompted to locate your work folder and Chrome will ask you to confirm that you Allow access. The explorer shows files on your system which can be opened with a single click:

Chrome DevTools file system


How to Create Printer-friendly Pages with CSS

Jan 6, 2020



In this article, we review the art of creating printer-friendly web pages with CSS.

"Who prints web pages?" I hear you cry! Relatively few pages will ever be reproduced on paper. But consider:

- printing travel or concert tickets
- reproducing route directions or timetables
- saving a copy for offline reading
- accessing information in an area with poor connectivity
- using data in dangerous or dirty conditions (for example, a kitchen or factory)
- outputting draft content for written annotations
- printing web receipts for bookkeeping purposes
- providing documents to those with disabilities who find it difficult to use a screen
- printing a page for your colleague who refuses to use this newfangled t'internet nonsense.

Unfortunately, printing pages can be a frustrating experience:

- text can be too small, too large, or too faint
- columns can be too narrow, too wide, or overflow page margins
- sections may be cropped or disappear entirely
- ink is wasted on unnecessary colored backgrounds and images
- link URLs can't be seen
- icons, menus, and advertisements are printed which could never be clicked!

Many developers advocate web accessibility, yet few remember to make the printed web accessible!

Converting responsive, continuous media to paged paper of any size and orientation can be challenging. However, CSS print control has been possible for many years, and a basic style sheet can be completed within hours. The following sections describe well-supported and practical options for making your pages printer-friendly.

Print Style Sheets

Print CSS can either be:

- Applied in addition to screen styling. Taking your screen styles as a base, the printer styles override those defaults as necessary.
- Applied as separate styles. The screen and print styles are entirely separate; both start from the browser's default styles.

The choice will depend on your site/app. Personally, I use screen styles as a print base most of the time. However, I have used separate style sheets for applications with radically different outputs — such as a conference session booking system which displayed a timetable grid on-screen but a chronological schedule on paper.

A print style sheet can be added to the HTML <head> after the standard style sheet:

<link rel="stylesheet" href="main.css" />
<link rel="stylesheet" media="print" href="print.css" />

The print.css styles will be applied in addition to screen styles when the page is printed.

To separate screen and print media, main.css should target the screen only:

<link rel="stylesheet" media="screen" href="main.css" />
<link rel="stylesheet" media="print" href="print.css" />

Alternatively, print styles can be included within an existing CSS file using @media rules. For example:

/* main.css */
body {
  margin: 2em;
  color: #fff;
  background-color: #000;
}

/* override styles when printing */
@media print {

  body {
    margin: 0;
    color: #000;
    background-color: #fff;
  }

}

Any number of @media print rules can be added, so this may be practical for keeping associated styles together. Screen and print rules can also be separated if necessary:

/* main.css */

/* on-screen styles */
@media screen {

  body {
    margin: 2em;
    color: #fff;
    background-color: #000;
  }

}

/* print styles */
@media print {

  body {
    margin: 0;
    color: #000;
    background-color: #fff;
  }

}

Testing Printer Output

It's not necessary to kill trees and use outrageously expensive ink every time you want to test your print layout! The following options replicate print styles on-screen.

Print Preview

The most reliable option is the print preview option in your browser. This shows how page breaks will be handled using your default paper size.

Alternatively, you may be able to save or preview the page by exporting to a PDF.

Developer Tools

The DevTools (F12 or Cmd/Ctrl + Shift + I) can emulate print styles, although page breaks won't be shown.

In Chrome, open the Developer Tools and select More Tools, then Rendering from the three-dot icon menu at the top right. Change the Emulate CSS Media to print at the bottom of that panel.

In Firefox, open the Developer Tools and click the Toggle print media simulation icon on the Inspector tab's style pane:

Firefox print preview mode

Hack Your Media Attribute

Presuming you're using a <link> tag to load printer CSS, you could temporarily change the media attribute to screen:

<link rel="stylesheet" href="main.css" />
<link rel="stylesheet" media="screen" href="print.css" />

Again, this won't reveal page breaks. Don't forget to restore the attribute to media="print" once you finish testing.

Remove Unnecessary Sections

Before doing anything else, remove and collapse redundant content with display: none;. Typical unnecessary sections on paper could include navigation menus, hero images, headers, footers, forms, sidebars, social media widgets, and advertising blocks (usually anything in an iframe):

/* print.css */
header, footer, aside, nav, form, iframe, .menu, .hero, .adslot {
  display: none;
}

It may be necessary to use display: none !important; if CSS or JavaScript functionality is showing elements according to particular UI states. Using !important isn't normally recommended, but we can justify its use in a basic set of printer styles which override screen defaults.

Linearize the Layout

It pains me to say this, but Flexbox and Grid rarely play nicely with printer layouts in any browser. If you encounter issues, consider using display: block; or similar on layout boxes and adjust dimensions as necessary. For example, set width: 100%; to span the full page width.

Printer Styling

Printer-friendly styling can now be applied. Recommendations:

- ensure you use dark text on a white background
- consider using a serif font, which may be easier to read
- adjust the text size to 12pt or higher
- modify paddings and margins where necessary. Standard cm, mm, or in units may be more practical.

Further suggestions include …


How to Quickly and Easily Remove a Background in Photoshop

Dec 19, 2019



This article on how to remove a background in Photoshop remains one of our most popular posts and was updated in 2019 for Adobe Photoshop 2020.

Photoshop offers many different techniques for removing a background from an image. For simple backgrounds, using the standard magic wand tool to select and delete the background may well be more than adequate. For more complicated backgrounds, you might use the Background Eraser tool.

The Background Eraser Tool

The Background Eraser tool samples the color at the center of the brush and then deletes pixels of a similar color as you "paint". The tool isn’t too difficult to get the hang of. Let me show you how it works.

Remove a Background, Step 1: Open your Image

Start by grabbing an image that you want to remove the background from. I'll be using the image below, as it features areas that range from easy removal through to more challenging spots. I snagged this one for free from Unsplash.

The example image: man standing against lattice background

Now let's open it in Photoshop.

The example image opened in Photoshop

Remove a Background, Step 2: Select Background Eraser

Select the Background Eraser tool from the Photoshop toolbox. It may be hidden beneath the Eraser tool. If it is, simply click and hold the Eraser tool to reveal it. Alternatively, you can press Shift + E to cycle through all the eraser tools to get to the Background Eraser. If you had the default Eraser tool selected, press Shift + E twice to select the Background Eraser Tool.

choosing the background eraser tool

Remove a Background, Step 3: Tune Your Tool Settings

On the tool options bar at the top of the screen, select a round, hard brush. The most appropriate brush size will vary depending on the image you're working on. Use the square bracket keys ([ and ]) to quickly scale your brush size.

selecting a brush

Alternatively, you can right-click anywhere on the artboard to change the size and hardness of your brush.

alternative way to change brush size

Next, on the tool options bar, make sure Sampling is set to Continuous (it's the first of the three icons), Limits is set to Find Edges, and Tolerance is in the range of 20-25%.

sampling, limits and tolerance

Note: a lower tolerance means the eraser will pick up on fewer color variations, while a higher tolerance expands the range of colors your eraser will select.

Remove a Background, Step 4: Begin Erasing

Bring your brush over your background and begin to erase. You should see a brush-sized circle with small crosshairs in the center. The crosshairs show the "hotspot": the tool deletes that color wherever it appears inside the brush area. It also performs smart color extraction at the edges of any foreground objects to remove "color halos" that might otherwise be visible if the foreground object is overlaid onto another background.

beginning the process

When erasing, zoom in on your work area and try to keep the crosshairs from overlapping the edge of your foreground. It's likely that you'll need to reduce the size of the brush in some places to ensure that you don't accidentally erase part of your foreground subject.


5 Signs It’s Time to Quit Your Job

Dec 16, 2019


"Jerry wouldn't let me go to the emergency room."

Jenny010137 recounted her story on Reddit. She had a major health crisis, but Jerry, her boss, wasn't buying it.

Jerry wouldn't let me go to the emergency room after the heavy vaginal bleeding I had been experiencing suddenly got way worse. I went over his head and got permission to go. I called my mom, told her to meet me in the ER. The ER nurse said he'd never seen so much blood. An ER nurse said this. It's determined I need a couple of blood transfusions and will be admitted.

Jenny's mom calls Jerry on her behalf.

My mom calls Jerry, who then proceeds to tell her that it's just stress, and I NEED TO GET BACK TO WORK. At this point, I couldn't even lift my own head up, but sure, I can take a bus across town and go back to work.

Doctors told Jenny they found a large growth that needed a biopsy.

They found a large growth that needed a biopsy. Jerry kept insisting that it couldn't be cancer because I'd be tired and losing weight. I had lost eight pounds in a week and went to bed the minute I got home. I was still recovering from the procedure when Jerry called me to let me know I was fired for taking too much time off. Five days later, I was diagnosed with cancer. Fuck you, Jerry. Fuck you.

Think about that for a second.

Jenny is losing blood rapidly. There's a good chance she's dying. Her boss can't be bothered to verify that she's okay. While she's in the hospital fighting for her life, he fires her for taking "too much time off."

This situation is obviously one to walk away from.

But it's not always so clear cut.

Sometimes you're in a situation where there are both positive and negative aspects of the job. With situations like these, the decision isn't always as obvious as we'd like it to be. Walk away from a promising position prematurely and you may burn bridges and destroy any goodwill you've built up.

What's the best way to know?

If you focus on the signs, you may be right, but too much uncertainty means you may handle things in a way that's less than ideal.

There's a better way.

Focus your attention on the right set of principles and you'll have the framework you need to decide when it's time to quit your job (or not). Let's take a look at these principles.

Principle #1: Your Job Violates Your Boundaries

Art Markman, professor of psychology at the University of Texas at Austin, shared a story relayed to him by a reader.

My mother suddenly passed away on a Friday evening. On the Sunday my boss showed up to my house with groceries and flowers and suggested that I go into the office on Monday for the quarterly meeting. After all, "this was a pivotal time" for the business.

I didn't go in the next day because of my overwhelming grief. I later found out that I was to receive an award on that Monday. Was this a career-limiting move, or is my boss not clear on boundaries?

This boss meant well, but his concern was self-serving and not at all in the best interests of his employee. What's worse, he may not have understood why it was a problem if his employee spoke to him about it later on.

This is why you need boundaries.

Boundaries act as gatekeepers in a variety of professional, emotional, social, and physical situations. Here's why you need boundaries and why they're so important:

- They protect you from abusive or toxic behavior (for example, managers or co-workers making inappropriate demands, verbal abuse, inappropriate conversation, or requests that are immoral or infringe on your values).
- Boundaries define how others can or should communicate with you.
- Good boundaries protect you from sacrificing your autonomy, freedom of choice, family, identity, integrity or contacts.
- Great boundaries attract more of the people, projects and opportunities you want. When set up appropriately, these boundaries repel the items you don't want.

How do you set great boundaries?

It's a simple process. First, determine what you do and don't want. Next, figure out what your employer wants or doesn't want.

Sounds simple, right?

Figuring out what you want is really about asking the right question (see above). Figuring out what your employer wants is really about identifying criteria that are documented in some way. That's important, because it gives you the leverage you need to protect yourself (legally) against any inappropriate behavior.

But setting boundaries is risky.

Consider this common idea: Tell your boss No and you could get fired (or worse). If developers are smart, they'll avoid biting the hand that feeds them.

This rationale is trash.

If you set a boundary, it will be tested. Those around you — your manager, co-workers, other developers — will attempt to back you into a corner. You're going to have to find appropriate ways to rise to the challenge and enforce your boundaries.

Why go to the trouble? Because boundaries limit the damage from the other four principles discussed in this article. If you don't have strong boundaries, you'll face the problems discussed here. It doesn't matter if you're employed or you own your own business.

If you have poor boundaries, you won't be able to achieve your goals.

Principle #2: Your Job Goes Against Your Goals

Reddit user YellowRoses had goals until their boss torpedoed those goals.

("How do you deal with feeling disrespected by your boss?" from r/careerguidance)

They were promised a promotion. They negotiated with their boss and earned a verbal agreement regarding their promotion, only for said promotion to be denied with an "Oh, that's not happening now." No explanation or attempts at justifying the rescinded promise.

What if your employer isn't aware of your goals? Still doesn't matter. If you have a specific goal in mind, you're responsible for that goal. Not your co-workers, employer, or family members. Are you pushing for the director's position that's opened up? Prefer to stay in your current role but receive the same pay as managers? It's on you.

This seems obvious, until you realize most people wait to be chosen. They wait for someone to approve of their audition, accept them, recruit them, promote them, extend a helping hand, etc. Which goes nowhere fast.

To be clear, it's generally a good idea to discuss your goals with your employer, provided that you're in a good place to do so. If your employer laughs at you, mocks your goal, or decides they're unwilling to help you meet said goals, it's on you to make it happen.

The post 5 Signs It’s Time to Quit Your Job appeared first on SitePoint.

The Evolution of JavaScript Tooling: A Modern Developer’s Guide

Dec 12, 2019


The Evolution of JavaScript Tooling: A Modern Developer’s Guide

This article was created in partnership with Sencha. Thank you for supporting the partners who make SitePoint possible.

JavaScript application source code has traditionally been hard to understand, due to code being spread across JavaScript, HTML, and CSS files, as well as events and data flowing through a number of non-intuitive paths. Like all software, the JavaScript development environment includes bundlers, package managers, version control systems, and test tools. Each of these has its own learning curve.

Inconsistencies and incompatibilities between browsers have historically required various tweaks and special cases to be sprinkled around the code, and very often fixing a bug in one browser breaks something on another browser. As a result, development teams struggle to create and maintain high quality, large-scale applications while the demand for what they do soars, especially at the enterprise-application level where business impact has replaced “How many lines of code have you laid down?”

To deal with this complexity, the open-source community as well as commercial companies have created various frameworks and libraries, but these frameworks and libraries have become ever more complicated as they add more and more features in an attempt to make it easier for the developer. Still, frameworks and libraries offer significant advantages to developers and can also organize and even reduce complexity.

This guide discusses some of the more popular frameworks and libraries that have been created to ease the burden of writing complex user interface (UI) code and how enterprise applications, especially data-intensive apps, can benefit from using these frameworks and UI components to deliver applications faster, with better quality, and yet stay within any development shop’s budget.

Complexity of Modern Web Development

Andrew S. Tanenbaum, the inventor of Minix (a precursor to Linux often used to bring up new computer chips and systems), once said, “The nice thing about standards is that you have so many to choose from.” Browsers followed a number of standards, but not all of them, and many just went their own way.

That’s where the trouble started — the so-called “Browser Wars.” How each browser displayed the data from these websites could be quite different. Browser incompatibilities still exist today, and one could say they are a little worse because the Web has gone mobile.

Developing in today’s world means being as compatible as possible with as many of the popular web browsers as possible, including mobile and tablet.

What about mobile?

Learning Java for Android development can be difficult if the developer hasn’t been brought up with Java. For Apple iOS, Objective-C is a mashup of the C programming language and Smalltalk, which is different but not entirely alien to C++ developers. (After all, the object-oriented concepts are similar.) But given the arrival of Apple’s Swift and a new paradigm, “protocol-oriented programming,” Objective-C has a questionable future.

In contrast, the JavaScript world, through techniques such as React Native or Progressive Web Apps, allows for development of cross-platform apps that look like native apps and are performant. From a business perspective, an enterprise can gain a number of advantages by only using one tool set to build sophisticated web and mobile apps.

Constant change causes consternation

The JavaScript world is particularly rich in how much functionality and how many packages are available. The number is staggering. The number of key technologies that help developers create applications faster is also large, but the rate of change in this field causes what’s called “JavaScript churn,” or just churn. For example, when Angular moved from version 1 to 2 (and again from 2 to 4; version 3 was skipped), the incompatibilities required serious porting time. Until we embrace emerging Web Components standards, not everything will interoperate with everything else.

One thing that can be said is that investing in old technologies not backed by standards can be career-limiting, thus the importance of ECMA and ECMAScript standards as well as adherence to more or less common design patterns (most programming is still, even to this day, maintenance of existing code rather than fresh new starts and architectures). Using commonly used design patterns like Model-View-Controller (MVC), Model-View-Viewmodel (MVVM), and Flux means that your code can be modified and maintained more easily than if you invent an entirely new paradigm.

Having large ecosystems and using popular, robust, well-supported tools is one strategy proven year after year to yield positive results for the company and the developer’s career, and having industry-common or industry-standard libraries means that you can find teammates to help with the development and testing. Modern development methodologies practically demand the use of frameworks, reusable libraries, and well-designed APIs and components.

Popularity of Modern Frameworks and Libraries

Stack Overflow, an incredibly popular developer website used for questions and answers (#57 according to Alexa as of January 2019), tracks a great deal of data on the popularity of various technologies and has become a go-to source for developers. Their most recent survey continued to show the incredible popularity of both JavaScript and JavaScript libraries and frameworks:

NPM Downloads of Popular Front-end Libraries. (Source)

According to Stack Overflow, based on the type of tags assigned to questions, the top eight most discussed topics on the site are JavaScript, Java, C#, PHP, Android, Python, jQuery and HTML — not C, C++, or more exotic languages like Ocaml or Haskell. If you’re building websites, you’re very likely going to want to use technologies that are popular because the number of open-source and commercial/supported products provides you with the ability to code and test more quickly, resulting in faster time to market.

What this means to developers is that the JavaScript world continues to lead all others in the number of developers, and while older technologies like jQuery are still popular, clearly React and Angular are important and continue growing. The newcomer, Vue, is also becoming more and more popular.

Selecting Angular, React, or Vue

Angular versus React versus Vue — there are so many open-source tools. Add to that libraries like Backbone.js and a hundred others. How can developers keep their knowledge of so many up to date? Which one should they choose? To some extent this decision is like choosing text editors: it’s a personal choice, it’s fiercely defended, and in the end each might actually work for you.

If your main concern is popularity so you don’t get boxed into learning a complicated, rich programming environment only to see support wither away, then React is clearly “winning” as the long-term trend line shows. But popularity is only one attribute in a long shopping list of important decision factors.

Long-term trend lines of various popular frameworks and libraries. (Source)

The post The Evolution of JavaScript Tooling: A Modern Developer’s Guide appeared first on SitePoint.

Understanding and Using rem Units in CSS

Dec 12, 2019


CSS units have been the subject of several articles here on SitePoint (such as A Look at Length Units in CSS, The New CSS3 Relative Font Sizing Units, and The Power of em Units in CSS). In this article, we increase the count by having an in-depth look at rem units, which have excellent browser support and a polyfill if you need support for old IE.

This article was updated in December, 2019 to reflect the current state of rem unit sizing with CSS. For more on CSS font and text properties, read our book, CSS Master, 2nd Edition.

What Are rem Units?

You might have encountered the term “R.E.M.” before while listening to the radio or your music player. Unlike their musical counterparts, named for the “Rapid Eye Movement” during deep sleep, in CSS rem stands for “root em”. They won’t make you lose your religion nor believe in a man on the moon. What they can do is help you achieve a harmonious and balanced design.

According to the W3C spec the definition for one rem unit is:

Equal to the computed value of font-size on the root element. When specified on the font-size property of the root element, the rem units refer to the property’s initial value.

This means that 1rem equals the font size of the html element (which for most browsers has a default value of 16px).

Rem Units vs. Em Units

The main problem with em units is that they are relative to the font size of their own element. As such they can cascade and cause unexpected results. Let’s consider the following example, where we want lists to have a font size of 12px, in the case where the root font size is the default 16px:

[code language="css"]
html {
  font-size: 100%;
}

ul {
  font-size: 0.75em;
}
If we have a list nested inside another list, the font size of the inner list will be 75% of the size of its parent (in this case 9px). We can still overcome this problem by using something along these lines:

[code language="css"]
ul ul {
  font-size: 1em;
}

This does the trick; however, we still have to pay close attention to situations where the nesting gets even deeper.
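The compounding described above can be checked with quick arithmetic. This is a throwaway sketch, not part of the original article:

```javascript
// Each nested <ul> multiplies its parent's computed font size by 0.75,
// so em-based sizes shrink at every level of nesting.
const root = 16; // default browser font size in px

const emSize = depth => root * 0.75 ** depth;

console.log(emSize(1)); // 12 -- outer list
console.log(emSize(2)); // 9  -- list nested one level deeper

// With rem, every list resolves against the root instead, so the
// computed size stays at 16 * 0.75 = 12px no matter how deep it nests.
```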

With rem units, things are simpler:

[code language="css"]
html {
  font-size: 100%;
}

ul {
  font-size: 0.75rem;
}

As all the sizes are referenced from the root font size, there is no more need to cover the nesting cases in separate declarations.

Font Sizing with Rem Units

One of the pioneers of using rem units for font sizing is Jonathan Snook with his Font sizing with REM article, back in May, 2011. Like many other CSS developers, he had to face the problems that em units bring in complex layouts.

At that time, older versions of IE still had large market shares and they were unable to zoom text that was sized with pixels. However, as we saw earlier, it is very easy to lose track of nesting and get unexpected results with em units.

The main issue with using rem for font sizing is that the values are somewhat difficult to use. Let’s see an example of some common font sizes expressed in rem units, assuming, of course, that the base size is 16px:

10px = 0.625rem
12px = 0.75rem
14px = 0.875rem
16px = 1rem (base)
18px = 1.125rem
20px = 1.25rem
24px = 1.5rem
30px = 1.875rem
32px = 2rem
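Each value is just the pixel size divided by the 16px base. A tiny helper (hypothetical, for illustration only) shows where the numbers come from:

```javascript
// Convert a pixel value to rem, given the root font size in px.
function pxToRem(px, base = 16) {
  return px / base;
}

console.log(pxToRem(24)); // 1.5
console.log(pxToRem(10)); // 0.625

// If the root font size were 10px instead, the rem values would map
// neatly onto their px counterparts (14px -> 1.4rem):
console.log(pxToRem(14, 10)); // 1.4
```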

As we can see, these values are not very convenient for making calculations. For this reason, Snook used a trick called “62.5%“. It was not a new discovery, by any means, as it was already used with em units:

[code language="css"]
body { font-size:62.5%; } /* =10px */
h1 { font-size: 2.4em; } /* =24px */
p { font-size: 1.4em; } /* =14px */
li { font-size: 1.4em; } /* =14px? */

As rem units are relative to the root element, Snook’s variant of the solution becomes:

[code language="css"]
html { font-size: 62.5%; } /* =10px */
body { font-size: 1.4rem; } /* =14px */
h1 { font-size: 2.4rem; } /* =24px */

One also had to take into account the other browsers that didn’t support rem. Thus the code from above would have actually been written this way:

[code language="css"]
html {
  font-size: 62.5%;
}

body {
  font-size: 14px;
  font-size: 1.4rem;
}

h1 {
  font-size: 24px;
  font-size: 2.4rem;
}

While this solution seems to be close to the status of a “golden rule”, there are people who advise against using it blindly. Harry Roberts has written his own take on the use of rem units. In his opinion, while the 62.5% solution makes calculation easier (as the font sizes in px are 10 times their rem values), it ends up forcing developers to explicitly rewrite all the font sizes in their website.

The post Understanding and Using rem Units in CSS appeared first on SitePoint.

How We Can Solve the Cryptocurrency Energy Usage Problem

Dec 10, 2019


Cryptocurrencies and Energy Usage: Problems and Solutions

Bitcoin is still the most important cryptocurrency people know about, and it serves as the entry point of the crypto space. However, every innovative project has to pay a price. For Bitcoin, that price is the high carbon footprint created by mining.

Bitcoin mining works by solving cryptographic puzzles, an approach referred to as Proof of Work (PoW). The miner that’s first to find the solution receives a Bitcoin reward. However, this race towards finding the solution comes with high energy usage, as it’s a resource-intensive process requiring a lot of electricity.

Currently, Bitcoin mining uses 58.93 TWh per year. An online tool by the University of Cambridge showed that Bitcoin uses as much energy as the whole of Switzerland. More important is the carbon footprint of Bitcoin. The electricity generated for powering the Bitcoin network equals 22 megatons of CO2 on a yearly basis. You can compare this carbon footprint with the footprint of a city like Kansas City (US).

This article will cover the following topics:

- how the amount of energy consumed by each blockchain project differs depending on the implemented consensus algorithm
- possible solutions for the high energy usage of Bitcoin
- the effect of the Bitcoin network using a lot of excess and green energy.

To get started, let’s discuss whether Bitcoin’s energy usage really is a problem.

Are We Thinking the Wrong Way about Bitcoin’s Energy Usage?

Let’s take a moment to think about where the energy for Bitcoin mining comes from. It’s worth questioning whether the electricity the Bitcoin nodes use actually harms the environment.

Many countries have an excess of electricity, especially when it comes to green energy solutions. The energy coming from green solutions like wind farms or solar plants is often hard to store or sell when the supply outweighs demand. This is true for many countries, especially China, which is responsible for 70 percent of the world’s Bitcoin mining.

As Bitcoin mining requires a lot of energy, node operators look for countries with cheap electricity prices. Reuters reported that “wasted [Chinese] wind power amounted to around 12 percent of total generation in 2017”. This means that node operators often end up in countries with an excess of energy. In those countries, Bitcoin mining plays an important role in balancing the energy market. Without Bitcoin mining, this excess electricity would otherwise be wasted.

Is it safe to say that Bitcoin does not contribute to environmental CO2 production? No, it certainly does contribute. However, the energy usage and CO2 pollution Bitcoin is responsible for are much lower than commonly assumed.

Think about making a credit card payment. Every time you pull out your credit card to make a transaction, you also contribute to environmental pollution. You just don't see the gigantic server farms of up to 100,000 square feet that store and process all your transactions, not to mention other things like offices, payment terminals, or bank vaults.

It’s easy to attack Bitcoin for its energy usage, so it’s important to know that there is also an enormous hidden energy cost behind the VISA network. Bear in mind, though, that the Bitcoin network processes only about 100 million transactions per year, whereas the financial industry handles up to 500 billion transactions per year.
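A back-of-envelope calculation with the figures quoted above shows why per-transaction comparisons are so unflattering for Bitcoin (on-chain transactions only; the inputs are the article's figures, not independently verified):

```javascript
// Bitcoin's quoted annual energy use, spread across its quoted
// annual on-chain transaction count.
const twhPerYear = 58.93;            // TWh consumed by the network
const kwhPerYear = twhPerYear * 1e9; // 1 TWh = 1e9 kWh
const txPerYear = 100e6;             // ~100 million transactions

console.log(kwhPerYear / txPerYear); // roughly 589 kWh per transaction
```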

The post How We Can Solve the Cryptocurrency Energy Usage Problem appeared first on SitePoint.

It’s Time to Start Making Your Web Apps Reactive

Dec 10, 2019


It's Time to Start Making Your Web Apps Reactive

This article was created in partnership with Manning Publications. Thank you for supporting the partners who make SitePoint possible.

You’ve heard of the principle of “survival of the fittest”, and you know that it’s especially true in web development. Your users expect split-second performance and bug-free interfaces — and if you can’t deliver them, you can be sure they’ll go straight to a competitor who can. But when it comes to survival, it’s important to remember the full principle of evolution: the best way to thrive is to be adaptable to change.

That’s where reactive programming comes in. Reactive applications are designed from the ground up to adapt to their environments: right from the start, you’re building something made to react to load, react to failure, and react to your users. Whatever production throws at your application, reactive programming means it can handle it.

How does reactive programming achieve this? It embeds sound programming principles into your application right from the very beginning.

Reactive Applications Are Message-driven

In reactive programming, data is pushed, not pulled. Rather than requesting data that may or may not be available, client recipients await the arrival of messages, which are sent only when data is ready. The designs of sender and recipient aren’t affected by how you propagate your messages, so you can design your system in isolation without needing to worry about how messages are transmitted. This also means that data recipients consume resources only while they’re active, rather than bogging down your application with requests for unavailable data.
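A minimal sketch of that push model (illustrative only, not a production event system): consumers register interest once, then sit idle until a producer publishes.

```javascript
// Minimal push-based message bus: subscribers hold no requests open and
// do no polling; they simply receive data when a producer publishes it.
class MessageBus {
  constructor() {
    this.handlers = {};
  }
  subscribe(topic, handler) {
    if (!this.handlers[topic]) this.handlers[topic] = [];
    this.handlers[topic].push(handler);
  }
  publish(topic, payload) {
    (this.handlers[topic] || []).forEach(handler => handler(payload));
  }
}

const bus = new MessageBus();
bus.subscribe('price', p => console.log('got price', p));
bus.publish('price', 42); // the handler runs only now, when data is ready
```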

Reactive Applications Are Elastic

Reactive applications are designed to elastically scale up or scale down, based on the amount of workload they have to deal with. A reactive system can both increase or decrease the resources it gives to its inputs, working without bottlenecks or contention points to more easily shard components and then distribute resources among them. Not only does this save you money on unused computing power, but even more importantly, it means that your application can easily service spikes in user activity.

Reactive Applications Are Responsive

Reactive applications must react to their users, and to their users' behavior. It’s essential that the system responds in a timely manner, not only for improved user experience, but so that problems can be quickly identified and (hopefully!) solved. With rapid response times and a consistent quality of service, you’ll find that your application has simpler error handling as well as much greater user confidence.

Reactive Applications Are Resilient

Reactive applications need to respond, adapt, and be flexible in the face of failure. Because a system can fail at any time, reactive applications are designed to boost resiliency through distribution. If there's a single point of failure, it’s just that — singular. The rest of your reactive application keeps running, because it’s been built to work without relying on any one part.

Further Resources

Reactive programming can be challenging to master. Fortunately, there are lots of resources to help you out. Some of the best are the books and videos of Manning Publications, publishers of the highest quality tech books and videos you can buy today.

Exploring Modern Web Development is a 100% free guide to the most common tools for reactive programming. With this well-rounded sampler, you’ll have a firm foundation for developing awesome web apps with all the modern reactive features and functions today’s users expect.

SitePoint users can get 40% off top Manning reactive programming and web development books and videos with the coupon code NLSITEPOINT40. Check out popular bestsellers here.

The post It’s Time to Start Making Your Web Apps Reactive appeared first on SitePoint.

The Real Future of Remote Work is Asynchronous

Dec 5, 2019


I’ve been working remotely for over a decade – well before the days of tools like Slack or Zoom. In some ways, it was easier back then: you worked from wherever you were and had the space to manage your workload however you wanted. If you desired to go hardcore creative mode at night, sleep in, then leisurely read fiction over brunch, you could.

Now, in the age of the “green dot” or “presence prison,” as Jason Fried calls it, working remotely can be more suffocating than in-person work. The freedom that we worked hard to create — escaping the 9-to-5 — has now turned into constant monitoring, with the expectation that we are on, accessible, productive, and communicative 24/7.

I see this in job listings for remote roles. Companies frequently champion remote work, proudly advertising their flexible cultures, only to then require that candidates be based within 60 minutes of the Pacific Time Zone, that the hours are set, and that standup is at 8:30am daily. One of the benefits of remote work is that it brings the world closer together and creates a level playing field for the world’s best talent. Whether you were in Bengaluru or Berlin, you could still work with a VC-backed, cash-rich startup in San Francisco earning a solid hourly rate. If remote slowly turns into a way of working in real time with frequent face time, we will see less of this.

And let’s not forget trust: the crux of remote culture. Some companies create tools that automatically record your screen at intervals to show management or clients you’re delivering. I founded a freelance marketplace called CloudPeeps, and not recording screens, as Upwork does, is one way we attract a different caliber of indie professional.

You can have more freedom in an office. From my beige cubicle at one of my first roles, I witnessed a colleague plan a wedding over the course of many months, including numerous calls to vendors and 20 tabs open for research. Most of the team was none the wiser – this wouldn’t be the case with remote today.

At the heart of this friction is the demand for real-time, synchronous communication. If we champion asynchronous as the heart of remote, what does the future of remote look like?

The post The Real Future of Remote Work is Asynchronous appeared first on SitePoint.

7 Ways Developers Can Contribute to Climate Action

Dec 4, 2019


7 Ways Developers Can Contribute to Climate Action

Whether you’ve just started out as a software engineer or you’ve been at it for decades, you too can play a role in helping to positively impact climate.

When people first consider this, they tend to think about the impact writing efficient code will have. Of course, you should always write efficient, elegant code. But unless the code you’re creating is going to be used by millions of people, it may not be where you can have the biggest impact from a climate perspective. (Code being used by millions or billions of people is probably highly optimized anyway!)

In this article, we'll look at seven other ways you can help.

Choose Where You Spend Your Career

Being an engineer means you have one of the most sought after, transferable occupations on the planet. In virtually any city in the world, you'll be in demand and probably well paid, so you have plenty of options. Choosing to work in a place that's at the intersection of your cares and your code is one of the easiest ways you can have an impact. Engineering is also one of the few careers where the job can be done remotely, and there's a growing list of companies focused on hiring people to work remotely.

Find Time to Contribute to Open-source Projects

Open source enables us all to benefit from a collective effort and shared knowledge, so the benefits are already clear. But what you may not be aware of is the mass of open-source projects specifically targeted at helping the environment. Open source also powers some of the biggest sites on the Internet, so you may also find your code being used at that billions-of-people scale mentioned earlier. While it's easy to find projects you can work on via a quick Google search, this article highlights a few.

Apply Your Skills to Non-profits

A lot of the work being done to combat or deal with the impacts of climate change is being done by the non-profit sector, and the non-profit sector chronically lacks both capital and talent. When people think of volunteering, they tend to think of painting a shed or handing out food at a shelter, but you can potentially create a bigger and more lasting impact by applying your skills and experience.

I worked with a non-profit to help design, set up and configure Salesforce's (free for nonprofits) service, so they could run more efficiently and at a higher scale. Hour for hour this was the best way I could help them to have a bigger impact.

Influence the Way the Product is Designed

With the rise of agile, squads (pioneered by Spotify) and cross-functional teams generally, the dynamic within the team has changed. Engineers now have a seat at the table to drive what the software does, how it works and even the end-customer problems it solves. This means that, as an engineer, you can either walk into the room and be told what's being built, or you can stand up and help drive the outcome by considering the climate impact of a design decision. A great example might be setting default shipping options to a lower-impact choice on an ecommerce site, or Google Maps defaulting to walking directions instead of driving.

The post 7 Ways Developers Can Contribute to Climate Action appeared first on SitePoint.

How to Divert Traffic Using IP2Location in a Next.js Website

Dec 4, 2019


How to Divert Traffic Using IP2Location in a Next.js Website

This article was created in partnership with IP2Location. Thank you for supporting the partners who make SitePoint possible.

In a world where online commerce has become the norm, we need to build websites that are faster, more user-friendly and more secure than ever. In this article, you’ll learn how to set up a Node.js-powered website that’s capable of directing traffic to relevant landing pages based on a visitor's country. You'll also learn how to block anonymous traffic (e.g. Tor) in order to eliminate risks coming from such networks.

In order to implement these features, we'll be using the IP2Proxy web service provided by IP2Location, a Geo IP solutions provider. The web service is a REST API that accepts an IP address and responds with geolocation data in JSON format.

ip2location website

Here are some of the fields that we'll receive:

- countryName
- cityName
- isProxy
- proxyType
- etc.

We'll use Next.js to build a website containing the following landing pages:

- Home Page: API fetching and redirection will trigger from this page
- Landing Page: supported countries will see the product page in their local currency
- Unavailable Page: other countries will see this page with an option to join a waiting list
- Abuse Page: visitors using Tor networks will be taken to this page

Now that you're aware of the project plan, let's see what you need to get started.


On your machine, I would highly recommend the following:

- Latest LTS version of Node.js (v12)
- Yarn

An older version of Node.js will do, but the most recent LTS (long-term support) version contains performance and debugging improvements in the area of async code, which we'll be dealing with. Yarn isn't necessary, but you'll benefit from its faster performance if you use it.

I’m also going to assume you have a good foundation in:

- React
- React Hooks

As mentioned earlier, we'll be using Next.js to build our website. If you're new to it, you can follow their official interactive tutorial to quickly get up to speed.

IP2Location + Next.js Project Walkthrough

Project Setup

To set up the project, simply launch the terminal and navigate to your workspace. Execute the following command:

npx create-next-app

Feel free to give your app any name. I've called mine next-ip2location-example. After installation is complete, navigate to the project's root and execute yarn dev. This will launch the Node.js dev server. If you open your browser and navigate to localhost:3000, you should see a page with the header “Welcome to Next.js”. This should confirm that we have a working app that runs without errors. Stop the app and install the following dependencies:

yarn add next-compose-plugins dotenv-load next-env @zeit/next-css bulma isomorphic-unfetch

We'll be using the Bulma CSS framework to add out-of-the-box styling for our site. Since we'll be connecting to an API service, we'll set up an .env file to store our API key. Do note that this file should not be committed to a repository. Next, create the file next.config.js at the root of the project and add the following code:

const withPlugins = require('next-compose-plugins')
const css = require('@zeit/next-css')
const nextEnv = require('next-env')
const dotenvLoad = require('dotenv-load')

dotenvLoad()

module.exports = withPlugins([
  nextEnv(),
  [css]
])

The above configuration allows our application to read the .env file and load values. Do note that the keys will need to have the prefix NEXT_SERVER_ in order to be loaded in the server environment. Visit the next-env package page for more information. We'll set the API key in the next section. The above configuration also gives our Next.js app the capability to pre-process CSS code via the @zeit/next-css package. This will allow us to use the Bulma CSS framework in our application. Do note that we'll need to import the Bulma CSS code into our Next.js application; I'll soon show you where to do this.

Obtaining an API Key for the IP2Proxy Web Service

As mentioned earlier, we'll need to convert a visitor's IP address into information we can use to redirect or block traffic. Simply head to the following link and sign up for a free trial key:

IP2Proxy Detection Web Service

ip2proxy trial key packages

Once you sign up, you'll receive the free API key via email. Create an .env file and place it at the root of your project folder. Copy your API key to the file as follows:


This free key will give you 1,000 free credits. At a minimum, we'll need the following fields for our application to function:

- countryName
- proxyType

If you look at the pricing section on the IP2Proxy page, you'll note that the PX2 package will give us the required response. This means each query will cost us two credits. Below is a sample of how the URL should be constructed:

You can also submit the URL query without the IP; the service will then use the IP address of the machine that sent the request. Alternatively, the top-tier PX8 package of the IP2Proxy Detection Web Service returns all the available fields, such as isp and domain.

In the next section, we'll build a simple state management system for storing the proxy data which will be shared among all site pages.

Building Context API in Next.js

Create the file context/proxy-context and insert the following code:

import React, { useState, useEffect, useRef, createContext } from 'react'

export const ProxyContext = createContext()

export const ProxyContextProvider = (props) => {
  const initialState = {
    ipAddress: '',
    countryName: 'Nowhere',
    isProxy: false,
    proxyType: ''
  }

  // Declare shareable proxy state
  const [proxy, setProxy] = useState(initialState)
  const prev = useRef()

  // Read and write proxy state to local storage
  useEffect(() => {
    if (proxy.countryName == 'Nowhere') {
      const localState = JSON.parse(localStorage.getItem('ip2proxy'))
      if (localState) {
        console.log('reading local storage')
        prev.current = localState.ipAddress
        setProxy(localState)
      }
    } else if (prev.current !== proxy.ipAddress) {
      console.log('writing local storage')
      localStorage.setItem('ip2proxy', JSON.stringify(proxy))
    }
  }, [proxy])

  return (
    <ProxyContext.Provider value={[proxy, setProxy]}>
      {props.children}
    </ProxyContext.Provider>
  )
}

Basically, we're declaring a shareable state called proxy that will store data retrieved from the IP2Proxy web service. The API fetch query will be implemented in pages/index.js. The information will be used to redirect visitors to the relevant pages. If the visitor tries to refresh the page, the saved state will be lost. To prevent this from happening, we're going to use the useEffect() hook to persist state in the browser's local storage. When a user refreshes a particular landing page, the proxy state will be retrieved from local storage, so there's no need to perform the query again. Here's a quick sneak peek of Chrome's local storage in action:

chrome local storage

Tip: In case you run into problems further down this tutorial, clearing local storage can help resolve some issues.

Displaying Proxy Information

Create the file components/proxy-view.js and add the following code:

import React, { useContext } from 'react'
import { ProxyContext } from '../context/proxy-context'

const style = {
  padding: 12
}

const ProxyView = () => {
  const [proxy] = useContext(ProxyContext)
  const { ipAddress, countryName, isProxy, proxyType } = proxy

  return (
    <div className="box center" style={style}>
      <div className="content">
        <ul>
          <li>IP Address : {ipAddress}</li>
          <li>Country : {countryName}</li>
          {/* booleans don't render in JSX, so convert isProxy to a string */}
          <li>Proxy : {isProxy.toString()}</li>
          <li>Proxy Type: {proxyType}</li>
        </ul>
      </div>
    </div>
  )
}

export default ProxyView

This is simply a display component that we'll place at the end of each page. We're only creating this to confirm that our fetch logic and the application's state are working as expected. You should note that the line const [proxy] = useContext(ProxyContext) won't run until we've declared our Context Provider at the root of our application. Let's do that now in the next section.

Implementing Context API Provider in Next.js App

Create the file pages/_app.js and add the following code:

import React from 'react'
import App from 'next/app'
import 'bulma/css/bulma.css'
import { ProxyContextProvider } from '../context/proxy-context'

export default class MyApp extends App {
  render() {
    const { Component, pageProps } = this.props
    return (
      <ProxyContextProvider>
        <Component {...pageProps} />
      </ProxyContextProvider>
    )
  }
}

The _app.js file is the root component of our Next.js application where we can share global state with the rest of the site pages and child components. Note that this is also where we're importing CSS for the Bulma framework we installed earlier. With that set up, let's now build a layout that we'll use for all our site pages.

The post How to Divert Traffic Using IP2Location in a Next.js Website appeared first on SitePoint.

10 Zsh Tips & Tricks: Configuration, Customization & Usage

Dec 3, 2019


As web developers, the command line is becoming an ever more important part of our workflow. We use it to install packages from npm, to test API endpoints, to push commits to GitHub, and lots more besides.

My shell of choice is zsh. It is a highly customizable Unix shell, that packs some very powerful features such as killer tab completion, clever history, remote file expansion, and much more.

In this article I'll show you how to install zsh, then offer ten tips and tricks to make you more productive when working with it.

This is a beginner-level guide which can be followed by anybody (even Windows users, thanks to Windows Subsystem for Linux). However, in light of Apple's announcement that zsh is now the standard shell on macOS Catalina, mac users might find it especially helpful.

Let's get started.


I don't want to offer in-depth installation instructions for each operating system, but rather some general guidelines. If you get stuck installing zsh, there is plenty of help available online.

At the time of writing the current zsh version is 5.7.1.


Most versions of macOS ship with zsh pre-installed. You can check if this is the case and if so, which version you are running using the command: zsh --version. If the version is 4.3.9 or higher, you should be good to go (we'll need at least this version to install Oh My Zsh later on). If not, you can follow this guide to install a more recent version of zsh using homebrew.

Once installed, you can set zsh as the default shell using: chsh -s $(which zsh). After issuing this command, you'll need to log out, then log back in again for the changes to take effect.

If at any point you decide you don't like zsh, you can revert to Bash using: chsh -s $(which bash).


On Ubuntu-based distros, you can install zsh using: sudo apt-get install zsh. Once the installation completes, you can check the version using zsh --version, then make zsh your default shell using chsh -s $(which zsh). You'll need to log out, then log back in for the changes to take effect.

As with macOS, you can revert back to Bash using: chsh -s $(which bash).

If you are running a non-Ubuntu based distro, then check out the instructions for other distros.


Unfortunately, this is where things start to get a little complicated. Zsh is a Unix shell and for it to work on Windows, you'll need to activate Windows Subsystem for Linux (WSL), an environment in Windows 10 for running Linux binaries.

There are various tutorials online explaining how to get up and running with zsh in Windows 10. I found these two to be up-to-date and easy to follow:

How to Install and Use the Linux Bash Shell on Windows 10 - follow this one first to install WSL
How to Use Zsh (or Another Shell) in Windows 10 - follow this one second to install zsh

Note that it is also possible to get zsh running with Cygwin. Here are instructions for doing that.

First Run

When you first open zsh, you'll be greeted by the following menu.

The post 10 Zsh Tips & Tricks: Configuration, Customization & Usage appeared first on SitePoint.

Building a Habit Tracker with Prisma 2, Chakra UI, and React

Dec 2, 2019


Building a Habit Tracker with Prisma, Chakra UI, and React

In June 2019, Prisma 2 Preview was released. Prisma 1 changed the way we interact with databases. We could access databases through plain JavaScript methods and objects without having to write the query in the database language itself. Prisma 1 acted as an abstraction in front of the database so it was easier to make CRUD (create, read, update and delete) applications.

Prisma 1 architecture looked like this:

Prisma 1 architecture

Notice that there’s an additional Prisma server required for the back end to access the database. The latest version doesn’t require an additional server. It's called The Prisma Framework (formerly known as Prisma 2) which is a complete rewrite of Prisma. The original Prisma was written in Scala, so it had to be run through JVM and needed an additional server to run. It also had memory issues.

The Prisma Framework is written in Rust so the memory footprint is low. Also, the additional server required while using Prisma 1 is now bundled with the back end, so you can use it just like a library.

The Prisma Framework consists of three standalone tools:

Photon: a type-safe and auto-generated database client ("ORM replacement")
Lift: a declarative migration system with custom workflows
Studio: a database IDE that provides an Admin UI to support various database workflows

Prisma 2 architecture

Photon is a type-safe database client that replaces traditional ORMs, and Lift allows us to create data models declaratively and perform database migrations. Studio allows us to perform database operations through a beautiful Admin UI.

Why use Prisma?

Prisma removes the complexity of writing complex database queries and simplifies database access in the application. By using Prisma, you can change the underlying databases without having to change each and every query. It just works. Currently, it only supports MySQL, SQLite and PostgreSQL.

Prisma provides type-safe database access provided by an auto-generated Prisma client. It has a simple and powerful API for working with relational data and transactions. It allows visual data management with Prisma Studio.

Providing end-to-end type-safety means developers can have confidence in their code, thanks to static analysis and compile-time error checks. The developer experience increases drastically when having clearly defined data types. Type definitions are the foundation for IDE features — like intelligent auto-completion or jump-to-definition.

Prisma unifies access to multiple databases at once (coming soon) and therefore drastically reduces complexity in cross-database workflows (coming soon).

It provides automatic database migrations (optional) through Lift, based on a declarative datamodel expressed using GraphQL's schema definition language (SDL).


For this tutorial, you need a basic knowledge of React. You also need to understand React Hooks.

Since this tutorial is primarily focused on Prisma, it’s assumed that you already have a working knowledge of React and its basic concepts.

If you don’t have a working knowledge of the above content, don't worry. There are tons of tutorials available that will prepare you for following this post.

Throughout the course of this tutorial, we’ll be using yarn. If you don’t have yarn already installed, install it from here.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node v12.11.1
npm v6.11.3
npx v6.11.3
yarn v1.19.1
prisma2 v2.0.0-preview016.2
react v16.11.0

Folder Structure

Our folder structure will be as follows:

streaks-app/
  client/
  server/

The client/ folder will be bootstrapped from create-react-app while the server/ folder will be bootstrapped from prisma2 CLI.

So you just need to create a root folder called streaks-app/ and the subfolders will be generated while scaffolding it with the respective CLIs. Go ahead and create the streaks-app/ folder and cd into it as follows:

$ mkdir streaks-app && cd $_

The Back End (Server Side)

Bootstrap a new Prisma 2 project

You can bootstrap a new Prisma 2 project by using the npx command as follows:

$ npx prisma2 init server

Alternatively, you can install the prisma2 CLI globally and run the init command. To do so, run the following:

$ yarn global add prisma2 // or npm install --global prisma2
$ prisma2 init server

Run the interactive prisma2 init flow & select boilerplate

Select the following in the interactive prompts:

Select Starter Kit
Select JavaScript
Select GraphQL API
Select SQLite

When the flow finishes, the init command will have created an initial project setup in the server/ folder.

Now open the schema.prisma file and replace it with the following:

generator photon {
  provider = "photonjs"
}

datasource db {
  provider = "sqlite"
  url      = "file:dev.db"
}

model Habit {
  id     String @default(cuid()) @id
  name   String @unique
  streak Int
}

schema.prisma contains the data model as well as the configuration options.

Here, we specify that we want to connect to the SQLite datasource called dev.db as well as target code generators like photonjs generator.

Then we define the data model Habit, which consists of id, name and streak.

id is a primary key of type String with a default value of cuid().

name is of type String, but with a constraint that it must be unique.

streak is of type Int.

The seed.js file should look like this:

const { Photon } = require('@generated/photon')
const photon = new Photon()

async function main() {
  const workout = await photon.habits.create({
    data: {
      name: 'Workout',
      streak: 49,
    },
  })
  const running = await photon.habits.create({
    data: {
      name: 'Running',
      streak: 245,
    },
  })
  const cycling = await photon.habits.create({
    data: {
      name: 'Cycling',
      streak: 77,
    },
  })
  const meditation = await photon.habits.create({
    data: {
      name: 'Meditation',
      streak: 60,
    },
  })
  console.log({
    workout,
    running,
    cycling,
    meditation,
  })
}

main()
  .catch(e => console.error(e))
  .finally(async () => {
    await photon.disconnect()
  })

This file creates four new habits and adds them to the SQLite database.

Now go inside the src/index.js file and remove its contents. We'll start adding content from scratch.

First go ahead and import the necessary packages and declare some constants:

const { GraphQLServer } = require('graphql-yoga')
const {
  makeSchema,
  objectType,
  queryType,
  mutationType,
  idArg,
  stringArg,
} = require('nexus')
const { Photon } = require('@generated/photon')
const { nexusPrismaPlugin } = require('nexus-prisma')

Now let’s declare our Habit model just below it:

const Habit = objectType({
  name: 'Habit',
  definition(t) {
    t.model.id()
    t.model.name()
    t.model.streak()
  },
})

We make use of objectType from the nexus package to declare Habit.

The name parameter should be the same as defined in the schema.prisma file.

The definition function lets you expose a particular set of fields wherever Habit is referenced. Here, we expose the id, name and streak fields.

If we expose only the id and name fields, only those two will get exposed wherever Habit is referenced.

Below that, paste the Query constant:

const Query = queryType({
  definition(t) {
    t.crud.habit()
    t.crud.habits()

    // t.list.field('habits', {
    //   type: 'Habit',
    //   resolve: (_, _args, ctx) => {
    //     return ctx.photon.habits.findMany()
    //   },
    // })
  },
})

We make use of queryType from the nexus package to declare Query.

The Photon generator generates an API that exposes CRUD functions on the Habit model. This is what allows us to expose the t.crud.habit() and t.crud.habits() methods.

t.crud.habit() allows us to query any individual habit by its id or by its name. t.crud.habits() simply returns all the habits.

Alternatively, t.crud.habits() can also be written as:

t.list.field('habits', {
  type: 'Habit',
  resolve: (_, _args, ctx) => {
    return ctx.photon.habits.findMany()
  },
})

Both the above code and t.crud.habits() will give the same results.

In the above code, we make a field named habits. The return type is Habit. We then call ctx.photon.habits.findMany() to get all the habits from our SQLite database.

Note that the name of the habits property is auto-generated using the pluralize package. It's therefore recommended practice to name our models singular — that is, Habit and not Habits.

We use the findMany method on habits, which returns a list of objects. We find all the habits as we have mentioned no condition inside of findMany. You can learn more about how to add conditions inside of findMany here.

Below Query, paste Mutation as follows:

const Mutation = mutationType({
  definition(t) {
    t.crud.createOneHabit({ alias: 'createHabit' })
    t.crud.deleteOneHabit({ alias: 'deleteHabit' })

    t.field('incrementStreak', {
      type: 'Habit',
      args: {
        name: stringArg(),
      },
      resolve: async (_, { name }, ctx) => {
        const habit = await ctx.photon.habits.findOne({
          where: {
            name,
          },
        })
        return ctx.photon.habits.update({
          data: {
            streak: habit.streak + 1,
          },
          where: {
            name,
          },
        })
      },
    })
  },
})

Mutation uses mutationType from the nexus package.

The CRUD API here exposes createOneHabit and deleteOneHabit.

createOneHabit, as the name suggests, creates a habit whereas deleteOneHabit deletes a habit.

createOneHabit is aliased as createHabit, so while calling the mutation we call createHabit rather than calling createOneHabit.

Similarly, we call deleteHabit instead of deleteOneHabit.

Finally, we create a field named incrementStreak, which increments the streak of a habit. The return type is Habit. It takes an argument name as specified in the args field of type String. This argument is received in the resolve function as the second argument. We find the habit by calling ctx.photon.habits.findOne() while passing in the name parameter in the where clause. We need this to get our current streak. Then finally we update the habit by incrementing the streak by 1.

Below Mutation, paste the following:

const photon = new Photon()

new GraphQLServer({
  schema: makeSchema({
    types: [Query, Mutation, Habit],
    plugins: [nexusPrismaPlugin()],
  }),
  context: { photon },
}).start(() =>
  console.log(
    `🚀 Server ready at: http://localhost:4000\n⭐️ See sample queries:`,
  ),
)

module.exports = { Habit }

We use the makeSchema method from the nexus package to combine our model Habit, and add Query and Mutation to the types array. We also add nexusPrismaPlugin to our plugins array. Finally, we start our server at localhost:4000. Port 4000 is the default port for graphql-yoga. You can change the port as suggested here.

Let's start the server now. But first, we need to make sure our latest schema changes are written to the node_modules/@generated/photon directory. This happens when you run prisma2 generate.

If you haven't installed prisma2 globally, you'll have to replace prisma2 generate with ./node_modules/.bin/prisma2 generate. Then we need to migrate our database to create tables.

The post Building a Habit Tracker with Prisma 2, Chakra UI, and React appeared first on SitePoint.

Black Friday 2019 for Designers and Developers

Nov 29, 2019


Black Friday deals for designers and developers 2019

This article was created in partnership with Mekanism. Thank you for supporting the partners who make SitePoint possible.

Black Friday is one of the best opportunities of the year to get all kinds of new stuff, including digital web tools and services. Some companies are offering huge discounts to heavily increase their sales, while others already have excellent offers for their customers and partners.

In this article, you’ll find free and premium web tools and services, and also some of the best Black Friday WordPress deals. We included website builders, UI kits, admin themes, WordPress themes, effective logo and brand identity creators, and much more. There’s a web tool or service for everyone in this showcase of 38 excellent solutions.

Let’s start.

1. Free and Premium Bootstrap 4 Admin Themes and UI Kits


DashboardPack is one of the main suppliers of free and premium Bootstrap 4 admin themes and UI kits, being used by tens of thousands of people with great success. Here you’ll find free and premium themes, made with great attention to detail — HTML5 themes, React themes, Angular themes, and Vue themes.

On the DashboardPack website there’s a dedicated section of Freebies. Here there are four gorgeous dashboard themes (HTML, Angular, Vue, and React) that you can see as a live demo and use for free.
Between November 29 and December 3, you get a 50% discount on all templates and all license types (Personal, Developer, and Lifetime). Use this coupon code: MADBF50.

2. Total Theme

Total Theme

Total Theme is a super powerful and complete WordPress theme that is flexible, easy to use and customize. It has brilliant designs included, and other cool stuff.

With over 38k happy users, Total Theme is a popular WordPress theme. It comes loaded with over 80 builder modules, over 40 premade demos that can be installed with 1-click, 500 styling options, and a friendly and lightning-fast interface.

The premade demos cover niches like Business, One Page, Portfolio, Personal, Creative, Shop, Blog, Photography, and more. Total Theme will help you achieve pretty much any goal — from scratch using the included Visual Page Builder, or by editing a demo design.

A limited-time 50% off Total Theme offer is valid from November 26 2019 (12pm AEDT) through December 3 2019 (8pm AEDT). Discount already applied.

3. Tailor Brands

Tailor Brands

Imagine if your dream business idea had a name, a face, and branded documents that made it official. With Tailor Brands’ online logo maker and design tools, you can instantly turn that dream idea into a living, breathing company! Design a logo in 30 seconds, customize it to your liking, and put it on everything — from professional business cards to online presentations.

Tailor Brand’s mission is to be the biggest branding agency powered by AI. It’s a huge goal but it is achievable, and they already have a top position on this ladder.

Designing a logo with Tailor Brands is super simple and you don’t need any special skills or previous experience to get a top logo design. You write the logo name you like, add a tagline (an optional step), indicate which industry your logo is for, choose whether you want an icon-, name- or initial-based logo, and pick between left and right example designs; the powerful AI will then present you with plenty of logo designs to choose from. It’s super simple and straightforward.

Go ahead and design a logo with Tailor Brands.

4. Freelance Taxes

Bonsai Freelance Taxes

Bonsai is the integrated suite of products used by the world’s best creative freelancers.

With the latest addition of freelance taxes to the product lineup, Bonsai is more prepared than ever to help with everything your freelance business needs.

Be prepared for tax season and spend just seconds getting an overview of what you owe in annual or quarterly taxes.

Bonsai’s freelance tax software looks at your expenses, automatically categorizes them, and highlights which are deductible and to what percentage.

All Bonsai products are deeply integrated with each other to ensure it can fit every work style. Other features you should know about include contracts, proposals, time-tracking, and invoicing.

Start your free trial of Bonsai today and be ready for your freelance taxes ahead of time!

5. Codester


Codester is a huge marketplace where web designers and developers can find thousands of premium scripts, codes, app templates, themes (of all kinds), plugins, graphics, and much more. Always check the Flash Sale section where hugely discounted items are being sold.

6. Mobile App Testing


With over eight years of experience, this App and Browser Testing service is powerful, easy to use and provides a wide range of features tailored to help you improve your product. Use TestingBot for automated web and app testing, for live web and app testing, for visual testing, and much more.

Start a free, 14-day trial, no credit card required.

7. FunctionFox


The leading choice for creative professionals, FunctionFox gives you simple yet powerful time-tracking and project-management tools that allow you to keep multiple projects on track, forecast workloads, reduce communication breakdowns and stay on top of deadlines through project scheduling, task-based assignments, internal communication tools and comprehensive reporting. Don't let deadlines and due dates slip past!

Try a free demo today at FunctionFox.

8. Taskade: Simple Tasks, Notes, Chat


Taskade is a unified workspace where you can chat, write, and get work done with your team. Edit projects in real time. Chat and video conference on the same page. Keep track of tasks across multiple teams and workspaces. Plan, manage, and visualize projects. And much more.

With Taskade, you can build your own workspace templates. You can start from a blank page or you can choose between a Weekly Planner, Meeting Agenda, Project Board, Mindmap, and more (you'll find lots of templates to start with). Everything you need can be fully configured to be a perfect fit.

9. Live Chat Software

Live Chat Software

AppyPie is a professional and super-easy-to-use live chat solution that will help you reach out to your clients and offer them real-time responses and support through your website and mobile app, using the platform's live chat software.

This is a brilliant way to quickly increase conversions, make more sales (you can answer questions from people that want to buy), and increase the level of happiness of your customers. (Whatever problem they may have, they know that you're there to help fast.)

Request an invite to test the platform.

10. Mobirise Website Builder


Mobirise is arguably the best website builder of 2019. You can use it to create fast, responsive, and Google-friendly websites in minutes, with zero coding and only drag-and-drop.

This brilliant builder is loaded with over 2,000 awesome website templates to start with, with eCommerce and Shopping Cart, sliders, galleries, forms, popups, icons, and much more.

For a limited time there's a 94% discount, so grab it while you can.

11. Newsletter Templates

Newsletter Templates

MailMunch is a powerful drag-and-drop builder that's loaded with tons of beautiful, pre-designed newsletter templates, with advanced features like Template Blocks and a Media Library to make the workflow even smoother, and a lot more. There's no coding required to use MailMunch.

Start boosting your conversions with MailMunch.

12. Astra Theme: Elementor Templates


Elementor is the most powerful website builder on the market, used by millions of people with great success. To stand out from the crowd, you can supercharge Elementor with 100+ free and premium templates by using this bundle.

Free to use.

13. Schema Pro

Schema Pro

Creating schema markup is no longer a chore! With a simple click-and-select interface, you can set up a markup in minutes. All the markup configurations you set are automatically applied to all selected pages and posts.

Get Schema Pro and outperform your competitors in search engines.

14. Rank Math SEO

Rank Math SEO

Rank Math is the most powerful and easy-to-use WordPress SEO plugin on the market, making your website rank higher in search engines in no time. After a quick installation and setup, Rank Math SEO does the whole job with no supervision.

The post Black Friday 2019 for Designers and Developers appeared first on SitePoint.

Delay, Sleep, Pause, & Wait in JavaScript

Nov 28, 2019


Timing Issues in JavaScript: Implementing a Sleep Function

Many programming languages have a sleep function that will delay a program's execution for a given number of seconds. This functionality is absent from JavaScript, however, owing to its asynchronous nature. In this article, we'll look briefly at why this might be, then how we can implement a sleep function ourselves.

Understanding JavaScript's Execution Model

Before we get going, it's important to make sure we understand JavaScript's execution model correctly.

Consider the following Ruby code:

require 'net/http'
require 'json'

url = ''
uri = URI(url)
response = JSON.parse(Net::HTTP.get(uri))

puts response['public_repos']
puts "Hello!"

As one might expect, this code makes a request to the GitHub API to fetch my user data. It then parses the response, outputs the number of public repos attributed to my GitHub account and finally prints "Hello!" to the screen. Execution goes from top to bottom.

Contrast that with the equivalent JavaScript version:

fetch('')
  .then(res => res.json())
  .then(json => console.log(json.public_repos));

console.log("Hello!");

If you run this code, it will output "Hello!" to the screen, then the number of public repos attributed to my GitHub account.

This is because fetching data from an API is an asynchronous operation in JavaScript. The JavaScript interpreter will encounter the fetch command and dispatch the request. It will not, however, wait for the request to complete. Rather, it will continue on its way, output "Hello!" to the console, then when the request returns a couple of hundred milliseconds later, it will output the number of repos.
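You can reproduce this reordering without a network request at all. The sketch below uses setTimeout with a delay of zero: even then, the callback is deferred until the currently running script has finished:

```javascript
// Even with a 0ms delay, the callback is queued behind the
// currently running script, so "second" always appears before "third".
console.log('first');

setTimeout(() => {
  console.log('third');
}, 0);

console.log('second');
```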

If any of this is news to you, you should watch this excellent conference talk: What the heck is the event loop anyway?.

You Might Not Actually Need a Sleep Function

Now that we have a better understanding of JavaScript's execution model, let's have a look at how JavaScript handles delays and asynchronous operations.

Create a Simple Delay Using setTimeout

The standard way of creating a delay in JavaScript is to use its setTimeout method. For example:

console.log("Hello");

setTimeout(() => {
  console.log("World!");
}, 2000);

This would log "Hello" to the console, then after two seconds "World!" And in many cases, this is enough: do something, wait, then do something else. Sorted!

However, please be aware that setTimeout is an asynchronous method. Try altering the previous code like so:

console.log("Hello");

setTimeout(() => {
  console.log("World!");
}, 2000);

console.log("Goodbye!");

It will log:

Hello
Goodbye!
World!

Waiting for Things with setTimeout

It's also possible to use setTimeout (or its cousin setInterval) to keep JavaScript waiting until a condition is met. For example, here's how you might use setTimeout to wait for a certain element to appear on a web page:

function pollDOM () {
  // querySelector returns the element, or null if it isn't in the DOM yet
  const el = document.querySelector('my-element');

  if (el) {
    // Do something with el
  } else {
    setTimeout(pollDOM, 300); // try again in 300 milliseconds
  }
}

pollDOM();

This assumes the element will turn up at some point. If you're not sure that's the case, you'll need to look at canceling the timer (using clearTimeout or clearInterval).
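Canceling looks like this. The sketch below clears a pending timeout before it gets a chance to fire (clearInterval works the same way for setInterval):

```javascript
// setTimeout returns an ID that can be passed to clearTimeout
// to cancel the callback before it runs.
const timerId = setTimeout(() => {
  console.log('This never runs');
}, 5000);

clearTimeout(timerId); // the pending callback is discarded
```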

If you'd like to find out more about JavaScript's setTimeout method, please consult our tutorial which has plenty of examples to get you going.
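As a preview of where a sleep function fits in, setTimeout can also be wrapped in a promise to produce a sleep-style helper that works with async/await. This is a minimal sketch: the delay is approximate, since timers only guarantee a minimum wait, and it pauses only the async function, not the whole program:

```javascript
// A promise-based sleep helper built on setTimeout.
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function run() {
  console.log('Hello');
  await sleep(2000); // pause this function (not the whole program) for ~2s
  console.log('World!');
}

run();
```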

The post Delay, Sleep, Pause, & Wait in JavaScript appeared first on SitePoint.

Understanding module.exports and exports in Node.js

Nov 27, 2019


Working with Modules in Node.js

In programming, modules are self-contained units of functionality that can be shared and reused across projects. They make our lives as developers easier, as we can use them to augment our applications with functionality that we haven't had to write ourselves. They also allow us to organize and decouple our code, leading to applications that are easier to understand, debug and maintain.

In this article, I'll examine how to work with modules in Node.js, focusing on how to export and consume them.

Different Module Formats

As JavaScript originally had no concept of modules, a variety of competing formats have emerged over time. Here's a list of the main ones to be aware of:

The Asynchronous Module Definition (AMD) format is used in browsers and uses a define function to define modules.
The CommonJS (CJS) format is used in Node.js and uses require and module.exports to define dependencies and modules. The npm ecosystem is built upon this format.
The ES Module (ESM) format. As of ES6 (ES2015), JavaScript supports a native module format. It uses an export keyword to export a module's public API and an import keyword to import it.
The System.register format was designed to support ES6 modules within ES5.
The Universal Module Definition (UMD) format can be used both in the browser and in Node.js. It's useful when a module needs to be imported by a number of different module loaders.

Please be aware that this article deals solely with the CommonJS format, the standard in Node.js. If you'd like to read into any of the other formats, I recommend this article, by SitePoint author Jurgen Van de Moere.

Requiring a Module

Node.js comes with a set of built-in modules that we can use in our code without having to install them. To do this, we need to require the module using the require keyword and assign the result to a variable. This can then be used to invoke any methods the module exposes.

For example, to list out the contents of a directory, you can use the file system module and its readdir method:

const fs = require('fs');

const folderPath = '/home/jim/Desktop/';

fs.readdir(folderPath, (err, files) => {
  files.forEach(file => {
    console.log(file);
  });
});

Note that in CommonJS, modules are loaded synchronously and processed in the order they occur.

Creating and Exporting a Module

Now let's look at how to create our own module and export it for use elsewhere in our program. Start off by creating a user.js file and adding the following:

const getName = () => {
  return 'Jim';
};

exports.getName = getName;

Now create an index.js file in the same folder and add this:

const user = require('./user');

console.log(`User: ${user.getName()}`);

Run the program using node index.js and you should see the following output to the terminal:

User: Jim

So what has gone on here? Well, if you look at the user.js file, you'll notice that we're defining a getName function, then using the exports keyword to make it available for import elsewhere. Then in the index.js file, we're importing this function and executing it. Notice that in the require statement, the module name is prefixed with ./, as it's a local file. Note also that there's no need to add the file extension.

Exporting Multiple Methods and Values

We can export multiple methods and values in the same way:

const getName = () => {
  return 'Jim';
};

const getLocation = () => {
  return 'Munich';
};

const dateOfBirth = '12.01.1982';

exports.getName = getName;
exports.getLocation = getLocation;
exports.dob = dateOfBirth;

And in index.js:

const user = require('./user');

console.log(
  `${user.getName()} lives in ${user.getLocation()} and was born on ${user.dob}.`
);

The code above produces this:

Jim lives in Munich and was born on 12.01.1982.

Notice how the name we give the exported dateOfBirth variable can be anything we fancy (dob in this case). It doesn't have to be the same as the original variable name.

Variations in Syntax

I should also mention that it's possible to export methods and values as you go, not just at the end of the file.

For example:

exports.getName = () => {
  return 'Jim';
};

exports.getLocation = () => {
  return 'Munich';
};

exports.dob = '12.01.1982';

And thanks to destructuring assignment, we can cherry-pick what we want to import:

const { getName, dob } = require('./user');

console.log(
  `${getName()} was born on ${dob}.`
);

As you might expect, this logs:

Jim was born on 12.01.1982.

The post Understanding module.exports and exports in Node.js appeared first on SitePoint.

Remote Work: Tips, Tricks and Best Practices for Success

Nov 25, 2019


Remote Work: Tips, Tricks and Best Practices for Success

There are lots of advantages to working away from the office, both for developers and for the companies that employ them. Think about avoiding the daily commute, the cost of office space, the cost of living in or traveling to the city for rural or international workers, the inconvenience of office work for differently abled people or those with unusual family or life responsibilities, and the inflexibility of trying to keep traditional 9–5 hours as more and more of our workforce adapts to the gig economy by taking on second jobs or part-time side hustles.

Remote work can help address many of these difficulties while improving team transparency and putting the focus of work back on the reasons you were hired for your job in the first place. It also opens up a world of possibilities for companies, including broader recruitment opportunities, improved worker transparency, lower infrastructure costs, and more scalable business models based on actual worker productivity.

But working from home or from a co-working space can also present new challenges, and learning how to recognize them and overcome them can make the difference between a productive, happy work experience and endless hours of misery, loneliness, and frustration.

Think I’m being overdramatic? Let me explain.

I’ve had the experience of being the remote worker who didn’t think he needed to pay attention to interpersonal office dynamics, or keep track of his time and accomplishments. I’ve worked long into the evening because I didn’t notice when the work day ended. I’ve struggled with inefficient tools that might have worked fine in an office environment, but proved woefully inadequate when it came to remote collaboration.

So I’ve learned to cope with these issues myself, and for years I’ve been coaching engineering teams by working on-site, remotely, and in various combinations of the two. Depending on your situation, there are a number of useful tools, tricks, and fundamental practices that can make your remote working experience so much better than it is today — for yourself, your team, your manager, and your company.

Remote Self-management

For better or for worse, most of us are used to having a manager decide what our working hours are, where we’re going to sit, what equipment we’re going to use, and whom we’re going to collaborate with. That’s a luxury that comes with the convenience of working together in a shared space, where management can supervise and coordinate our efforts. It may not always feel luxurious, but you may well find yourself missing the support of an attentive manager when you start working from home and realize you have to make these decisions for yourself.

Set a Schedule and Stick to It!

The first tip I offer for anyone starting out a remote role is to establish the hours you’re going to work, and stick to those hours.

It’s not as easy as it sounds. When you’re working from home, you won’t have all of the little cues that come with office life to tell you when to pause for lunch, when to take a break, and when to stop working for the day. Working from a co-working space or a coffee shop can help, but it’s not the same as having your colleagues around you to exert that not-so-subtle social pressure. What’s more, if you start to feel anxious about whether people at the office know how hard you’re working, you may find yourself wanting to compensate by putting in a few extra hours.

Some people find that it's easier to compartmentalize remote work by using a co-working space, simulating the effect of going out to work and then coming back at the end of the day. If you're working from home, your professional and personal lives can start to blend. You’re going to find yourself washing the dishes, feeding the cat, answering the telephone, and attending to all the other chores that crop up in your living space. And you know what? That’s just fine! … as long as it doesn't start to interfere with your productivity on the job.

Decide up front on your morning and afternoon work hours and respect them. Write them down somewhere you'll see them, so you can't pretend you don't know what they are. The same advice applies to teams working together in an office or people using co-working spaces, but it's even more critical if you're working from home.

Let Everyone Know When and Where You'll Be Working

Building on the theme of scheduling, a remote worker needs to let anyone who works with them know how to get in touch, and may need to encourage that kind of contact regularly. Remote workers can feel isolated or even excluded — left out of important decisions because people at the office simply forgot about them. It's up to the person who’s working off site to make their existence known throughout the work day, and to advocate for visibility.

This can be easier said than done. One of the advantages of remote work is the ability to focus without interruption for extended periods. Sometimes just the knowledge that the bubble of isolation can be broken is enough to foster distraction and make it harder to concentrate. This can make the experience draining and unproductive, and negate most of the advantages.

It's not a bad idea to start off just using email to stay in touch with the team for typical group communications. And as a personal productivity tip, try to establish set times during the day to check that email — perhaps three or so over the course of a day. Checking your email constantly can establish a pattern of behavior that puts your attention at the mercy of anyone who wants to reach out to you for anything at any time. Email is asynchronous by nature, so use that to your advantage when you're working from home.

Apart from direct communication, it's good to get your team using a messaging tool such as Slack or HipChat. These services can run in the background on every team member's computer, or even on their mobile devices, providing a shared space for inter-team, intra-team, and cross-functional messaging. There are secure ways for companies to make services like these available for sensitive internal communications, and they can work both on site and off site, establishing virtual shared message boards to keep teams aligned.


Create a Toggle Switch in React as a Reusable Component

Nov 21, 2019


Implementing a Toggle Switch in React JS as a Reusable Component

In this article, we're going to create an iOS-inspired toggle switch using React components. By the end, we'll have built a simple demo React App that uses our custom toggle switch component.

We could use third-party libraries for this, but building from scratch allows us to better understand how our code is working and allows us to customize our component completely.

Forms provide a major means for enabling user interactions. The checkbox is traditionally used for collecting binary data — such as yes or no, true or false, enable or disable, on or off, etc. Although some modern interface designs steer away from form fields when creating toggle switches, I'll stick with them here due to their greater accessibility.

Here's a screenshot of the component we'll be building:

The final result

Getting Started

We can start with a basic HTML checkbox input form element with its necessary properties set:

<input type="checkbox" name="name" id="id" />

To build around it, we might need an enclosing <div> with a class, a <label> and the <input /> control itself. Adding everything, we might get something like this:

<div class="toggle-switch">
  <input
    type="checkbox"
    class="toggle-switch-checkbox"
    name="toggleSwitch"
    id="toggleSwitch"
  />
  <label class="toggle-switch-label" for="toggleSwitch">
    Toggle Me!
  </label>
</div>

In time, we can get rid of the label text and use the <label> tag to check or uncheck the checkbox input control. Inside the <label>, let's add two <span>s that help us construct the switch holder and the toggling switch itself:

<div class="toggle-switch">
  <input
    type="checkbox"
    class="toggle-switch-checkbox"
    name="toggleSwitch"
    id="toggleSwitch"
  />
  <label class="toggle-switch-label" for="toggleSwitch">
    <span class="toggle-switch-inner"></span>
    <span class="toggle-switch-switch"></span>
  </label>
</div>

Converting to a React Component

Now that we know what needs to go into the HTML, all we need to do is to convert the HTML into a React component. Let's start with a basic component here. We'll make this a class component first, and convert it to use hooks later, as it's easier for new developers to follow state than useState:

import React, { Component } from "react";

class ToggleSwitch extends Component {
  render() {
    return (
      <div className="toggle-switch">
        <input
          type="checkbox"
          className="toggle-switch-checkbox"
          name="toggleSwitch"
          id="toggleSwitch"
        />
        <label className="toggle-switch-label" htmlFor="toggleSwitch">
          <span className="toggle-switch-inner" />
          <span className="toggle-switch-switch" />
        </label>
      </div>
    );
  }
}

export default ToggleSwitch;

At this point, it's not possible to have multiple toggle switch sliders on the same view or same page due to the repetition of ids. We could leverage React's way of componentization here, but in this instance, we'll be using props to dynamically populate the values:

import React, { Component } from "react";

class ToggleSwitch extends Component {
  render() {
    return (
      <div className="toggle-switch">
        <input
          type="checkbox"
          className="toggle-switch-checkbox"
          name={this.props.Name}
          id={this.props.Name}
        />
        <label className="toggle-switch-label" htmlFor={this.props.Name}>
          <span className="toggle-switch-inner" />
          <span className="toggle-switch-switch" />
        </label>
      </div>
    );
  }
}

export default ToggleSwitch;

The this.props.Name will populate the values of id, name and for (note that it is htmlFor in React JS) dynamically, so that you can pass different values to the component and have multiple of them on the same page. Also, the <span> tag doesn't have an ending </span> tag; instead it's closed in the starting tag like <span />, and this is completely fine.


Compile-time Immutability in TypeScript

Nov 21, 2019


Compile-time Immutability in TypeScript

TypeScript allows us to decorate specification-compliant ECMAScript with type information that we can analyze and output as plain JavaScript using a dedicated compiler. In large-scale projects, this sort of static analysis can catch potential bugs ahead of resorting to lengthy debugging sessions, let alone deploying to production. However, reference types in TypeScript are still mutable, which can lead to unintended side effects in our software.

In this article, we'll look at possible constructs where prohibiting references from being mutated can be beneficial.

Primitives vs Reference Types

JavaScript defines two overarching groups of data types:

- Primitives: low-level values that are immutable (e.g. strings, numbers, booleans etc.)
- References: collections of properties, representing identifiable heap memory, that are mutable (e.g. objects, arrays, Map etc.)

Say we declare a constant, to which we assign a string:

const message = 'hello';

Given that strings are primitives and are thus immutable, we’re unable to directly modify this value. It can only be used to produce new values:

console.log(message.replace('h', 'sm')); // 'smello'
console.log(message); // 'hello'

Despite invoking replace() upon message, we aren't modifying its memory. We're merely creating a new string, leaving the original contents of message intact.

Mutating the indices of message is a no-op by default, but will throw a TypeError in strict mode:

'use strict';

const message = 'hello';
message[0] = 'j'; // TypeError: 0 is read-only

Note that if the declaration of message were to use the let keyword, we would be able to replace the value to which it resolves:

let message = 'hello';
message = 'goodbye';

It's important to highlight that this is not mutation. Instead, we're replacing one immutable value with another.

Mutable References

Let's contrast the behavior of primitives with references. Let's declare an object with a couple of properties:

const me = {
  name: 'James',
  age: 29,
};

Given that JavaScript objects are mutable, we can change its existing properties and add new ones:

me.name = 'Rob';
me.isTall = true;

console.log(me); // Object { name: "Rob", age: 29, isTall: true };

Unlike primitives, objects can be directly mutated without being replaced by a new reference. We can prove this by sharing a single object across two declarations:

const me = {
  name: 'James',
  age: 29,
};

const rob = me;
rob.name = 'Rob';

console.log(me); // { name: 'Rob', age: 29 }

JavaScript arrays, which inherit from Object.prototype, are also mutable:

const names = ['James', 'Sarah', 'Rob'];
names[2] = 'Layla';

console.log(names); // Array(3) [ 'James', 'Sarah', 'Layla' ]

What's the Issue with Mutable References?

Consider we have a mutable array of the first five Fibonacci numbers:

const fibonacci = [1, 2, 3, 5, 8];

log2(fibonacci); // replaces each item, n, with Math.log2(n);
appendFibonacci(fibonacci, 5, 5); // appends the next five Fibonacci numbers to the input array

This code may seem innocuous on the surface, but since log2 mutates the array it receives, our fibonacci array will no longer exclusively represent Fibonacci numbers as the name would otherwise suggest. Instead, fibonacci would become [0, 1, 1.584962500721156, 2.321928094887362, 3, 13, 21, 34, 55, 89]. One could therefore argue that the names of these declarations are semantically inaccurate, making the flow of the program harder to follow.
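As a quick aside, plain JavaScript also offers a runtime guard against this kind of in-place change: Object.freeze. Here's a minimal sketch reusing the Fibonacci values from above:

```javascript
// Object.freeze prevents in-place mutation at runtime:
// assignments throw a TypeError in strict mode and are
// silently ignored otherwise, but the array never changes.
const fibonacci = Object.freeze([1, 2, 3, 5, 8]);

try {
  fibonacci[0] = 0; // the attempted mutation
} catch (e) {
  // in strict mode we land here with a TypeError
}

console.log(fibonacci[0]); // 1: the frozen array is unchanged either way
console.log(Object.isFrozen(fibonacci)); // true
```

Freezing is a runtime mechanism, unlike the compile-time guarantees this article is building toward, but it catches the same class of bug.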

Pseudo-immutable Objects in JavaScript

Although JavaScript objects are mutable, we can take advantage of particular constructs to clone references, namely spread syntax:

const me = {
  name: 'James',
  age: 29,
  address: {
    house: '123',
    street: 'Fake Street',
    town: 'Fakesville',
    country: 'United States',
    zip: 12345,
  },
};

const rob = {
  ...me,
  name: 'Rob',
  address: {
    ...me.address,
    house: '125',
  },
};

console.log(me.name); // 'James'
console.log(rob.name); // 'Rob'
console.log(me === rob); // false
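One caveat worth spelling out: spread copies only one level deep, which is why the example above also spreads the nested address object. A short sketch of what happens without that nested spread:

```javascript
// Spread performs a shallow copy: nested objects are shared, not cloned.
const me = { name: 'James', address: { town: 'Fakesville' } };
const shallow = { ...me };

shallow.address.town = 'Springfield';

console.log(me.address.town); // 'Springfield': the nested object is shared
console.log(me === shallow); // false: the top level *was* copied
```

So when an object has nested reference-typed properties, each level you want to be independent needs its own spread (or another copying step).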

The spread syntax is also compatible with arrays:

const names = ['James', 'Sarah', 'Rob'];
const newNames = [...names.slice(0, 2), 'Layla'];

console.log(names); // Array(3) [ 'James', 'Sarah', 'Rob' ]
console.log(newNames); // Array(3) [ 'James', 'Sarah', 'Layla' ]
console.log(names === newNames); // false

Thinking immutably when dealing with reference types can make the behavior of our code clearer. Revisiting the prior mutable Fibonacci example, we could avoid such mutation by copying fibonacci into a new array:

const fibonacci = [1, 2, 3, 5, 8];
const log2Fibonacci = [...fibonacci];

log2(log2Fibonacci);
appendFibonacci(fibonacci, 5, 5);

Rather than placing the burden of creating copies on the consumer, it would be preferable for log2 and appendFibonacci to treat their inputs as read-only, creating new outputs based upon them:

const PHI = 1.618033988749895;

const log2 = (arr: number[]) => arr.map(n => Math.log2(n));
const fib = (n: number) => (PHI ** n - (-PHI) ** -n) / Math.sqrt(5);
const createFibSequence = (start = 0, length = 5) =>
  new Array(length).fill(0).map((_, i) => fib(start + i + 2));

const fibonacci = [1, 2, 3, 5, 8];
const log2Fibonacci = log2(fibonacci);
const extendedFibSequence = [...fibonacci, ...createFibSequence(5, 5)];

By writing our functions to return new references rather than mutating their inputs, the array identified by the fibonacci declaration remains unchanged, and its name remains a valid source of context. Ultimately, this code is more deterministic.


Getting Started with Puppeteer

Nov 14, 2019


Getting Started with Puppeteer

Browser developer tools provide an amazing array of options for delving under the hood of websites and web apps. These capabilities can be further enhanced and automated by third-party tools. In this article, we'll look at Puppeteer, a Node-based library for use with Chrome/Chromium.

The Puppeteer website describes Puppeteer as:

a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.

Puppeteer is made by the team behind Google Chrome, so you can be pretty sure it will be well maintained. It lets us perform common actions on the Chromium browser, programmatically through JavaScript, via a simple and easy-to-use API.

With Puppeteer, you can:

- scrape websites
- generate screenshots of websites, including SVG and Canvas
- create PDFs of websites
- crawl an SPA (single-page application)
- access web pages and extract information using the standard DOM API
- generate pre-rendered content — that is, server-side rendering
- automate form submission
- automate performance analysis
- automate UI testing, like Cypress
- test Chrome extensions

Puppeteer doesn't do anything that Selenium, PhantomJS (which is now deprecated), and the like can't do, but it provides a simple and easy-to-use API and a great abstraction, so we don't have to worry about the nitty-gritty details when dealing with it.

It's also actively maintained so we get all the new features of ECMAScript as Chromium supports it.


For this tutorial, you need a basic knowledge of JavaScript, ES6+ and Node.js.

You must also have installed the latest version of Node.js.

We’ll be using yarn throughout this tutorial. If you don’t have yarn already installed, install it from here.

To make sure we’re on the same page, these are the versions used in this tutorial:

- Node 12.12.0
- yarn 1.19.1
- puppeteer 2.0.0

Installation

To use Puppeteer in your project, run the following command in the terminal:

$ yarn add puppeteer

Note: when you install Puppeteer, it downloads a recent version of Chromium (~170MB macOS, ~282MB Linux, ~280MB Win) that is guaranteed to work with the API. To skip the download, see Environment variables.

If you don't need to download Chromium, then you can install puppeteer-core:

$ yarn add puppeteer-core

puppeteer-core is intended to be a lightweight version of Puppeteer for launching an existing browser installation or for connecting to a remote one. Be sure that the version of puppeteer-core you install is compatible with the browser you intend to connect to.

Note: puppeteer-core is only published from version 1.7.0.


Puppeteer requires at least Node v6.4.0, but we're going to use async/await, which is only supported in Node v7.6.0 or greater, so make sure to update your Node.js to the latest version to get all the goodies.

Let's dive into some practical examples using Puppeteer. In this tutorial, we'll be:

- generating a screenshot of Unsplash using Puppeteer
- creating a PDF of Hacker News using Puppeteer
- signing in to Facebook using Puppeteer

1. Generate a Screenshot of Unsplash using Puppeteer

It's really easy to do this with Puppeteer. Go ahead and create a screenshot.js file in the root of your project. Then paste in the following code:

const puppeteer = require('puppeteer')

const main = async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto('https://unsplash.com')
  await page.screenshot({ path: 'unsplash.png' })
  await browser.close()
}

main()

First, we require the puppeteer package. Then we call the launch method on it, which initializes the instance. This method is asynchronous, as it returns a Promise, so we await it to get the browser instance.

Then we call newPage on the browser instance, navigate to Unsplash, take a screenshot, and save it as unsplash.png.

Now go ahead and run the above code in the terminal by typing:

$ node screenshot

Unsplash - 800px x 600px resolution

Now after 5–10 seconds you'll see an unsplash.png file in your project that contains the screenshot of Unsplash. Notice that the viewport is set to 800px x 600px as Puppeteer sets this as the initial page size, which defines the screenshot size. The page size can be customized with Page.setViewport().

Let's change the viewport to be 1920px x 1080px. Insert the following code before the goto method:

await page.setViewport({
  width: 1920,
  height: 1080,
  deviceScaleFactor: 1,
})

Now go ahead and also change the filename from unsplash.png to unsplash2.png in the screenshot method like so:

await page.screenshot({ path: 'unsplash2.png' })

The whole screenshot.js file should now look like this:

const puppeteer = require('puppeteer')

const main = async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.setViewport({
    width: 1920,
    height: 1080,
    deviceScaleFactor: 1,
  })
  await page.goto('https://unsplash.com')
  await page.screenshot({ path: 'unsplash2.png' })
  await browser.close()
}

main()

Unsplash - 1920px x 1080px


Getting Started with the React Native Navigation Library

Nov 13, 2019


Getting Started with the React Native Navigation Library

One of the most important aspects of React Native app development is the navigation. It’s what allows users to get to the pages they’re looking for. That’s why it’s important to choose the best navigation library to suit your needs.

If your app has a lot of screens with relatively complex UI, it might be worth exploring React Native Navigation instead of React Navigation. This is because there will always be performance bottlenecks with React Navigation, since it works off the same JavaScript thread as the rest of the app. The more complex your UI, the more data has to be passed over the bridge between the JavaScript thread and the native side, which can potentially slow it down.

In this tutorial, we’ll be looking at the React Native Navigation library by Wix, an alternative navigation library for those who are looking for a smoother navigation performance for their React Native apps.


Knowledge of React and React Native is required to follow this tutorial. Prior experience with a navigation library such as React Navigation is optional.

App Overview

In order to demonstrate how to use the library, we’ll be creating a simple app that uses it. The app will have five screens in total:

- Initialization: this serves as the initial screen for the app. If the user is logged in, it will automatically navigate to the home screen. If not, the user is navigated to the login screen.
- Login: this allows the user to log in so they can view the home, gallery, and feed. To simplify things, the login will just be mocked; no actual authentication code will be involved. From this screen, the user can also go to the forgot-password screen.
- ForgotPassword: a filler screen, which asks for the user's email address. This will simply be used to demonstrate stack navigation.
- Home: the initial screen that the user will see when they log in. From here, they can also navigate to either the gallery or feed screens via a bottom tab navigation.
- Gallery: a filler screen which shows a photo gallery UI.
- Feed: a filler screen which shows a news feed UI.

Here’s what the app will look like:

React Native Navigation demo gif

You can find the source code of the sample app on this GitHub repo.

Bootstrapping the App

Let’s start by generating a new React Native project:

react-native init RNNavigation --version react-native@0.57.8

Note: we’re using a slightly older version of React Native, because React Native Navigation doesn’t work well with later versions of React Native. React Native Navigation hasn’t really kept up with the changes in the core of React Native since version 0.58. The only version known to work flawlessly with React Native is the version we’re going to use. If you check the issues on their repo, you’ll see various issues on version 0.58 and 0.59. There might be workarounds on those two versions, but the safest bet is still version 0.57.

As for React Native version 0.60, the core team has made a lot of changes. One of them is the migration to AndroidX, which aims to make it clearer which packages are bundled with the Android operating system. This essentially means that if a native module uses any of the old packages that got migrated to the new androidx.* package hierarchy, it will break. There are tools such as jetifier, which allows for migration to AndroidX. But this doesn’t ensure React Native Navigation will work.

Next, install the dependencies of the app:

- react-native-navigation — the navigation library that we're going to use.
- @react-native-community/async-storage — for saving data to the app's local storage.
- react-native-vector-icons — for showing icons for the bottom tab navigation.

yarn add react-native-navigation @react-native-community/async-storage react-native-vector-icons

In the next few sections, we’ll be setting up the packages we just installed.

Setting up React Native Navigation

First, we’ll set up the React Native Navigation library. The instructions that we’ll be covering here are also in the official documentation. Unfortunately, it’s not written in a very friendly way for beginners, so we’ll be covering it in more detail.

Note: the demo project includes Android and iOS folders as well. You can use those as a reference if you encounter any issues with setting things up.

Since the name of the library is very long, I’ll simply refer to it as RNN from now on.

Android Setup

In this section, we’ll take a look at how you can set up RNN for Android. Before you proceed, it’s important to update all the SDK packages to the latest versions. You can do that via the Android SDK Manager.


Add the following to your android/settings.gradle file:

include ':react-native-navigation'
project(':react-native-navigation').projectDir = new File(rootProject.projectDir, '../node_modules/react-native-navigation/lib/android/app/')

Gradle Wrapper Properties

In your android/gradle/wrapper/gradle-wrapper.properties file, update Gradle's distributionUrl to use version 4.4 if it's not already using it:

distributionUrl=https\://services.gradle.org/distributions/gradle-4.4-all.zip

build.gradle

Next, in your android/build.gradle file, add mavenLocal() and mavenCentral() under buildscript -> repositories:

buildscript {
    repositories {
        google()
        jcenter()
        // add these:
        mavenLocal()
        mavenCentral()
    }
}

Next, update the classpath under the buildscript -> dependencies to point out to the Gradle version that we need:

buildscript {
    repositories {
        ...
    }
    dependencies {
        classpath ''
    }
}

Under allprojects -> repositories, add mavenCentral() and JitPack. This allows us to pull the data from React Native Navigation’s JitPack repository:

allprojects {
    repositories {
        mavenLocal()
        google()
        jcenter()
        mavenCentral() // add this
        maven { url 'https://jitpack.io' } // add this
    }
}

Next, add the global config for setting the build tools and SDK versions for Android:

allprojects {
    ...
}

ext {
    buildToolsVersion = "27.0.3"
    minSdkVersion = 19
    compileSdkVersion = 26
    targetSdkVersion = 26
    supportLibVersion = "26.1.0"
}

Lastly, we’d still want to keep the default react-native run-android command when compiling the app, so we have to set Gradle to ignore other flavors of React Native Navigation except the one we’re currently using (reactNative57_5). Ignoring them ensures that we only compile the specific version we’re depending on:

ext {
    ...
}

subprojects { subproject ->
    afterEvaluate {
        if ((subproject.plugins.hasPlugin('android') || subproject.plugins.hasPlugin('android-library'))) {
            android {
                variantFilter { variant ->
                    def names = variant.flavors*.name
                    if (names.contains("reactNative51") || names.contains("reactNative55") || names.contains("reactNative56") || names.contains("reactNative57")) {
                        setIgnore(true)
                    }
                }
            }
        }
    }
}

Note: there are four other flavors of RNN that currently exist. These are the ones we’re ignoring above:

- reactNative51
- reactNative55
- reactNative56
- reactNative57

android/app/build.gradle

On your android/app/build.gradle file, under android -> compileOptions, make sure that the source and target compatibility version is 1.8:

android {
    defaultConfig {
        ...
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

Then, in your dependencies, include react-native-navigation as a dependency:

dependencies {
    implementation fileTree(dir: "libs", include: ["*.jar"])
    implementation "${rootProject.ext.supportLibVersion}"
    implementation "com.facebook.react:react-native:+"
    implementation project(':react-native-navigation') // add this
}

Lastly, under android -> defaultConfig, set the missingDimensionStrategy to reactNative57_5. This is the version of RNN that’s compatible with React Native 0.57.8:

defaultConfig {
    applicationId "com.rnnavigation"
    minSdkVersion rootProject.ext.minSdkVersion
    targetSdkVersion rootProject.ext.targetSdkVersion
    missingDimensionStrategy "RNN.reactNativeVersion", "reactNative57_5" // add this
    versionCode 1
    versionName "1.0"
    ndk {
        abiFilters "armeabi-v7a", "x86"
    }
}


How TypeScript Makes You a Better JavaScript Developer

Nov 12, 2019



What do Airbnb, Google, Lyft and Asana have in common? They've all migrated several codebases to TypeScript.

Whether it's eating healthier, exercising, or sleeping more, we humans love self-improvement. The same applies to our careers. If someone shared tips for improving as a programmer, your ears would perk up.

In this article, the goal is to be that someone. We know TypeScript will make you a better JavaScript developer for several reasons. You'll feel confident when writing code. Fewer errors will appear in your production code. It will be easier to refactor code. You'll write fewer tests (yay!). And overall, you'll have a better coding experience in your editor.

What Even Is TypeScript?

TypeScript is a compiled language. You write TypeScript and it compiles to JavaScript. Essentially, you're writing JavaScript, but with a type system. JavaScript developers should have a seamless transition because the languages are the same, except for a few quirks.

Here's a basic example of a function in both JavaScript and TypeScript:

// JavaScript
function helloFromSitePoint(name) {
  return `Hello, ${name} from SitePoint!`
}

// TypeScript
function helloFromSitePoint(name: string) {
  return `Hello, ${name} from SitePoint!`
}

Notice how the two are almost identical. The difference is the type annotation on the "name" parameter in TypeScript. This tells the compiler, "Hey, make sure when someone calls this function, they only pass in a string." We won't go into much depth, but this example should illustrate the bare minimum of TypeScript.

How Will TypeScript Make Me Better?

TypeScript will improve your skills as a JavaScript developer by:

giving you more confidence, catching errors before they hit production, making it easier to refactor code, saving you time from writing tests, providing you with a better coding experience.

Let's explore each of these a bit deeper.

The post How TypeScript Makes You a Better JavaScript Developer appeared first on SitePoint.

Face Detection and Recognition with Keras

Nov 7, 2019


Face Detection and Recognition with Keras

If you're a regular user of Google Photos, you may have noticed how the application automatically extracts and groups faces of people from the photos that you back up to the cloud.

Face Recognition in the Google Photos web application

A photo application such as Google's achieves this through the detection of faces of humans (and pets too!) in your photos and by then grouping similar faces together. Detection and then classification of faces in images is a common task in deep learning with neural networks.

In the first step of this tutorial, we'll use a pre-trained MTCNN model in Keras to detect faces in images. Once we've extracted the faces from an image, we'll compute a similarity score between these faces to find if they belong to the same person.


Before you start with detecting and recognizing faces, you need to set up your development environment. First, you need to "read" images through Python before doing any processing on them. We'll use the plotting library matplotlib to read and manipulate images. Install the latest version through the installer pip:

pip3 install matplotlib

To use any implementation of a CNN algorithm, you need to install keras. Download and install the latest version using the command below:

pip3 install keras

The algorithm that we'll use for face detection is MTCNN (Multi-task Cascaded Convolutional Networks), based on the paper Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks (Zhang et al., 2016). An implementation of the MTCNN algorithm for TensorFlow in Python 3.4 is available as a package. Run the following command to install the package through pip:

pip3 install mtcnn

To compare faces after extracting them from images, we'll use the VGGFace2 algorithm developed by the Visual Geometry Group at the University of Oxford. A TensorFlow-based Keras implementation of the VGG algorithm is available as a package for you to install:

pip3 install keras_vggface

While you may feel the need to build and train your own model, you'd need a huge training dataset and vast processing power. Since this tutorial focuses on the utility of these models, it uses existing, trained models by experts in the field.

Now that you've successfully installed the prerequisites, let's jump right into the tutorial!

Step 1: Face Detection with the MTCNN Model

The objectives in this step are as follows:

retrieve images hosted externally to a local server
read images through matplotlib's imread() function
detect and explore faces through the MTCNN algorithm
extract faces from an image.

1.1 Store External Images

You may often be doing an analysis from images hosted on external servers. For this example, we'll use two images of Lee Iacocca, the father of the Mustang, hosted on the BBC and The Detroit News sites.

To temporarily store the images locally for our analysis, we'll retrieve each from its URL and write it to a local file. Let's define a function store_image for this purpose:

import urllib.request

def store_image(url, local_file_name):
    with urllib.request.urlopen(url) as resource:
        with open(local_file_name, 'wb') as f:
            f.write(resource.read())

You can now simply call the function with the URL and the local file in which you'd like to store the image:

store_image('', 'iacocca_1.jpg') store_image('', 'iacocca_2.jpg')

After successfully retrieving the images, let's detect faces in them.

1.2 Detect Faces in an Image

For this purpose, we'll make two imports — matplotlib for reading images, and mtcnn for detecting faces within the images:

from matplotlib import pyplot as plt
from mtcnn.mtcnn import MTCNN

Use the imread() function to read an image:

image = plt.imread('iacocca_1.jpg')

Next, initialize an MTCNN() object into the detector variable and use the .detect_faces() method to detect the faces in an image. Let's see what it returns:

detector = MTCNN()
faces = detector.detect_faces(image)

for face in faces:
    print(face)

For every face, a Python dictionary is returned, which contains three keys. The box key contains the boundary of the face within the image. It has four values: x- and y- coordinates of the top left vertex, width, and height of the rectangle containing the face. The other keys are confidence and keypoints. The keypoints key contains a dictionary containing the features of a face that were detected, along with their coordinates:

{'box': [160, 40, 35, 44],
 'confidence': 0.9999798536300659,
 'keypoints': {'left_eye': (172, 57), 'right_eye': (188, 57), 'nose': (182, 64), 'mouth_left': (173, 73), 'mouth_right': (187, 73)}}

1.3 Highlight Faces in an Image

Now that we've successfully detected a face, let's draw a rectangle over it to highlight the face within the image to verify if the detection was correct.

To draw a rectangle, import the Rectangle object from matplotlib.patches:

from matplotlib.patches import Rectangle

Let's define a function highlight_faces to first display the image and then draw rectangles over faces that were detected. First, read the image through imread() and plot it through imshow(). For each face that was detected, draw a rectangle using the Rectangle() class.

Finally, display the image and the rectangles using the .show() method. If you're using Jupyter notebooks, you may use the %matplotlib inline magic command to show plots inline:

def highlight_faces(image_path, faces):
    # display image
    image = plt.imread(image_path)
    plt.imshow(image)

    ax = plt.gca()

    # for each face, draw a rectangle based on coordinates
    for face in faces:
        x, y, width, height = face['box']
        face_border = Rectangle((x, y), width, height,
                                fill=False, color='red')
        ax.add_patch(face_border)
    plt.show()

Let's now display the image and the detected face using the highlight_faces() function:

highlight_faces('iacocca_1.jpg', faces)

Detected face in an image of Lee Iacocca. Source: BBC

Let's display the second image and the face(s) detected in it:

image = plt.imread('iacocca_2.jpg')
faces = detector.detect_faces(image)
highlight_faces('iacocca_2.jpg', faces)

The Detroit News

In these two images, you can see that the MTCNN algorithm correctly detects faces. Let's now extract this face from the image to perform further analysis on it.

1.4 Extract Face for Further Analysis

At this point, you know the coordinates of the faces from the detector. Extracting the faces is a fairly easy task using list indices. However, the VGGFace2 algorithm that we use needs the faces to be resized to 224 x 224 pixels. We'll use the PIL library to resize the images.

The function extract_face_from_image() extracts all faces from an image:

from numpy import asarray
from PIL import Image

def extract_face_from_image(image_path, required_size=(224, 224)):
    # load image and detect faces
    image = plt.imread(image_path)
    detector = MTCNN()
    faces = detector.detect_faces(image)

    face_images = []

    for face in faces:
        # extract the bounding box from the requested face
        x1, y1, width, height = face['box']
        x2, y2 = x1 + width, y1 + height

        # extract the face
        face_boundary = image[y1:y2, x1:x2]

        # resize pixels to the model size
        face_image = Image.fromarray(face_boundary)
        face_image = face_image.resize(required_size)
        face_array = asarray(face_image)
        face_images.append(face_array)

    return face_images

extracted_face = extract_face_from_image('iacocca_1.jpg')

# Display the first face from the extracted faces
plt.imshow(extracted_face[0])

Here is how the extracted face looks from the first image.

Extracted and resized face from first image

The post Face Detection and Recognition with Keras appeared first on SitePoint.

React Native End-to-end Testing and Automation with Detox

Nov 6, 2019


Introducing Detox, a React Native End-to-end Testing and Automation Framework

Detox is an end-to-end testing and automation framework that runs on a device or a simulator, just like an actual end user.

Software development demands fast responses to user and/or market needs. This fast development cycle can result (sooner or later) in parts of a project being broken, especially as the project grows large. Developers get overwhelmed with all the technical complexities of the project, and even the business people start to find it hard to keep track of all the scenarios the product caters for.

In this scenario, there’s a need for software to keep on top of the project and allow us to deploy with confidence. But why end-to-end testing? Aren’t unit testing and integration testing enough? And why bother with the complexity that comes with end-to-end testing?

First of all, the complexity issue has been tackled by most of the end-to-end frameworks, to the extent that some tools (whether free, paid or limited) allow us to record the test as a user, then replay it and generate the necessary code. Of course, that doesn’t cover the full range of scenarios that you’d be able to address programmatically, but it’s still a very handy feature.

Want to learn React Native from the ground up? This article is an extract from our Premium library. Get an entire collection of React Native books covering fundamentals, projects, tips and tools & more with SitePoint Premium. Join now for just $9/month.

End-to-end Integration and Unit Testing

End-to-end testing versus integration testing versus unit testing: I always find that the word “versus” drives people to take camps, as if it’s a war between good and evil, instead of learning from each other and understanding the why instead of the how. The examples are countless: Angular versus React, React versus Vue, and even React versus Angular versus Vue versus Svelte. Each camp trash-talks the other.

jQuery made me a better developer by taking advantage of the facade pattern $('') to tame the wild DOM beast and keep my mind on the task at hand. Angular made me a better developer by taking advantage of componentizing the reusable parts into directives that can be composed (v1). React made me a better developer by taking advantage of functional programming, immutability, identity reference comparison, and the level of composability that I don’t find in other frameworks. Vue made me a better developer by taking advantage of reactive programming and the push model. I could go on and on, but I’m just trying to demonstrate the point that we need to concentrate more on the why: why this tool was created in the first place, what problems it solves, and whether there are other ways of solving the same problems.

As You Go Up, You Gain More Confidence

end-to-end testing graph that demonstrates the benefit of end-to-end testing and the confidence it brings

As you move further along the spectrum of simulating the user journey, you have to do more work to simulate the user's interaction with the product. On the other hand, you get the most confidence, because you're testing the real product the user interacts with. So you catch all the issues, whether it's a styling issue that could render a whole section or a whole interaction process invisible or non-interactive, a content issue, a UI issue, an API issue, a server issue, or a database issue. You get all of this covered, which gives you the most confidence.

Why Detox?

We discussed the benefit of end-to-end testing to begin with and its value in providing the most confidence when deploying new features or fixing issues. But why Detox in particular? At the time of writing, it’s the most popular library for end-to-end testing in React Native and the one that has the most active community. On top of that, it’s the one React Native recommends in its documentation.

The Detox testing philosophy is “gray-box testing”: testing where the framework knows about the internals of the product it’s testing. In other words, it knows it’s running in React Native, knows how to start up the application as a child of the Detox process, and knows how to reload it if needed after each test. So each test result is independent of the others.

Prerequisites

macOS High Sierra 10.13 or above

Xcode 10.1 or above


/usr/bin/ruby -e "$(curl -fsSL"

Node 8.3.0 or above:

brew update && brew install node

Apple Simulator Utilities: brew tap wix/brew and brew install applesimutils

Detox CLI 10.0.7 or above:

npm install -g detox-cli

See the Result in Action

First, let’s clone a very interesting open-source React Native project for the sake of learning, then add Detox to it:

git clone
cd movie-swiper-detox-testing
npm install
react-native run-ios

Create an account on The Movie DB website to be able to test all the application scenarios. Then add your username and password in the .env file, replacing usernamePlaceholder and passwordPlaceholder respectively:

isTesting=true
username=usernamePlaceholder
password=passwordPlaceholder

After that, you can now run the tests:

detox test

Note that I had to fork this repo from the original one as there were a lot of breaking changes between detox-cli, detox, and the project libraries. Use the following steps as a basis for what to do:

Migrate it completely to the latest React Native project.
Update all the libraries to fix issues faced by Detox when testing.
Toggle animations and infinite timers if the environment is testing.
Add the test suite package.

Setup for New Projects

Add Detox to Our Dependencies

Go to your project’s root directory and add Detox:

npm install detox --save-dev

Configure Detox

Open the package.json file and add the following right after the project name config. Be sure to replace movieSwiper in the iOS config with the name of your app. Here we’re telling Detox where to find the binary app and the command to build it. (This is optional. We can always execute react-native run-ios instead.) Also choose which type of simulator: ios.simulator, ios.none, android.emulator, or android.attached. And choose which device to test on:

{
  "name": "movie-swiper-detox-testing",
  // add these:
  "detox": {
    "configurations": {
      "ios.sim.debug": {
        "binaryPath": "ios/build/movieSwiper/Build/Products/Debug-iphonesimulator/",
        "build": "xcodebuild -project ios/movieSwiper.xcodeproj -scheme movieSwiper -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build",
        "type": "ios.simulator",
        "name": "iPhone 7 Plus"
      }
    }
  }
}

Here’s a breakdown of what the config above does:

Execute react-native run-ios to create the binary app.
Search for the binary app at the root of the project: find . -name "*.app".
Put the result in the build directory.

Before firing up the test suite, make sure the device name you specified is available (for example, iPhone 7). You can do that from the terminal by executing the following:

xcrun simctl list

Here’s what it looks like:


Now that we've added Detox to our project and told it which simulator to start the application with, we need a test runner to manage the assertions and the reporting—whether it's on the terminal or otherwise.

Detox supports both Jest and Mocha. We’ll go with Jest, as it has a bigger community and a bigger feature set. In addition, it supports parallel test execution, which can be handy to speed up the end-to-end tests as they grow in number.

Adding Jest to Dev Dependencies

Execute the following to install Jest:

npm install jest jest-cli --save-dev
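With Jest in place, a first spec gives a feel for the Detox API. The following is only a sketch, not code from the project: the 'welcome' testID is a hypothetical ID you'd set on a root view yourself, and the file runs via detox test rather than Node directly:

```javascript
// e2e/firstTest.spec.js
describe('Movie Swiper', () => {
  beforeEach(async () => {
    // restart React Native between tests so each result is independent
    await device.reloadReactNative();
  });

  it('should show the welcome screen', async () => {
    // 'welcome' is a hypothetical testID on the app's root view
    await expect(element(by.id('welcome'))).toBeVisible();
  });
});
```

Here device, element, by, and expect are globals that Detox injects into the test environment, which is why the file has no imports of its own.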

The post React Native End-to-end Testing and Automation with Detox appeared first on SitePoint.

How to Build Your First Amazon Alexa Skill

Nov 5, 2019


How to Build Your First Amazon Alexa Skill

Out of the box, Alexa supports a number of built-in skills, such as adding items to your shopping list or requesting a song. However, developers can build new custom skills by using the Alexa Skill Kit (ASK).

The ASK, a collection of APIs and tools, handles the hard work related to voice interfaces, including speech recognition, text-to-speech encoding, and natural language processing. ASK helps developers build skills quickly and easily.

In short, the sole reason that Alexa can understand a user’s voice commands is that it has skills defined. Every Alexa skill is a piece of software designed to understand voice commands. Also, each Alexa skill has its own logic defined that creates an appropriate response for the voice command. To give you an idea of some existing Alexa skills, they include:

ordering pizza at Domino's Pizza
calling for an Uber
telling you your horoscope

So, as mentioned, we can develop our own custom skills fitted to our needs with the Alexa Skill Kit, a collection of APIs and tools designed for this purpose. The ASK includes tools for speech recognition, text-to-speech encoding, and natural language processing, and should get any developer started quickly with developing their own custom skill.

In this article, you’ll learn how to create a basic "get a fact" Alexa skill. In short, we can ask Alexa to present us with a random cat fact. The complete code for completing our task can be found on GitHub. Before we get started, let's make sure we understand the Alexa skill terminology.

Mastering Alexa Skill Terminology

First, let's learn how a user can interact with a custom skill. This will be important for understanding the different concepts related to skills.

In order to activate a particular skill, the user has to call Alexa and ask to open a skill. For example: "Alexa, open cat fact". By doing this, we're calling the invocation name of the skill. Basically, the invocation name can be seen as the name of the application.

Now that we've started the right skill, we have access to the voice intents/commands the skill understands. As we want to keep things simple, we define a "Get Cat Fact" intent. However, we need to provide sample sentences to trigger the intent. An intent can be triggered by many example sentences, also called utterances. For example, a user might say "Give a fact". Therefore, we define the following example sentences:

"Tell a fact"
"Give a cat fact"
"Give a fact"

It's even possible to combine the invocation name with an intent like this: "Alexa, ask Cat Fact to give a fact".
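Behind the console UI, this all maps onto a JSON interaction model. As a trimmed illustration (the field names follow the ASK schema, but the intent name GetCatFactIntent is our own choice; the console generates the real file for your skill):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "cat fact",
      "intents": [
        {
          "name": "GetCatFactIntent",
          "samples": [
            "tell a fact",
            "give a cat fact",
            "give a fact"
          ]
        }
      ]
    }
  }
}
```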

Now that we know the difference between an invocation name and intent, let's move on to creating your first Alexa skill.

Creating an Amazon Developer Account

To get started, we need an Amazon Developer Account. If you have one, you can skip this section.

Signing up for an Amazon Developer account is a three-step process. Amazon requires some personal information, acceptance of the terms of service, and a payment method. The advantage of signing up for an Amazon Developer account is that you get access to a plethora of other Amazon services. Once the signup has been completed successfully, you'll see the Amazon Developer dashboard.

Log yourself in to the dashboard and click on the Developer Console button in the top-right corner.

Open Developer Console

Next up, we want to open the Alexa Skills Kit.

Open Alexa Skills Kit

If you were unable to open the Alexa Skills Kit, use this link.

In the following section, we'll create our actual skill.

Creating Our First Custom Alexa Skill

Okay, we're set to create our first custom Alexa skill. Click the blue button Create Skill to open up the menu for creating a new skill.

Create Skill Button

Firstly, it will prompt us for the name of our skill. As you already know, we want random cat facts and therefore call the skill custom cat fact (we can't use cat fact as that's a built-in skill for Alexa devices). Next, it prompts us to pick a model for your skill. We can choose between some predefined models or go for a custom model that gives us full flexibility. As we don't want to be dealing with code we don't need, we go for the Custom option.

Note: If you choose a predefined skill, you get a list of interaction models and example sentences (utterances). However, even the custom skill is equipped with the most basic intents like Cancel, Help, NavigateHome, and Stop.

Pick Skill name

Next, we need to pick a way to host our skill. Again, we don't want to overcomplicate things, so we pick the Alexa-Hosted (Node.js) option. This means we don't have to run a back end ourselves, which would take some effort to make "Alexa compliant": you'd have to format responses according to the Amazon Alexa standards for a device to understand them. The Alexa-hosted option will:

host skills in your account up to the AWS Free Tier limits and get you started with a Node.js template. You will gain access to an AWS Lambda endpoint, 5 GB of media storage with 15 GB of monthly data transfer, and a table for session persistence.

Pick host method

Okay, now that all settings are in place, you can click the Create Skill button in the top-right corner of the screen. This button will generate the actual skill in our Amazon Developer account.

Modifying Your First Alexa Skill

Now if you navigate to the Alexa Developer Console, you'll find your skill listed there. Click the edit button to start modifying the skill.

Modify Alexa Skill

Next, Amazon will display the build tab for the Cat Fact skill. On the left-hand side, you'll find a list of intents that are defined for the skill. As said before, by default the Alexa Skills Kit generates a Cancel, Stop, Help, and NavigateHome intent. The first three are helpful for a user that wants to quit the skill or doesn't know how to use it. The last one, NavigateHome, is only used for complex skills that involve multiple steps.

Interaction model

Step 1: Verify Invocation Name

First of all, let's verify if the invocation name for the skill is correct. The name should say "custom cat fact".

In case you change the name, make sure to hit the Save Model button on top of the page.

Invocation name

The post How to Build Your First Amazon Alexa Skill appeared first on SitePoint.

How to Build a Web App with GraphQL and React

Nov 1, 2019


In this tutorial, we'll learn to build a web application with React and GraphQL. We'll consume the API available from graphql-pokemon, which allows you to get information about Pokémon.

GraphQL is a query language for APIs and a runtime for fulfilling those queries created by Facebook. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

In this tutorial, we'll only learn the front end of a GraphQL application that makes use of Apollo for fetching data from a ready GraphQL API hosted on the web.

Let's get started with the prerequisites!


There are a few prerequisites for this tutorial:

recent versions of Node.js and npm installed on your system
knowledge of JavaScript/ES6
familiarity with React

If you don't have Node and npm installed on your development machine, you can simply download the binaries for your system from the official website. You can also use NVM, a POSIX-compliant bash script to manage multiple active Node.js versions.

Installing create-react-app

Let's install the create-react-app tool that allows you to quickly initialize and work with React projects.

Open a new terminal and run the following command:

npm install -g create-react-app

Note: You may need to use sudo before your command in Linux and macOS or use a command prompt with administrator rights if you get EACCESS errors when installing the package globally on your machine. You can also simply fix your npm permissions.

At the time of writing, this installs create-react-app v3.1.1.

Creating a React Project

Now we're ready to create our React project.

Go back to your terminal and run the following command:

create-react-app react-pokemon

Next, navigate into your project's folder and start the local development server:

cd react-pokemon npm start

Go to http://localhost:3000 in your web browser to see your app up and running.

This is a screenshot of the app at this point:

The current state of our app

Installing Apollo Client

Apollo Client is a complete data management solution that's commonly used with React, but can be used with any other library or framework.

Apollo provides intelligent caching that enables it to be a single source of truth for the local and remote data in your application.

You'll need to install the following packages in your React project to work with Apollo:

graphql: the JavaScript reference implementation for GraphQL
apollo-client: a fully-featured caching GraphQL client with integrations for React, Angular, and more
apollo-cache-inmemory: the recommended cache implementation for Apollo Client 2.0
apollo-link-http: the most common Apollo Link, a system of modular components for GraphQL networking
react-apollo: this package allows you to fetch data from your GraphQL server and use it in building complex and reactive UIs using the React framework
graphql-tag: this package provides helpful utilities for parsing GraphQL queries, such as the gql tag.

Open a new terminal and navigate to your project's folder, then run the following commands:

npm install graphql --save
npm install apollo-client --save
npm install apollo-cache-inmemory --save
npm install apollo-link-http --save
npm install react-apollo --save
npm install graphql-tag --save

Now that we've installed the necessary packages, we need to create an instance of ApolloClient.

Open the src/index.js file and add the following code:

import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';

const cache = new InMemoryCache();
const link = new HttpLink({
  uri: ''
})

const client = new ApolloClient({
  cache,
  link
})

We first create an instance of InMemoryCache, then an instance of HttpLink and we pass in our GraphQL API URI. Next, we create an instance of ApolloClient and we provide the cache and link instances.

Connect the Apollo Client to React Components

After creating the instance of ApolloClient, we need to connect it to our React component(s).

We'll use the new Apollo hooks, which allow us to easily bind GraphQL operations to our UI.

We can connect Apollo Client to our React app by simply wrapping the root App component with the ApolloProvider component — which is exported from the @apollo/react-hooks package — and passing the client instance via the client prop.

The ApolloProvider component is similar to React's Context provider. It wraps your React app and places the client in the context, which enables you to access it from anywhere in your app.

Now let's import the ApolloProvider component in our src/index.js file and wrap the App component as follows:
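The excerpt ends before the code itself; as a sketch of that wrapping (assuming the create-react-app defaults of an App component and a root DOM node, with client being the ApolloClient instance created above):

```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import { ApolloProvider } from '@apollo/react-hooks';
import App from './App';

// `client` is the ApolloClient instance created earlier in src/index.js

ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);
```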

The post How to Build a Web App with GraphQL and React appeared first on SitePoint.

How to Build Your First Discord Bot with Node.js

Oct 30, 2019


Nowadays, bots are being used for automating various tasks. Since the release of Amazon's Alexa devices, the hype surrounding automation bots has only grown. Besides Alexa, other communication tools like Discord and Telegram offer APIs to develop custom bots.

This article will solely focus on creating your first bot with the exposed Discord API. Maybe the most well-known Discord bot is the Music Bot. The music bot lets you type a song name and the bot will attach a new user to your channel who plays the requested song. It’s a commonly used bot among younger people on gaming or streaming servers.

Let’s get started with creating a custom Discord bot.

Prerequisites

Node.js v10 or higher installed (basic knowledge)
a Discord account and Discord client
basic knowledge of using a terminal

Step 1: Setup Test Server

First of all, we need a test server on which we can later test our Discord bot. We can create a new server by clicking the plus icon in the left bottom corner.

click create server

A pop-up will be displayed that asks you if you want to join a server or create a new one. Of course, we want to create a new server.

select create server

Next, we need to input the name for our server. To keep things simple, I've named the server discord_playground. If you want, you can change the server location depending on where you're located to get a better ping.

server name

If everything went well, you should see your newly created server.

new server

Step 2: Generating Auth Token

When we want to control our bot via code, we need to register the bot first under our Discord account.

To register the bot, go to the Discord Developers Portal and log in with your account.

After logging in, you should be able to see the dashboard. Let's create a new application by clicking the New Application button.

developer dashboard

Next, you'll see a pop-up that asks you to input a name for your application. Let's call our bot my-greeter-bot. By clicking the Create button, Discord will create an API application.

create application

When the application has been created, you'll see the overview of the newly created my-greeter-bot application. You'll see information like a client ID and client secret. This secret will be used later as the authorization token.

overview greeter bot

Now, click on the Bot menu option in the Settings menu. Discord will build our my-greeter-bot application and add a bot user to it.

add bot

When the bot has been built, you get an overview of your custom bot. Take a look at the Token section. Copy this authorization token and write it down somewhere, as we'll need it later to connect to our bot user.

bot tab overview
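The excerpt stops at the token step. As a hedged sketch of how that token is used to connect (assuming the discord.js package; reading the token from an environment variable rather than hardcoding it is our own choice):

```javascript
const Discord = require('discord.js');

const client = new Discord.Client();

client.once('ready', () => {
  // fires once the bot has logged in successfully
  console.log(`Logged in as ${client.user.tag}`);
});

// the auth token copied from the Bot tab, kept out of source control
client.login(process.env.DISCORD_TOKEN);
```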

The post How to Build Your First Discord Bot with Node.js appeared first on SitePoint.

What Is Functional Programming?

Oct 29, 2019


As a programmer, you probably want to write elegant, maintainable, scalable, predictable code. The principles of functional programming, or FP, can significantly aid in these goals.

Functional programming is a paradigm, or style, that values immutability, first-class functions, referential transparency, and pure functions. If none of those words makes sense to you, don't worry! We're going to break down all this terminology in this article.

Functional programming evolved from lambda calculus, a mathematical system built around function abstraction and generalization. As a result, a lot of functional programming languages look very mathematical. Good news, though: you don't need to use a functional programming language to bring functional programming principles to your code. In this post, we'll use JavaScript, which has a lot of features that make it amenable to functional programming while not being tied to that paradigm.
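As a quick taste before the breakdown, here's a small JavaScript sketch of two of those terms, pure functions and immutability (the function names are ours, purely illustrative):

```javascript
// Impure: the result depends on (and changes) state outside the function
let counter = 0;
function impureIncrement() {
  counter += 1;
  return counter;
}

// Pure: the same inputs always produce the same output, with no side effects
function add(a, b) {
  return a + b;
}

// Immutability: map() returns a new array and leaves the original untouched
const nums = [1, 2, 3];
const doubled = nums.map(n => n * 2);

console.log(add(2, 3)); // 5
console.log(doubled);   // [ 2, 4, 6 ]
console.log(nums);      // [ 1, 2, 3 ]
```

Calling add(2, 3) a thousand times always yields 5, which is exactly what makes pure functions easy to test and reason about.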

The Core Principles of Functional Programming

Now that we've discussed what functional programming is, let's talk about the core principles behind FP.

The post What Is Functional Programming? appeared first on SitePoint.

6 Tools for Debugging React Native

Oct 28, 2019


Debugging is an essential part of software development. It’s through debugging that we know what’s wrong and what’s right, what works and what doesn’t. Debugging provides the opportunity to assess our code and fix problems before they’re pushed to production.

debugging featured image

In the React Native world, debugging may be done in different ways and with different tools, since React Native is composed of different environments (iOS and Android), which means there’s an assortment of problems and a variety of tools needed for debugging.

Thanks to the large number of contributors to the React Native ecosystem, many debugging tools are available. In this brief guide, we’ll explore the most commonly used of them, starting with the Developer Menu.

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. — Brian W. Kernighan

The Developer Menu

the developer menu

The in-app developer menu is your first gate for debugging React Native. It offers many options we can use to do different things. Let’s break down each one.

- Reload: reloads the app.
- Debug JS Remotely: opens a channel to a JavaScript debugger.
- Enable Live Reload: makes the app reload automatically whenever you save a file.
- Enable Hot Reloading: watches for file changes and injects the updated code without a full reload.
- Toggle Inspector: toggles an inspector interface that allows us to inspect any UI element on the screen and its properties, and presents an interface with other tabs, such as networking, which shows us the HTTP calls, and a performance tab.

The post 6 Tools for Debugging React Native appeared first on SitePoint.

Why Your Agency Should Offer Managed Cloud Hosting to Clients

Oct 28, 2019


This article was created in partnership with Cloudways. Thank you for supporting the partners who make SitePoint possible.

When it comes to end-to-end services, digital agencies offer an impressive range. From requirement analysis to post-deployment maintenance, these agencies do everything to make sure that their clients are able to fully leverage their projects for maximum business efficiency.

Against this backdrop, many agencies (particularly those that deal with web-based projects) also offer hosting as part of their services to their customers. While small and up-and-coming digital agencies might not have hosting on their service brochure, mid-tier and top-shelf agencies see hosting as an integral service offering to their clients.

Setting up Hosting for Customers

For a web-based project, web hosting is an essential component that determines the success (or failure) of the project. Since the agency has developed the project, many clients trust agency-managed hosting for their project.

High-performance applications (online stores and CRM in particular) demand a hosting solution that’s able to keep pace with the high request volume and a large number of concurrent connections. Clients with these projects can’t compromise on the post-deployment performance of the applications. As such, agencies prefer an in-house hosting setup that caters to the specific requirements of the projects.

Agencies Benefit From In-house Hosting

Before going into what benefits agencies get from an in-house hosting setup, it’s important to understand the major requirements of high-performance projects. Without going too much into the details, in-house hosting solutions are set up to make sure that custom-built projects continue to perform on the following parameters:

- the number of visitors per hour/day/month
- the number of simultaneous visitors
- the maximum number of connections allowed
- the number of simultaneous requests/orders
- the size and complexity of the product catalog (number of products, product categories, attributes)
- the content requirements and traffic on content assets such as blogs
- the volume of search queries on the site
- the size and connections on the database

With in-house hosting solutions, agencies (and their clients) get a whole range of benefits such as those outlined below.

Custom Hardware and Software

Hardware requirements for custom, high-performance projects generally include three components: CPU, RAM and Disk Space. Since each project has custom requirements that are often not available in off-the-shelf hosting solutions available in the market, agencies opt for setting up in-house hardware platforms for their customers.

Custom hardware setups usually cost more than the conventional, commercially available hosting hardware architecture. The cost of setting up and maintaining the hosting architecture is usually the responsibility of the dev agency, which usually bills the client for these services.

Another related (and in my opinion, more important) requirement of these projects is a custom environment that comprises an OS layer and a facilitation layer made of servers and caches. A custom environment allows agencies to build their projects without worrying about conflicts with the OS and server software required to execute the codebase. Thanks to in-house hosting, digital agencies can completely customize the OS and server layer to the project specifications.

End-to-End Management of Project Hosting

Project requirements change and clients often revise their requirements and scope. These changes also impact the hosting requirements and specifications. Since the hosting process is being managed in-house, the agency can take proactive actions to improve hosting setup specifications and ensure continued performance for the application.

Passive Income Stream

In almost all cases, agency-managed hosting solutions are built and maintained on the client’s dollars. The agency proposes hosting setup specifications and sets it up once the client pays for it. Once the setup is active, the client pays for the maintenance and upkeep of the hosting solution. This is a passive income channel that is often an important supplement to agency revenues.

Challenges of Agency-Managed Hosting

Despite the benefits, managing an in-house hosting setup can prove to be a drag on the agency operations. In particular, agency-managed hosting causes the following challenges for the business processes.

Hosting Architecture Requires Continuous Attention

Since this is an in-house managed hosting solution, it’s obvious that the agency is responsible for keeping both the hardware and software layers operational. While the hardware layer (the physical server machines and the networking equipment) has a lower failure rate, it’s important to note that the software components of the hosting solution require detailed attention and upkeep.

Both hardware and software vendors regularly release patches that fix bugs and enhance product functionality. In many cases, these patches are mission-critical and essential for the continued performance of the project’s hosting. In in-house managed hosting, this is the responsibility of a dedicated team that performs no other function.

The Constant Need for Security

Web servers are the prime target of cybercriminals because of the wealth of information and user data on them. The problem with server security is that it's a full-time function that requires specialists on the team. The same goes for clients’ applications (CMSs such as WordPress are especially vulnerable) that could potentially open up security loopholes in the server and application security. Not many agencies can afford a dedicated infosec expert on the payroll. Thus, there's always the danger that clients’ applications can get hacked because the agency-managed hosting is unable to maintain the required security standards.

Sysadmins Prove to be an Overhead

Sysadmins are among the highest-paid professions in the ICT industry, and rightly so! They manage entire data centers and handle all aspects of hosting servers from provisioning to maintenance. The problem with sysadmins is the high recruitment and operational costs of these professionals. Thus, hiring a sysadmin to manage in-house hosting is a serious decision that's out of the budget of many dev agencies.

Deviation from the Core Business

Digital agencies are in the business of building applications and custom projects that create value for their clients. An in-house hosting solution requires competence that lies outside the normal scope of a dev agency. In addition, managing hosting solutions requires expenses that eat into profits without generating enough revenue to justify their inclusion in business offerings.

Shared Hosting is a False Start

The good news is that many agencies are aware of the issues with in-house, agency-managed hosting and have come to realize that this is not the ideal solution for meeting clients’ hosting expectations.

However, since clients’ requirements continue to grow and the need for hosting solutions for custom-developed apps is on the rise, a number of agencies have turned to shared hosting as an alternative to agency-managed in-house hosting solutions.

When opting for shared hosting solutions, agencies try to reduce the cost of hosting solutions while providing a comparable hosting solution to the clients.

Before going into the description of why shared hosting solutions are in fact counterproductive for dev agencies, it's important to understand how shared hosting solutions work.

Shared Hosting in a Nutshell

As the name implies, shared hosting is a solution where several websites/applications are hosted on a single physical server. This means that the physical resources (CPU, RAM, disk space and, in some cases, bandwidth) get shared among the websites hosted on the server.

While this is not a bad solution per se, it’s not the right fit for high-performance applications. These applications have minimum server resource requirements that often exceed the “quota” allocated by the shared hosting server.

Many digital agencies try to integrate shared hosting solutions in their customer-focused services by eliminating sysadmins from the equation and asking the developers to manage the hosting servers for the clients.

The post Why Your Agency Should Offer Managed Cloud Hosting to Clients appeared first on SitePoint.

How to Build a Tic Tac Toe Game with Svelte

Oct 24, 2019


How to Build a Tic Tac Toe Game with Svelte

Svelte is a next generation way of building user interfaces.

While frameworks like React, Vue and Angular do the bulk of their work in the browser, Svelte takes it to the next level: it does its work when you build the app, compiling your Svelte code to efficient vanilla JavaScript. So you get the best of both worlds. You write your code in Svelte, which makes it easy to read and reuse, with all the other benefits of using a framework, and you get a blazing-fast web app, since it compiles down to vanilla JavaScript with none of the runtime overhead of a JavaScript framework.

Svelte allows you to write less code. It also doesn’t use the concept of the Virtual DOM popularized by React. It instead surgically updates the DOM when the state of the app changes so the app starts fast and stays fast.


For this tutorial, you need a basic knowledge of HTML, CSS and JavaScript.

You must also have installed the latest version of Node.js.

We’ll also be using npx, which comes installed by default with Node.js.

Throughout this tutorial we’ll be using yarn. If you don’t have yarn already installed, install it from here.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node 12.10.0
npx 6.11.3
yarn 1.17.3

Getting Started with Svelte

In this tutorial, we’ll be building a Tic Tac Toe game in Svelte. By the end, you’ll be able to get up and running quickly with Svelte and get started in building your own apps in Svelte.

To get started, we must scaffold our app using degit. degit is more or less the same as git clone, but much quicker. You can learn more about it here.

Go ahead and make a new project by typing the following in the terminal:

$ npx degit sveltejs/template tic-tac-toe-svelte

npx lets you use the degit command without installing it globally.

Before npx, we would have to do the two following steps to achieve the same result:

$ npm install --global degit
$ degit sveltejs/template tic-tac-toe-svelte

Thanks to npx, we don’t bloat our global namespace, and we always use the latest version of degit.

degit clones the repo into a tic-tac-toe-svelte folder.

Go ahead into the tic-tac-toe-svelte directory and install the dependencies by typing the following in the terminal:

$ cd tic-tac-toe-svelte
$ yarn

Now run the application by typing the following in the terminal:

$ yarn dev

Now open up the browser and go to http://localhost:5000 and you should see the following:

Svelte - Hello World

If you go into the src/ folder, you’ll see two files, App.svelte and main.js. main.js is the entry point of a Svelte app.

Open up the main.js and you should see the following:

import App from './App.svelte';

const app = new App({
  target: document.body,
  props: {
    name: 'world'
  }
});

export default app;

The above file imports App.svelte and instantiates it using a target element. It puts the component on the DOM’s document.body. It also passes name props to the App component. This prop will be accessed in App.svelte.

Components in Svelte are written using .svelte files, which contain HTML, CSS and JavaScript. This will look familiar if you've worked with Vue.

Now open up App.svelte and you should see the following:

<script>
  export let name;
</script>

<style>
  h1 {
    color: purple;
  }
</style>

<h1>Hello {name}!</h1>

Firstly, we have the script tag, inside which we have a named export called name. This corresponds to the name prop passed in main.js.

Then we have a style tag that lets us style all the elements in that particular file, which is scoped to that file only so there’s no issue of cascading.

Then, at the bottom, we have an h1 tag, inside which we have Hello {name}!. The name in curly brackets will be replaced by the actual value. This is called value interpolation. That’s why Hello world! is printed on the screen.

Basic Structure of a Svelte Component

All .svelte files will basically have the following structure:

<script>
  /* JavaScript logic */
</script>

<style>
  /* CSS styles */
</style>

<!-- HTML markup -->

The HTML markup will have some additional Svelte-specific syntax, but the rest is just plain HTML, CSS and JavaScript.

Making Tic Tac Toe in Svelte

Let’s get started with building our Tic Tac Toe game.

Replace main.js with the following:

import App from './App.svelte'

const app = new App({
  target: document.body,
})

export default app

We’ve basically removed the props property from the App component instantiation.

Now replace App.svelte with the following:

<script>
  const title = "Tic Tac Toe";
</script>

<svelte:head>
  <title>{title}</title>
</svelte:head>

<h1>{title}</h1>

Here, we initialize a constant variable title with a string Tic Tac Toe.

Then, in the markup below, we use a special Svelte syntax, svelte:head, to set the title property in the head tag.

This is basically similar to doing this:

<head>
  <title>Tic Tac Toe</title>
</head>

But the advantage of using the svelte:head syntax is that the title can be changed at runtime.

We then use the same title property in our h1 tag. It should now look like this:

Svelte - Tic Tac Toe

Now create two other files in the src/ directory named Board.svelte and Square.svelte.

Open Square.svelte and paste in the following:

<script>
  export let value;
</script>

<style>
  .square {
    flex: 1 0 25%;
    width: 50px;
    height: 70px;
    background-color: whitesmoke;
    border: 2px solid black;
    margin: 5px;
    padding: 5px;
    font-size: 20px;
    text-align: center;
  }

  .square:hover {
    border: 2px solid red;
  }
</style>

<button class="square">{value}</button>

Basically, we’re creating a button and styling it.

Now open up Board.svelte and paste the following:

<script>
  import Square from "./Square.svelte";

  let squares = [null, null, null, null, null, null, null, null, null];
</script>

<style>
  .board {
    display: flex;
    flex-wrap: wrap;
    width: 300px;
  }
</style>

<div class="board">
  {#each squares as square, i}
    <Square value={i} />
  {/each}
</div>

Here we’ve imported the Square component. We’ve also initialized the squares array, which will hold our X and O data and currently contains only null values.
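At this point the squares just render their indices; a natural next step is deciding the winner. A minimal sketch of such a helper (my own illustration, not part of the article's code, assuming each entry of squares is "X", "O" or null):

```javascript
// Hypothetical helper: given the 9-entry squares array, return "X", "O",
// or null if there's no winner yet.
function calculateWinner(squares) {
  const lines = [
    [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
    [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
    [0, 4, 8], [2, 4, 6]             // diagonals
  ];
  for (const [a, b, c] of lines) {
    if (squares[a] && squares[a] === squares[b] && squares[a] === squares[c]) {
      return squares[a];
    }
  }
  return null;
}
```

Each inner array lists the three board indices of a winning row, column or diagonal; the first fully matched line decides the game.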

The post How to Build a Tic Tac Toe Game with Svelte appeared first on SitePoint.

How to Install Docker on Windows 10 Home

Oct 22, 2019


How to Install Docker on Windows 10 Home

If you've ever tried to install Docker for Windows, you've probably come to realize that the installer won't run on Windows 10 Home. Only Windows Pro, Enterprise and Education are supported. Upgrading your Windows license is pricey, and also pointless, since you can still run Linux containers on Windows without relying on Hyper-V technology, a requirement for Docker for Windows.

If you plan on running Windows Containers, you'll need a specific version and build of Windows Server. Check out the Windows container version compatibility matrix for details.

99.999% of the time, you only need a Linux Container, since it supports software built using open-source and .NET technologies. In addition, Linux Containers can run on any distro and on popular CPU architectures, including x86_64, ARM and IBM.

In this tutorial, I'll show you how to quickly setup a Linux VM on Windows Home running Docker Engine with the help of Docker Machine. Here's a list of software you'll need to build and run Docker containers:

- Docker Machine: a CLI tool for installing Docker Engine on virtual hosts
- Docker Engine: runs on top of the Linux kernel; used for building and running containers
- Docker Client: a CLI tool for issuing commands to Docker Engine via its REST API
- Docker Compose: a tool for defining and running multi-container applications

I'll show how to perform the installation in the following environments:

- On Windows using Git Bash
- On Windows Subsystem for Linux 2 (running Ubuntu 18.04)

First, allow me to explain how the Docker installation will work on Windows.

How it Works

As you probably know, Docker requires a Linux kernel to run Linux Containers. For this to work on Windows, you'll need to set up a Linux virtual machine to run as guest in Windows 10 Home.

docker windows home

Setting up the Linux VM can be done manually. The easiest way is to use Docker Machine to do this work for you by running a single command. This Docker Linux VM can either run on your local system or on a remote server. The Docker client will use SSH to communicate with Docker Engine. Whenever you create and run images, the actual process happens within the VM, not on your host (Windows).

Let's dive into the next section to set up the environment needed to install Docker.

Initial Setup

You may or may not have the following applications installed on your system. I'll assume you don't. If you do, make sure to upgrade to the latest versions. I'm also assuming you're running the latest stable version of Windows. At the time of writing, I'm using Windows 10 Home version 1903. Let's start installing the following:

Install Git Bash for Windows. This will be our primary terminal for running Docker commands.

Install Chocolatey, a package manager for Windows. It will make the work of installing the rest of the programs easier.

Install VirtualBox and its extension pack. Alternatively, if you've finished installing Chocolatey, you can simply execute this command inside an elevated PowerShell terminal:

C:\ choco install virtualbox

If you'd like to try running Docker inside the WSL2 environment, you'll need to set up WSL2 first. You can follow this tutorial for step-by-step instructions.

Docker Engine Setup

Installing Docker Engine is quite simple. First we need to install Docker Machine.

Install Docker Machine by following instructions on this page. Alternatively, you can execute this command inside an elevated PowerShell terminal:

C:\ choco install docker-machine

Using Git Bash terminal, use Docker Machine to install Docker Engine. This will download a Linux image containing the Docker Engine and have it run as a VM using VirtualBox. Simply execute the following command:

$ docker-machine create --driver virtualbox default

Next, we need to configure which ports are exposed when running Docker containers. Doing this will allow us to access our applications via localhost:<port>. Feel free to add as many as you want. To do this, you'll need to launch Oracle VM VirtualBox from your start menu. Select the default VM on the side menu. Next, click on Settings > Network > Adapter 1 > Port Forwarding. You should find the ssh forwarding port already set up for you. You can add more like so:

docker vm ports

Next, we need to allow Docker to mount volumes located on your hard drive. By default, you can only mount from the C:/Users/ directory. To add a different path, simply go to the Oracle VM VirtualBox GUI. Select the default VM and go to Settings > Shared Folders. Add a new one by clicking the plus symbol. Enter the fields like so. If there's an option called Permanent, enable it.

docker vm volumes

To get rid of the invalid settings error as seen in the above screenshot, simply increase Video Memory under the Display tab in the settings option. Video memory is not important in this case, as we'll run the VM in headless mode.

To start the Linux VM, simply execute this command in Git Bash. The Linux VM will launch. Give it some time for the boot process to complete. It shouldn't take more than a minute. You'll need to do this every time you boot your host OS:

$ docker-machine start default

Next, we need to set up our Docker environment variables. This is to allow the Docker client and Docker Compose to communicate with the Docker Engine running in the Linux VM, default. You can do this by executing the commands in Git Bash:

# Print out docker machine instance settings
$ docker-machine env default

# Set environment variables using Linux 'export' command
$ eval $(docker-machine env default --shell linux)

You'll need to set the environment variables every time you start a new Git Bash terminal. If you'd like to avoid this, you can copy the eval output and save it in your .bashrc file. It should look something like this:

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://"
export DOCKER_CERT_PATH="C:\Users\Michael Wanyoike\.docker\machine\machines\default"
export DOCKER_MACHINE_NAME="default"
export COMPOSE_CONVERT_WINDOWS_PATHS="true"

IMPORTANT: for the DOCKER_CERT_PATH, you'll need to change the Linux file path to a Windows path format. Also note that the IP address assigned to the default VM may change each time you start it, so it can differ from the one you saved.

In the next section, we'll install Docker Client and Docker Compose.

The post How to Install Docker on Windows 10 Home appeared first on SitePoint.

6 Top VPNs for Web Developers to Choose

Oct 21, 2019


This article was created by VPN Review. Thank you for supporting the partners who make SitePoint possible.

Do you need a VPN? You're probably familiar with them, but they work like this: you connect to a server, all your internet queries pass through it as an intermediary, and they are passed on from this server to the external net. You can use a VPN to, for example, bypass geographic content restrictions by connecting to a server in the appropriate country.

For web developers, these features are especially useful. First, when it's necessary to connect to sensitive development servers from outside networks, a VPN can make it safer by encrypting the traffic passing through your local network. This reduces the danger that the information you send to those servers will be accessed by hackers. And while testing geolocation features, a VPN allows you to connect as a user from the appropriate destination.

The downside is that the VPN needs to have access to your packets in order to relay them. You need to choose VPNs carefully, and it's especially worth being careful with new services, and even free ones. Note that it's important to use your company's VPN for development, or a service that has been approved for use by your team.

To choose a service, one should carefully read the VPN reviews on the web, search for expert opinions, and consider the product's reputation.

Let's analyze the most widespread VPN services:

The post 6 Top VPNs for Web Developers to Choose appeared first on SitePoint.

A Beginner’s Guide to Keras: Digit Recognition in 30 Minutes

Oct 18, 2019


A Beginner's Guide to Keras

Over the last decade, the use of artificial neural networks (ANNs) has increased considerably. People have used ANNs in medical diagnoses, to predict Bitcoin prices, and to create fake Obama videos! With all the buzz about deep learning and artificial neural networks, haven't you always wanted to create one for yourself? In this tutorial, we'll create a model to recognize handwritten digits.

We'll use the keras library to train the model in this tutorial. Keras is a high-level library in Python that is a wrapper over TensorFlow, CNTK and Theano. Keras uses a TensorFlow backend by default, and that's what we'll use to train our model.

Artificial Neural Networks

artificial neural network

An artificial neural network is a mathematical model that converts a set of inputs to a set of outputs through a number of hidden layers. An ANN works with hidden layers, each of which is a transient form associated with a probability. In a typical neural network, each node of a layer takes all nodes of the previous layer as input. A model may have one or more hidden layers.

ANNs receive an input layer to transform it through hidden layers. An ANN is initialized by assigning random weights and biases to each node of the hidden layers. As the training data is fed into the model, it modifies these weights and biases using the errors generated at each step. Hence, our model "learns" the pattern when going through the training data.
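As a loose illustration of this adjust-the-weights-against-the-error loop (plain Python with invented numbers, not an ANN library; real networks do this across many weights and layers), a single weight can "learn" the mapping y = 2x:

```python
# A single weight learning y = 2x by gradient descent (illustration only).
w = 0.0                      # starting weight, analogous to random initialization
data = [(1, 2), (2, 4), (3, 6)]
lr = 0.05                    # learning rate

for _ in range(200):         # training epochs
    for x, y in data:
        pred = w * x         # the model's current guess
        error = pred - y     # how far off we are
        w -= lr * error * x  # nudge the weight against the error

print(round(w, 2))           # converges close to 2.0
```

Feeding the data through repeatedly shrinks the error at each step, which is exactly the sense in which the model "learns" the pattern.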

Convolutional Neural Networks

In this tutorial, we're going to identify digits — which is a simple version of image classification. An image is essentially a collection of dots or pixels. A pixel can be identified through its component colors (RGB). Therefore, the input data of an image is essentially a 2D array of pixels, each representing a color.

If we were to train a regular neural network based on image data, we'd have to provide a long list of inputs, each of which would be connected to the next hidden layer. This makes the process difficult to scale up.
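To put rough numbers on that scaling problem (my arithmetic, not the article's): flattening even a small 28×28 grayscale image gives 784 inputs, and fully connecting them to a single 128-node hidden layer already needs over 100,000 weights:

```python
# Weight count for one fully connected layer on a flattened 28x28 image.
inputs = 28 * 28           # 784 pixel values fed in as a flat list
hidden = 128               # nodes in one hidden layer
weights = inputs * hidden  # every input connects to every hidden node
print(inputs, weights)     # 784 100352
```

Larger images or more layers multiply this quickly, which is why fully connected networks scale poorly on image data.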

Convolutional Neural Network architecture

In a convolutional neural network (CNN), the layers are arranged in a 3D array (X-axis coordinate, Y-axis coordinate and color). Consequently, a node of the hidden layer would only be connected to a small region in the vicinity of the corresponding input layer, making the process far more efficient than a traditional neural network. CNNs, therefore, are popular when it comes to working with images and videos.

Convolutional Neural Network layers

The various types of layers in a CNN are as follows:

- convolutional layers: these run the input through certain filters, which identify features in the image
- pooling layers: these combine convolutional features, helping in feature reduction
- flatten layers: these convert an N-dimensional layer to a 1D layer
- classification layer: the final layer, which tells us the final result

Let's now explore the data.

Explore MNIST Dataset

As you may have realized by now, we need labelled data to train any model. In this tutorial, we'll use the MNIST dataset of handwritten digits, which is a part of the Keras package. It contains a training set of 60,000 examples and a test set of 10,000 examples. We'll train on the training set and validate the results against the test data. Further, we'll create an image of our own to test whether the model can correctly predict it.

First, let's import the MNIST dataset from Keras. The .load_data() method returns both the training and testing datasets:

from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

Let's try to visualize the digits in the dataset. If you're using Jupyter notebooks, use the following magic function to show inline Matplotlib plots:

%matplotlib inline

Next, import the pyplot module from matplotlib and use the .imshow() method to display the image:

import matplotlib.pyplot as plt

image_index = 35
print(y_train[image_index])
plt.imshow(x_train[image_index], cmap='Greys')

The label of the image is printed and then the image is displayed.

label printed and image displayed

Let's verify the sizes of the training and testing datasets:

print(x_train.shape)
print(x_test.shape)

Notice that each image has the dimensions 28 x 28:

(60000, 28, 28)
(10000, 28, 28)

Next, we may also wish to explore the dependent variable, stored in y_train. Let's print all labels until the digit that we visualized above:

print(y_train[:image_index + 1])

[5 0 4 1 9 2 1 3 1 4 3 5 3 6 1 7 2 8 6 9 4 0 9 1 1 2 4 3 2 7 3 8 6 9 0 5]

Cleaning Data

Now that we've seen the structure of the data, let's work on it further before creating the model.

To work with the Keras API, we need to reshape each image to the format (M x N x 1). We'll use the .reshape() method to perform this action. Finally, convert the pixel values to floats and normalize the image data by dividing each value by 255 (since pixel values range from 0 to 255):

# save input image dimensions
img_rows, img_cols = 28, 28

x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)

# convert from integers to floats so the in-place division works
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

x_train /= 255
x_test /= 255

Next, we need to convert the dependent variable from integers to a binary class matrix. This can be achieved with the to_categorical() function:

from keras.utils import to_categorical

num_classes = 10

y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
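A "binary class matrix" is just a one-hot encoding: each label becomes a row of zeros with a single 1 at the label's index. A hand-rolled sketch of the idea (to_categorical does the equivalent, plus array handling):

```python
# Hand-rolled one-hot encoding, for illustration only.
def one_hot(label, num_classes):
    row = [0] * num_classes  # start with all zeros
    row[label] = 1           # flip on the position matching the label
    return row

print(one_hot(5, 10))  # [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```

So the label 5 becomes a 10-element row whose only 1 sits at index 5, which is the form the model's output layer will be compared against.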

We're now ready to create the model and train it!

The post A Beginner’s Guide to Keras: Digit Recognition in 30 Minutes appeared first on SitePoint.

8 Ways to Style React Components Compared

Oct 17, 2019


8 Ways to Style React Components Compared

I've been working with a couple of developers in my office on React JS projects, who have varied levels of React JS experience. We've been solving some crazy problems, like handling the weird way Redux does state initialization, making an axios request payload work with PHP, and understanding what goes on in the background. This article arose out of a question about how to style React components.

The Various Styling Approaches

There are various ways to style React components. There's no absolute right method for styling components: it's a decision that should serve your particular use case, personal preferences and, above all, the architectural goals of the way you work. For example, I make use of notifications in React JS using Noty, and the styling should be able to handle plugins too.

Some of my goals in answering the question included covering these:

- Global namespacing
- Dependencies
- Reusability
- Scalability
- Dead-code elimination

There seem to be about eight different ways of styling React JS components used widely in the industry for production-level work:

- Inline CSS
- Normal CSS
- CSS in JS
- Styled Components
- CSS Modules
- Sass & SCSS
- Less
- Stylable

For each method, I'll look at the need for dependencies, the difficulty level, and whether or not the approach is really a good one.

Inline CSS

Dependencies: None
Difficulty: Easy
Approach: Worst

I don't think anyone needs an introduction to inline CSS. This is CSS styling applied directly to the element in the HTML or JSX. You can include a JavaScript object for CSS in React components, with a few restrictions, such as replacing every - with camelCase text. You can style components in two ways using JavaScript objects, as shown in the example.
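As a quick illustration of that dash-to-camelCase rule, here's a hand-rolled helper for demonstration only (in practice you simply write the camelCase name yourself):

```javascript
// Converts a CSS property name like 'border-color' into the
// camelCase key React expects in inline style objects.
const toCamel = (prop) => prop.replace(/-([a-z])/g, (_, c) => c.toUpperCase());

console.log(toCamel('border-color')); // 'borderColor'
```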

Example

import React from "react";

const spanStyles = {
  color: "#fff",
  borderColor: "#00f"
};

const Button = props => (
  <button style={{ color: "#fff", borderColor: "#00f" }}>
    <span style={spanStyles}>Button Name</span>
  </button>
);

Regular CSS

Dependencies: None
Difficulty: Easy
Approach: Okay

Regular CSS is a common approach, arguably one step better than inline CSS. The styles can be imported to any number of pages and elements unlike inline CSS, which is applied directly to the particular element. Normal CSS has several advantages, such as decreasing the file size with a clean code structure.

You can maintain any number of style sheets, and it can be easier to change or customize styles when needed. But regular CSS might be a major problem if you're working on a bigger project with lots of people involved, especially without an agreed pattern to do styling in CSS.

Example

a:link {
  color: gray;
}
a:visited {
  color: green;
}
a:hover {
  color: rebeccapurple;
}
a:active {
  color: teal;
}

More Information

You can read more about regular CSS usage on the W3C's Learning CSS page. There are many playgrounds, such as JS Bin, JSFiddle, and CodePen, where you can try it out live and see the results in real time.

CSS in JS

Dependencies: jss, jss-preset-default, jss-cli
Difficulty: Easy
Approach: Decent

CSS in JS is an authoring tool for CSS which allows you to use JavaScript to describe styles in a declarative, conflict-free and reusable way. It can compile in the browser, on the server side or at build time in Node. It uses JavaScript as a language to describe styles in a declarative and maintainable way. It's a high performance JS-to-CSS compiler which works at runtime and server-side. When thinking in components, you no longer have to maintain a bunch of style sheets. CSS-in-JS abstracts the CSS model to the component level, rather than the document level (modularity).

Example

import React from "react";
import injectSheet from "react-jss";

// Create your Styles. Remember, since React-JSS uses the default preset,
// most plugins are available without further configuration needed.
const styles = {
  myButton: {
    color: "green",
    margin: {
      // jss-expand gives more readable syntax
      top: 5, // jss-default-unit makes this 5px
      right: 0,
      bottom: 0,
      left: "1rem"
    },
    "& span": {
      // jss-nested applies this to a child span
      fontWeight: "bold" // jss-camel-case turns this into 'font-weight'
    }
  },
  myLabel: {
    fontStyle: "italic"
  }
};

const Button = ({ classes, children }) => (
  <button className={classes.myButton}>
    <span className={classes.myLabel}>{children}</span>
  </button>
);

// Finally, inject the stylesheet into the component.
const StyledButton = injectSheet(styles)(Button);

More Information

You can learn more about this approach in the JSS official documentation. There's also a way to try it out using their REPL (Read-eval-print Loop).

Styled Components

Dependencies: styled-components
Difficulty: Medium
Approach: Decent

Styled-components is an example of the above-mentioned CSS in JS. It basically gives us CSS with other properties we wish CSS had, like nesting. It also lets us attach the CSS to a variable created in JavaScript, so you can create a React component along with its styles without having to create a separate file for CSS. Styled-components lets us create custom reusable components, which can be less of a hassle to maintain. Props can be used to style the components in the same way they're passed to React components; they take the place of classes in CSS and set properties dynamically.

Example

import React from "react";
import styled, { css } from "styled-components";

const Button = styled.button`
  cursor: pointer;
  background: transparent;
  font-size: 16px;
  border-radius: 3px;
  color: palevioletred;
  margin: 0 1em;
  padding: 0.25em 1em;
  transition: 0.5s all ease-out;

  ${props =>
    props.primary &&
    css`
      background-color: white;
      color: green;
      border-color: green;
    `};
`;

export default Button;

More Information

Styled-components has detailed documentation, and the site also provides a live editor where you can try out the code. Get more information on styled-components at styled-components: Basics.

CSS Modules

Dependencies: css-loader
Difficulty: Tough (uses loader configuration)
Approach: Better

If you've ever felt like the CSS global scope problem takes up most of your time, whether it's hunting down what a particular style does, writing an unorganized CSS file just to make the code work, or hesitating to delete a file for fear of breaking the whole app, I feel you. CSS Modules ensure that all of the styles for a component live in one place and apply only to that particular component, which solves the global scope problem of CSS. The composition feature lets you share styles between states; it's similar to a mixin in Sass, making it possible to combine multiple groups of styles.

Example

import React from "react";
import style from "./panel.css";

const Panel = () => (
  <div className={style.panelDefault}>
    <div className={style.panelBody}>A Basic Panel</div>
  </div>
);

export default Panel;

And in panel.css:

.panelDefault {
  border-color: #ddd;
}

.panelBody {
  padding: 15px;
}

Sass & SCSS

Dependencies: node-sass
Difficulty: Easy
Approach: Best

Sass claims that it's the most mature, stable, and powerful professional grade CSS extension language in the world. It's a CSS preprocessor, which adds special features such as variables, nested rules and mixins (sometimes referred to as “syntactic sugar”) into regular CSS. The aim is to make the coding process simpler and more efficient. Just like other programming languages, Sass allows the use of variables, nesting, partials, imports and functions, which add super powers to regular CSS.

Example

$font-stack: 'Open Sans', sans-serif;
$primary-color: #333;

body {
  font: 100% $font-stack;
  color: $primary-color;
}

More Information

Learn more about using and installing Sass with a variety of programming languages from their official documentation at Sass: Syntactically Awesome Style Sheets. If you want to try something out, there’s a service called SassMeister - The Sass Playground! where you can play around with different features of Sass and SCSS.

The post 8 Ways to Style React Components Compared appeared first on SitePoint.

Improving the Customer Journey with Flatfile’s Data Importer

Oct 17, 2019


This article was created in partnership with Flatfile. Thank you for supporting the partners who make SitePoint possible.

Close your eyes and imagine what it is like to import data into your application. Now open them.

Does it look something like this?

When creating a new product or improving an existing one, building useful features is, of course, priority number one. What many forget, though, is that innovation is wasted when the product experience isn't intuitive. When crafting the customer journey, a product manager must pay particular attention to the customer's first steps.

In many products, the data importing process is one of those early steps. Unfortunately, it is often a major pain point for the customer. This isn't all the PM's fault; we've come to expect data import to be a lousy software experience. So we keep sending customers to complex templates, long support articles, and cryptic error screens, often within the first few minutes of their journey.

Not anymore, though. Flatfile offers a simple solution: an intuitive, plug-and-play data importer.

What Is Flatfile?

Flatfile offers software companies an alternative to building their own data importers.

For users, that means no more jumping through hoops to upload their data. Now, they can use Flatfile's platform instead for a seamless, smooth, and supportive data importing experience. Flatfile is designed to support users of any technical skill level: firefighters, real estate agents, and data analysts all leverage the Flatfile Importer.

For PMs, that means no more worrying about handling the UX and engineering complexities of data import. Instead of planning a whole sprint - if not several - on building a custom solution, PMs can hand their engineering team Flatfile's documentation and get an elegant, crafted experience in a day. Put simply, Flatfile takes the pain out of building and maintaining a data importer, and lets product teams focus on innovative, differentiating features.

How Does Flatfile Work?

Flatfile lets users upload their spreadsheets with just a few clicks. They can also manually enter their data.

Once the data has been uploaded, Flatfile asks the user a few simple questions about how their spreadsheet matches to your product, ensuring that the data is aligned with the correct field (e.g. first name, last name, email address, etc.).

The final step is data repair, where the user can review and update their import to correct any data errors. These errors appear based on validations you can pre-define, ensuring the tidiness of data before it ever hits your product database.

Once these steps are complete, the user is back in your application, and you have a clean, structured set of JSON data that's easy to pull into any database.

Meghann, a Product Lead, says: "When we were looking for solutions, we knew we could either build it ourselves or try to find something. Our product lead at the time heard of Flatfile. He presented it to the team, and ultimately we decided to implement Flatfile. We didn't see anything else on the market."

Why Should You Choose Flatfile?

When a user is importing data to your product, they want to use it and they want to see its value. Don't let them get hung up on spreadsheet templates and intimidating documentation. Flatfile takes most organizations less than a week to implement, and it gives users a simple, smooth, and delightful data import experience. Get started with a 30-day free trial and see how Flatfile can improve your customer journey.


The post Improving the Customer Journey with Flatfile’s Data Importer appeared first on SitePoint.

Build a JavaScript Command Line Interface (CLI) with Node.js

Oct 16, 2019


Build a Node CLI

As great as Node.js is for “traditional” web applications, its potential uses are far broader. Microservices, REST APIs, tooling, working with the Internet of Things and even desktop applications: it’s got your back.

Another area where Node.js is really useful is for building command-line applications — and that’s what we’re going to be doing in this article. We’re going to start by looking at a number of third-party packages designed to help work with the command line, then build a real-world example from scratch.

What we’re going to build is a tool for initializing a Git repository. Sure, it’ll run git init under the hood, but it’ll do more than just that. It will also create a remote repository on GitHub right from the command line, allow the user to interactively create a .gitignore file, and finally perform an initial commit and push.

As ever, the code accompanying this tutorial can be found on our GitHub repo.

Build a Node CLI

Why Build a Command-line Tool with Node.js?

Before we dive in and start building, it’s worth looking at why we might choose Node.js to build a command-line application.

The most obvious advantage is that, if you’re reading this, you’re probably already familiar with it — and, indeed, with JavaScript.

Another key advantage, as we’ll see as we go along, is that the strong Node.js ecosystem means that among the hundreds of thousands of packages available for all manner of purposes, there are a number which are specifically designed to help build powerful command-line tools.

Finally, we can use npm to manage any dependencies, rather than have to worry about OS-specific package managers such as Aptitude, Yum or Homebrew.

Tip: that isn’t necessarily true, in that your command-line tool may have other external dependencies.

What We’re Going to Build: ginit

Ginit, our Node CLI in action

For this tutorial, we're going to create a command-line utility that I'm calling ginit. It's git init, but on steroids.

You’re probably wondering what on earth that means.

As you no doubt already know, git init initializes a git repository in the current folder. However, that’s usually only one of a number of repetitive steps involved in the process of hooking up a new or existing project to Git. For example, as part of a typical workflow, you may well:

- initialise the local repository by running git init
- create a remote repository, for example on GitHub or Bitbucket — typically by leaving the command line and firing up a web browser
- add the remote
- create a .gitignore file
- add your project files
- commit the initial set of files
- push up to the remote repository.

There are often more steps involved, but we’ll stick to those for the purposes of our app. Nevertheless, these steps are pretty repetitive. Wouldn’t it be better if we could do all this from the command line, with no copy-pasting of Git URLs and such like?

So what ginit will do is create a Git repository in the current folder, create a remote repository — we’ll be using GitHub for this — and then add it as a remote. Then it will provide a simple interactive “wizard” for creating a .gitignore file, add the contents of the folder and push it up to the remote repository. It might not save you hours, but it’ll remove some of the initial friction when starting a new project.

With that in mind, let’s get started.

The Application Dependencies

One thing is for certain: in terms of appearance, the console will never have the sophistication of a graphical user interface. Nevertheless, that doesn’t mean it has to be plain, ugly, monochrome text. You might be surprised by just how much you can do visually, while at the same time keeping it functional. We’ll be looking at a couple of libraries for enhancing the display: chalk for colorizing the output and clui to add some additional visual components. Just for fun, we’ll use figlet to create a fancy ASCII-based banner and we’ll also use clear to clear the console.

In terms of input and output, the low-level Readline Node.js module could be used to prompt the user and request input, and in simple cases is more than adequate. But we’re going to take advantage of a third-party package which adds a greater degree of sophistication — Inquirer. As well as providing a mechanism for asking questions, it also implements simple input controls: think radio buttons and checkboxes, but in the console.

We’ll also be using minimist to parse command-line arguments.
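To give a feel for what minimist produces, here's a hand-rolled illustration of the kind of object it returns (the real library also handles short flags, flag values, aliases and much more):

```javascript
// Simplified sketch of minimist-style parsing: positional arguments
// land in `_`, and `--flags` become boolean properties.
function parseArgs(argv) {
  const result = { _: [] };
  for (const arg of argv) {
    if (arg.startsWith('--')) {
      result[arg.slice(2)] = true;
    } else {
      result._.push(arg);
    }
  }
  return result;
}

console.log(parseArgs(['init', '--force'])); // { _: [ 'init' ], force: true }
```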

Here’s a complete list of the packages we’ll use specifically for developing on the command line:

- chalk — colorizes the output
- clear — clears the terminal screen
- clui — draws command-line tables, gauges and spinners
- figlet — creates ASCII art from text
- inquirer — creates interactive command-line user interfaces
- minimist — parses argument options
- configstore — easily loads and saves config without you having to think about where and how.

Additionally, we’ll also be using the following:

- @octokit/rest — a GitHub REST API client for Node.js
- lodash — a JavaScript utility library
- simple-git — a tool for running Git commands in a Node.js application
- touch — a tool for implementing the Unix touch command.

Getting Started

Although we’re going to create the application from scratch, don’t forget that you can also grab a copy of the code from the repository which accompanies this article.

Create a new directory for the project. You don’t have to call it ginit, of course:

mkdir ginit
cd ginit

Create a new package.json file:

npm init

Follow the simple wizard, for example:

name: (ginit)
version: (1.0.0)
description: "git init" on steroids
entry point: (index.js)
test command:
git repository:
keywords: Git CLI
author: [YOUR NAME]
license: (ISC)

Now install the dependencies:

npm install chalk clear clui figlet inquirer minimist configstore @octokit/rest lodash simple-git touch --save

Alternatively, simply copy-paste the following package.json file — modifying the author appropriately — or grab it from the repository which accompanies this article:

{ "name": "ginit", "version": "1.0.0", "description": "\"git init\" on steroids", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [ "Git", "CLI" ], "author": "Lukas White <>", "license": "ISC", "bin": { "ginit": "./index.js" }, "dependencies": { "@octokit/rest": "^14.0.5", "chalk": "^2.3.0", "clear": "0.0.1", "clui": "^0.3.6", "configstore": "^3.1.1", "figlet": "^1.2.0", "inquirer": "^5.0.1", "lodash": "^4.17.4", "minimist": "^1.2.0", "simple-git": "^1.89.0", "touch": "^3.1.0" } }

Now create an index.js file in the same folder and require the following dependencies:

const chalk = require('chalk');
const clear = require('clear');
const figlet = require('figlet');

Adding Some Helper Methods

We’re going to create a lib folder where we’ll split our helper code into modules:

- files.js — basic file management
- inquirer.js — command-line user interaction
- github.js — access token management
- repo.js — Git repository management.

Let’s start with lib/files.js. Here, we need to:

- get the current directory (to get a default repo name)
- check whether a directory exists (to determine whether the current folder is already a Git repository by looking for a folder named .git).

This sounds straightforward, but there are a couple of gotchas to take into consideration.

Firstly, you might be tempted to use the fs module’s realpathSync method to get the current directory:


This will work when we’re calling the application from the same directory (e.g. using node index.js), but bear in mind that we’re going to be making our console application available globally. This means we’ll want the name of the directory we’re working in, not the directory where the application resides. For this purpose, it’s better to use process.cwd:


Secondly, the preferred method of checking whether a file or directory exists keeps changing. The current way is to use fs.stat or fs.statSync. These throw an error if there's no file, so we need to use a try … catch block.

Finally, it’s worth noting that when you’re writing a command-line application, using the synchronous version of these sorts of methods is just fine.

Putting that all together, let’s create a utility package in lib/files.js:

const fs = require('fs');
const path = require('path');

module.exports = {
  getCurrentDirectoryBase: () => {
    return path.basename(process.cwd());
  },

  directoryExists: (filePath) => {
    try {
      return fs.statSync(filePath).isDirectory();
    } catch (err) {
      return false;
    }
  }
};

Go back to index.js and ensure you require the new file:

const files = require('./lib/files');

With this in place, we can start developing the application.

Initializing the Node CLI

Now let’s implement the start-up phase of our console application.

In order to demonstrate some of the packages we’ve installed to enhance the console output, let’s clear the screen and then display a banner:

clear();

console.log(
  chalk.yellow(
    figlet.textSync('Ginit', { horizontalLayout: 'full' })
  )
);

The output from this is shown below.

The welcome banner on our Node CLI, created using Chalk and Figlet

Next up, let’s run a simple check to ensure that the current folder isn’t already a Git repository. That’s easy: we just check for the existence of a .git folder using the utility method we just created:

if (files.directoryExists('.git')) {
  console.log(chalk.red('Already a git repository!'));
  process.exit();
}

Tip: notice we’re using the chalk module to show a red-colored message.

The post Build a JavaScript Command Line Interface (CLI) with Node.js appeared first on SitePoint.

6 Popular Portfolio Builders for Designers

Oct 16, 2019


Popular Portfolio Builders for Designers

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Doing a great job of showcasing your work doesn’t have to be difficult. You don’t have to print out a batch of fancy brochures to distribute or carry around a folder containing a sheaf of papers. You’ll get far better results by using a website builder to create your own personal online portfolio for the whole world to see.

Building a portfolio website is relatively easy if you have the right tool for the job. If you can envision an awesome, engaging portfolio, you can build it, especially with any of the six portfolio website tools described in this article. Since all six are top-of-the-line tools, there's no reason to settle for anything less.

Best of all, these website building tools are either free or offer free trials.

1. Portfoliobox

A screenshot of the Portfoliobox website

Portfoliobox was designed with photographers, artists, web designers, and other creative types in mind. Small business owners and entrepreneurs will find it attractive as well. This portfolio website builder is super easy to use, and since it’s not theme-based, it’s extremely flexible as well. Another plus is you don’t have to worry about coding.

As its name implies, this is a perfect tool for creating portfolio websites that range in style and looks from the epitome of professionalism to flat-out awesome. You can display your work or your products to the world, and by doing so you’ll hopefully earn a bushel of dollars, euros, or whatever, as well.

We suggest you try the free plan. In that way, you can get acquainted with Portfoliobox while having the tools at hand to create a medium-sized portfolio. If you have large galleries of images in mind, you may eventually want to upgrade to the pro plan. If you’re a student, opening a student account may be your best move.

Portfoliobox 4 is currently in the works and coming soon. Features include increased flexibility and functionality and a more intuitive interface. Portfoliobox has more than one million users.

2. Wix

Start creating your portfolio with Wix

Wix is a versatile and powerful website building tool you can use with great effect to promote your business or build an online shop. Where Wix really shines, however, is in the role of a portfolio website builder. Everything is drag and drop, supported by the necessary tools and features to customize any of the 500+ designer-made templates you choose to work with.

If you can visualize an online portfolio that’s truly stunning and a cut above the rest, you can build it — without coding. Rather than being restricted to trying to cleverly present a series of static images, you can use scroll effects, animations, video backgrounds and more to bring your portfolio to life and keep viewers engaged and encouraged to spread the word.

If you want total freedom to create your crowd-pleasing portfolio website, Wix is for you.

3. Weebly


Weebly screenshot

We said up front that you shouldn’t have to settle for less than the best, and that certainly applies to the Weebly portfolio website builder. What you design and build is limited only by your imagination, and if your technical expertise is somewhat challenged and you lack coding experience it doesn’t matter one bit. Everything you need is at your fingertips.

If “free” appeals to you, that’s just one more reason to go with Weebly. The website builder is free, hosting is free, and there’s even a mobile app you can use to manage your portfolio website and track its performance — from anywhere.

You can either purchase a domain from Weebly or use your own. If you need professional photos for your portfolio, Weebly can provide them at an affordable price.

4. Mobirise Website Builder

Mobirise screenshot

Since Mobirise is an offline website builder, you can download it and get started building an awesome portfolio website right away. No coding is necessary. Google AMP and Bootstrap 4 guarantee your website will be lightning-fast and 100% mobile friendly.

You’ll have plenty of trendy themes and templates to work with. Best of all, Mobirise is free for both personal and commercial uses — making it a very attractive option.

The post 6 Popular Portfolio Builders for Designers appeared first on SitePoint.

Go off Grid: Offline Reader for SitePoint Premium Now in Beta

Oct 15, 2019


We've done a massive amount of work on the SitePoint Premium experience this year, but users have been very clear about what they want to see next.

Our most requested feature is offline access to books in the SitePoint Premium library, and today, it's here.

We've been working on this for a long time and we're very excited to release what we think is a great way to read these books offline. But we hope you'll bear in mind that this is the first beta release of offline access, and we expect that there will be issues.

We're releasing this as an MVP to our Premium users so that we can iterate on it based on your feedback. This solution will allow you to read our content offline on any device, without having to download a specialized app.

You can now access this feature in the reader in SitePoint Premium. You will need to use a modern browser, as we're using service workers and IndexedDB to enable this feature.
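For the curious, that "modern browser" requirement boils down to feature detection along these lines (the exact checks SitePoint runs are an assumption; the property names are standard web APIs):

```javascript
// Detect the two APIs the offline reader relies on.
// Safe to run outside a browser too: both checks simply come back false.
const hasServiceWorker =
  typeof navigator !== 'undefined' && 'serviceWorker' in navigator;
const hasIndexedDB = typeof indexedDB !== 'undefined';

const offlineCapable = hasServiceWorker && hasIndexedDB;
console.log(offlineCapable);
```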

Downloading a book is a two-stage process:

1. Click the download toggle as shown in the screenshot below, which will save the book to be accessed offline.
2. Save the page, either natively in the browser or via a bookmark, to be able to access the book while offline.

Please try it out and give us your feedback. There's a dedicated thread for feedback over on the SitePoint Community, which you can access with your existing SitePoint Premium account.

Keep your eye on this feature. We're working to release a new version soon, which will make it easier to see which titles you have downloaded for offline access.

Head to the library and test the offline reader Head over to our feedback thread

The post Go off Grid: Offline Reader for SitePoint Premium Now in Beta appeared first on SitePoint.

Getting Started with GraphQL and React Native

Oct 9, 2019


In 2012, Facebook engineer Nick Schrock started work on a small prototype to facilitate moving away from an old, unsupported partner API that powered the current Facebook News Feed. At the time, this was called “SuperGraph”. Fast forward to today, and SuperGraph has helped shape the open-source query language GraphQL, which has become quite the buzzword in recent times.

Facebook describes GraphQL as a “query language for APIs and a runtime for fulfilling those queries with your existing data”. Put simply, GraphQL is an alternative to REST that has been steadily gaining popularity since its release. Whereas with REST a developer would usually collate data from a series of endpoint requests, GraphQL allows the developer to send a single query to the server that describes the exact data requirement.


For this tutorial, you’ll need a basic knowledge of React Native and some familiarity with the Expo environment. You’ll also need the Expo client installed on your mobile device or a compatible simulator installed on your computer. Instructions on how to do this can be found here.

Project Overview

In this tutorial, we’re going to demonstrate the power of GraphQL in a React Native setting by creating a simple coffee bean comparison app. So that you can focus on all of the great things GraphQL has to offer, I’ve put together the base template for the application using Expo.

A mockup of our coffee comparison app

To get started, you can clone this repo and navigate to the “getting-started” branch, which includes all of our basic views to start adding our GraphQL data to, as well as all of our initial dependencies, which at this stage are:

{ "expo": "^32.0.0", "react": "16.5.0", "react-native": "", "react-navigation": "^3.6.1" }

To clone this branch, you’ll need to open up terminal and run this command:

git clone

To then navigate to the getting-started branch, you move into the newly cloned repo with cd graphql-coffee-comparison and run git checkout getting-started.

The next stage is to install our dependencies. To do this, make sure you’re on Node v11.10.1 and run npm install in the root directory of the project. This will add all of the dependencies listed above to your node_modules folder.

To start adding GraphQL to our React Native app, we’re going to need to install a few more dependencies that help us perform a few simple GraphQL functions. As is common with modern JavaScript development, you don’t need all of these dependencies to complete the data request, but they certainly help in giving the developer a better chance of structuring some clean, easy-to-read code. The dependencies you’ll need can be installed by running npm install --save apollo-boost react-apollo graphql-tag graphql.

Here’s an overview of what these dependencies are:

apollo-boost: a zero-configuration way of getting started with GraphQL in React/React Native react-apollo: this provides an integration between GraphQL and the Apollo client graphql-tag: a template literal tag that parses GraphQL queries graphql: the JavaScript reference implementation for GraphQL

Once all of the necessary dependencies have finished installing, run npm start. You should now see your familiar Expo window, and if you launch the app (either via a simulator or on a device) then you should see a screen similar to this:

A mockup of our getting started page

In basic terms, this application has two screens that are managed by react-navigation, Home.js and CoffeePage.js. The Home screen contains a simple FlatList that renders all of the coffee beans supplied to its data field. When clicked on, the user is navigated to the CoffeePage for that item, which displays more information about the product. It’s our job to now populate these views with interesting data from GraphQL.

The complete coffee page

Apollo Server Playground

There are two main elements to any successful GraphQL transaction: the server holding the data, and the front-end query making the request. For the purposes of this tutorial, we aren’t going to start delving into the wonderful world of server-side code, so I’ve created our server for us ready to go. All you need to do is navigate to in your favorite browser and leave it running throughout the course of development. For those interested, the server itself is running using apollo-server and contains just enough code to hold the data we need and serve it upon receiving an appropriate query. For further reading, you can head over to to read more about apollo-server.

GraphQL Query Basics

Before we get into writing the actual code that’s going to request the data we need for our coffee bean comparison app, we should understand just how GraphQL queries work. If you already know how queries work or just want to get started with coding, you can skip ahead to the next section.

Note: these queries won’t work with our codesandbox server, but feel free to create your own at if you’d like to test out the queries.

At its simplest level, we can use a flat structure for our queries when we know the shape of the data we’re requesting:

QUERY:

{
  coffee {
    blend
  }
}

RESPONSE:

{
  "coffee": {
    "blend": "rich"
  }
}

On the left, we see the GraphQL query requesting the blend field from coffee. This works well when we know exactly what our data structure is, but what about when things are less transparent? In this example, blend returns us a string, but queries can be used to request objects as well:

QUERY:
{
  coffee {
    beans {
      blend
    }
  }
}

RESPONSE:
{
  "coffee": {
    "beans": [
      { "blend": "rich" },
      { "blend": "smooth" }
    ]
  }
}

Here you can see we are simply requesting the beans object, with only the field blend being returned from that object. Each object in the beans array may very well contain other data other than blend, but GraphQL queries help us request only the data we need, cutting out any extra information that’s not necessary for our application.

So what about when we need to be more specific than this? GraphQL provides the capability for many things, but something that allows for extremely powerful data requests is the ability to pass arguments in your query. Take the following example:

QUERY:
{
  coffee(companyId: "2") {
    beans {
      blend
    }
  }
}

RESPONSE:
{
  "coffee": {
    "beans": [
      { "blend": "rich" },
      { "blend": "smooth" }
    ]
  }
}

What we see is that we can pass an argument (in this case, the companyId) which ensures that we are only returned beans from one particular company. With REST, you can pass a single set of arguments via query params and URL segments, but with GraphQL, every single field can get its own set of arguments. This makes GraphQL a dynamic solution that can replace multiple API fetches with a single request.
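To make this concrete, here's a sketch (not the tutorial's exact client code) of how the argument query above would be sent to a GraphQL endpoint over HTTP: a POST with a JSON body containing the query string. The endpoint URL is a placeholder.

```javascript
// The query from the example above, as a plain string:
const query = `
  {
    coffee(companyId: "2") {
      beans {
        blend
      }
    }
  }
`;

// GraphQL over HTTP sends the query in a JSON request body:
const body = JSON.stringify({ query });

// A client would then POST it, e.g. (placeholder URL):
// fetch("https://example.com/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// });

console.log(JSON.parse(body).query.includes('companyId: "2"')); // true
```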

The post Getting Started with GraphQL and React Native appeared first on SitePoint.

macOS Catalina: 5 Things Web Developers & Designers Should Know

Oct 8, 2019


macOS Catalina is here and available for download, and you've no doubt heard all about the breakup of iTunes and the new consumer-oriented entertainment apps shipping with the system.

But what do developers, designers, and other tech professionals need to know? We run through the key points.

32-bit Support Ends with Catalina

Are you relying on some older, obscure native app for a specific function, as so many developers and designers do? Your Catalina update could throw you a curveball: it's the first macOS release that drops support for 32-bit apps.

During the setup process, you'll be given a list of installed apps that will no longer open after the update. If you want to keep using that tool, it's time to hit up the developer for a long-overdue update... or stay on Mojave for a while longer yet.

A Cross-Platform Catalyst

Mojave brought iOS ports of the News, Stocks, Voice Memos and Home apps to macOS. In Catalina, Apple is opening up the tools that enabled these ports to developers under the name Catalyst.

While this doesn't directly affect web development work, it does make iOS a more attractive native development platform, which may inform your future platform choices. And if Apple's plan to reinvigorate stale macOS third-party app development with some of the action from iOS works, you could incorporate better productivity and development apps into your workflow in the near future.

For now, Catalyst is available to developers of iPad apps — we expect that to broaden in the future.

Voice Control

Catalina offers accessibility improvements in the form of improved Voice Control for those who have difficulty seeing, or using keyboards and mice.

Of course, developers should ensure that their apps work as well as they can with this tool, because it's the right thing to do.

Developers are known for their love of keyboard shortcut mastery, but no doubt the ability to create custom commands has inspired determined lifehackers. What if you never had to take your cursor or eyes off of VS Code to run other frequent workflows?

We look forward to seeing what the community comes up with.

Screen Time

Do you waste too much time using your computer for mindless entertainment, forcing you to stay up late making up the time productively?

Or are you a workaholic who just can't find the will to shut off and disconnect?

If you're like most of us in the industry, you're a mix of the two. Catalina introduces a variant of the Screen Time app that's been on iOS for a couple of years now.

Screen Time for macOS provides you with visual analytics that help you understand the way you're spending time on your device, which can often lead to some unexpected epiphanies. It also lets you schedule downtime, forcing you off the computer and into the real world at the right time.

As with iOS, you can also set time limits for specific apps, and there are some ways to moderate your web content usage without outright blocking your web browser from opening.

Sidecar: The Most Expensive Secondary Display You'll Ever Own

For developers, designers, and all other web professionals, the real headline feature of Catalina is Sidecar. Sidecar turns your iPad into a secondary display for your Mac, and it's really easy to enable (provided you have the requisite tablet, which is not included with the operating system update).

The best reason to use Sidecar over a standard display is Apple Pencil integration. Designers will love the ability to draw directly on the screen when using Sketch and Illustrator without switching devices all the time. You can even mirror your Mac's screen if you'd like an unobstructed view of what you're sketching on one side.

Most of us will use Sidecar as a place to dump Slack or a terminal window, but in any case, it's clear it'll be the most beneficial update for many of us.

How'd You Go?

Let us know how you went with the upgrade, and what you've enjoyed most so far. We always recommend waiting a few days for the bugs to shake out — especially with Apple's recent track record — but initial reports suggest the release version is pretty solid after all.

The post macOS Catalina: 5 Things Web Developers & Designers Should Know appeared first on SitePoint.

9 of the Best Animation Libraries for UI Designers

Oct 8, 2019


This is the latest update to our guide to helping you choose the right animation library for each task. We're going to run through 9 free, well-coded animation libraries best suited to UI design work, covering their strengths and weaknesses and when to choose each one.

Take your CSS animations to the next level with our Animating with CSS course by Donovan Hutchinson, the man behind CSS Animation Rocks.

Front-end web design has been through a revolution in the last decade. In the late noughties, most of us were still designing static magazine layouts. Nowadays, we're building “digital machines” with thousands of resizing, coordinated, moving parts.

Quite simply, great UI designers need to be great animators too — with a solid working understanding of web animation techniques.

Keep in mind that we're looking at each library from the perspective of a code-savvy UI designer, not as a “code guru” developer. Some of these libraries are pure CSS. Others are JavaScript, but none require anything more than basic HTML/CSS understanding to be useful. Link the library; add a CSS class.



The 2017 Top 9 Animation Libraries List

Animate.css
Bounce.js
AnimeJS
Magic Animations
DynCSS
CSShake
Hover.CSS
Velocity.js
AniJS

Animate.css

Animate.css is one of the smallest and most easy-to-use CSS animation libraries available. Applying the Animate library to your project is as simple as adding the required CSS classes to your HTML elements. You can also use jQuery to call the animations on a particular event.


Creators: Daniel Eden
Released: 2013
Current Version: 3.5.2
Most Recent Update: April 2017
Popularity: 41,000+ stars on GitHub
Description: "A cross-browser library of CSS animations. As easy to use as an easy thing."
Library Size: 43 kB
GitHub:
License: MIT

As of mid-2017, it's still one of the most popular and widely used CSS animation libraries, and its minified file is small enough (16.6kB) for inclusion in mobile websites as well. It has 41,000 stars on GitHub and is used as a component in many larger projects.

Animate.css is still under active development after 4 years. We feel that this is one of the simplest and most robust animation libraries, and we'd definitely recommend using it in your next project.


Bounce.js is a tool and JavaScript library that focuses on providing a selection of unique, bouncy CSS animations for your website.


This project is open-source with its code on GitHub.

Creators: Tictail
Released: 2014
Current Version: 0.8.2
Most Recent Update: Feb 2015
Popularity: 4,967+ stars on GitHub
Description: "Create beautiful CSS3 powered animations in no time."
Library Size: 16 kB
GitHub:
License: MIT

Bounce.js is a neat animation library shipped with about ten animation 'pre-sets' – hence the small size of the library. As with animate.css, the animations are smooth and flawless. You might want to consider using this library if your needs focus on 'pop and bubble' animation types and could benefit from a lower file size overhead.


AnimeJS is described as a lightweight JavaScript animation library that 'works with any CSS Properties, individual CSS transforms, SVG or any DOM attributes, and JavaScript Objects'. It's pretty awesome – so awesome in fact, that the GIF capture I took below can't do justice to how smooth and buttery the motion is.


This project is available on GitHub.

Creator: Julian Garnier
Released: 2016
Current Version: 2.0.2
Most Recent Update: March 2017
Popularity: 12,222+ stars on GitHub
Description: "JavaScript Animation Engine."
Library Size: 10.9kB
GitHub:
License: MIT

AnimeJS is the only newcomer to our list, but it has won a lot of converts in the 12 months since its creation. It's incredibly versatile and powerful and wouldn't be out of place in HTML games. The only real question is: is it overkill for simple web apps?

Maybe, but as it's fast, small and relatively easy to learn, it's hard to find fault with it.

Magic Animations

Magic Animations has been one of the most impressive animation libraries available. It has many different animations, many of which are quite unique to this library. As with Animate.css, you can implement Magic by simply importing the CSS file. You can also make use of the animations from jQuery. This project offers a particularly cool demo application.

Magic Animations

The post 9 of the Best Animation Libraries for UI Designers appeared first on SitePoint.

Create a Cron Job on AWS Lambda

Oct 3, 2019


Create a Cron Job on AWS Lambda

Cron jobs are really useful tools in any Linux or Unix-like operating systems. They allow us to schedule scripts to be executed periodically. Their flexibility makes them ideal for repetitive tasks like backups and system cleaning, but also data fetching and data processing.

For all the good things they offer, cron jobs also have some downsides. The main one is that you need a dedicated server or a computer that runs pretty much 24/7. Most of us don't have that luxury. For those of us who don't have access to a machine like that, AWS Lambda is the perfect solution.

AWS Lambda is an event-driven, serverless computing platform that's part of Amazon Web Services. It’s a computing service that runs code in response to events and automatically manages the computing resources required by that code. Not only is it available to run our jobs 24/7, but it also automatically allocates the resources needed for them.

Setting up a Lambda in AWS involves more than just implementing a couple of functions and hoping they run periodically. To get them up and running, several services need to be configured first and need to work together. In this tutorial, we'll first go through all the services we'll need to set up, and then we'll implement a cron job that will fetch some updated cryptocurrency prices.

Understanding the Basics

As we said earlier, some AWS services need to work together in order for our Lambda function to work as a cron job. Let's have a look at each one of them and understand their role in the infrastructure.

S3 Bucket

An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services' (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata. — TechTarget

Every Lambda function needs to be prepared as a “deployment package”. The deployment package is a .zip file consisting of the code and any dependencies that code might need. That .zip file can then be uploaded via the web console or located in an S3 bucket.

IAM Role

An IAM role is an IAM identity that you can create in your account that has specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. — Amazon

We’ll need to manage permissions for our Lambda function with IAM. At the very least it should be able to write logs, so it needs access to CloudWatch Logs. This is the bare minimum and we might need other permissions for our Lambda function. For more information, the AWS Lambda permissions page has all the information needed.

CloudWatch Events Rule

CloudWatch Events support cron-like expressions, which we can use to define how often an event is created. We'll also need to make sure that we add our Lambda function as a target for those events.
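For reference, CloudWatch schedule expressions come in two documented forms, rate() and cron() (the cron form uses six fields, with ? as a wildcard for day-of-week or day-of-month):

```
rate(5 minutes)       runs every five minutes
cron(0 12 * * ? *)    runs every day at 12:00 UTC
```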

Lambda Permission

Creating the events and targeting the Lambda function isn’t enough. We'll also need to make sure that the events are allowed to invoke our Lambda function. Anything that wants to invoke a Lambda function needs to have explicit permission to do that.

These are the building blocks of our AWS Lambda cron job. Now that we have an idea of all the moving parts of our job, let's see how we can implement it on AWS.

Implementing a Cron Job on AWS

A lot of the interactions we described earlier are taken care of by Amazon automatically. In a nutshell, all we need to do is to implement our service (the actual lambda function) and add rules to it (how often and how the lambda will be executed). Both permissions and roles are taken care of by Amazon; the defaults provided by Amazon are the ones we'll be using.

Lambda function

First, let's start by implementing a very simple lambda function. In the AWS dashboard, use the Find Services function to search for Lambda. In the Lambda console, select Create a function. At this point, we should be in Lambda > Functions > Create Function.

To get things going, let's start with a static log message. Our service will only be a print function. For this, we'll use Node.js 10.x as our runtime language. Give it a function name, and on Execution Role let's stay with Create a new role with basic lambda permissions. This is a basic set of permissions on IAM that will allow us to upload logs to Amazon CloudWatch logs. Click Create Function.

Create a new lambda function

Our function is now created with an IAM Role. In the code box, substitute the default code with the following:

exports.handler = async (event) => {
  console.log("Hello Sitepoint Reader!");
  return {};
};

To check if the code is executing correctly, we can use the Test function. After giving a name to our test, it will execute the code and show its output in the Execution Result field just below our code.

If we test the code above, we see an empty response, but in the function logs we can see our message printed. This indicates that our service is running correctly, so we can proceed with our cron implementation.

The post Create a Cron Job on AWS Lambda appeared first on SitePoint.

Cloning Tinder Using React Native Elements and Expo

Oct 1, 2019


Cloning Tinder Using React Native Elements and Expo

Making pixel-perfect layouts on mobile is hard. Even though React Native makes it easier than its native counterparts, it still requires a lot of work to get a mobile app to perfection.

In this tutorial, we’ll be cloning the most famous dating app, Tinder. We’ll then learn about a UI framework called React Native Elements, which makes styling React Native apps easy.

Since this is just going to be a layout tutorial, we’ll be using Expo, as it makes setting things up much easier than plain old react-native-cli. We’ll also be making use of a lot of dummy data to make our app.

We’ll be making a total of four screens—Home, Top Picks, Profile, and Messages.


For this tutorial, you need a basic knowledge of React Native and some familiarity with Expo. You’ll also need the Expo client installed on your mobile device or a compatible simulator installed on your computer. Instructions on how to do this can be found here.

You also need to have a basic knowledge of styles in React Native. Styles in React Native are basically an abstraction similar to that of CSS, with just a few differences. You can get a list of all the properties in the styling cheatsheet.

Throughout the course of this tutorial we’ll be using yarn. If you don’t have yarn already installed, install it from here.

Also make sure you’ve already installed expo-cli on your computer.

If it’s not installed already, then go ahead and install it:

$ yarn global add expo-cli

To make sure we’re on the same page, these are the versions used in this tutorial:

Node 11.14.0
npm 6.4.1
yarn 1.15.2
expo 2.16.1

Make sure to update expo-cli if you haven’t updated it in a while, since Expo releases go out of date quickly.

We’re going to build something that looks like this:

Tinder Demo in Expo

If you just want to clone the repo, the whole code can be found on GitHub.

Getting Started

Let’s set up a new Expo project using expo-cli:

$ expo init expo-tinder

It will then ask you to choose a template. You should choose tabs and hit Enter.

Expo Init - Choose A Template

Then it will ask you to name the project. Type expo-tinder and hit Enter again.

Expo Init - Name the Project

Lastly, it will ask you to press y to install dependencies with yarn or n to install dependencies with npm. Press y.

Expo Init - Install the dependencies

This bootstraps a brand new React Native app using expo-cli.

React Native Elements

React Native Elements is a cross-platform UI Toolkit for React Native with consistent design across Android, iOS and Web.

It’s easy to use and completely built with JavaScript. It’s also the first UI kit ever made for React Native.

It allows us to fully customize styles of any of our components the way we want so every app has its own unique look and feel.

It’s also open source and backed by a community of awesome developers.

You can build beautiful applications easily.

React Native Elements Demo

Cloning Tinder UI

We’ve already created a project named expo-tinder.

To run the project, type this:

$ yarn start

Press i to run the iOS Simulator. This will automatically run the iOS Simulator even if it’s not opened.

Press a to run the Android Emulator. Note that the emulator must be installed and started already before typing a. Otherwise it will throw an error in the terminal.

It should look like this:

Expo Tabs App

The post Cloning Tinder Using React Native Elements and Expo appeared first on SitePoint.

How to Build a News App with Svelte

Sep 27, 2019


How to Build a News App with Svelte

Svelte is a new JavaScript UI library that's similar in many ways to modern UI libraries like React. One important difference is that it doesn't use the concept of a virtual DOM.

In this tutorial, we'll be introducing Svelte by building a news application inspired by the Daily Planet, a fictional newspaper from the Superman world.

About Svelte

Svelte makes use of a new approach to building user interfaces. Instead of doing the necessary work in the browser, Svelte shifts that work to a compile-time phase that happens on the development machine when you're building your app.

In a nutshell, this is how Svelte works (as stated in the official blog):

Svelte runs at build time, converting your components into highly efficient imperative code that surgically updates the DOM. As a result, you're able to write ambitious applications with excellent performance characteristics.

Svelte can be faster than popular frameworks like React, Vue and Angular because it doesn't use a virtual DOM and surgically updates only the parts of the DOM that change.

We'll be learning about the basic concepts like Svelte components and how to fetch and iterate over arrays of data. We'll also learn how to initialize a Svelte project, run a local development server and build the final bundle.


You need a few prerequisites to follow this tutorial comfortably:

familiarity with HTML, CSS, and JavaScript (ES6+)
Node.js and npm installed on your development machine

Node.js can be easily installed from the official website, or you can use NVM to install and manage multiple versions of Node on your system.

We'll be using a JSON API as a source of the news for our app, so you need to get an API key by simply creating an account for free and taking note of your API key.

Getting Started

Now, let's start building our Daily Planet news application by using the degit tool for generating Svelte projects.

You can either install degit globally on your system or use the npx tool to execute it from npm. Open a new terminal and run the following command:

npx degit sveltejs/template dailyplanetnews

Next, navigate inside your project's folder and run the development server using the following commands:

cd dailyplanetnews
npm run dev

Your dev server will be listening at http://localhost:5000. If you make any changes, they'll be rebuilt and live-reloaded into your running app.

Open the main.js file of your project, and you should find the following code:

import App from './App.svelte';

const app = new App({
  target: document.body,
  props: {
    name: 'world'
  }
});

export default app;

This is where the Svelte app is bootstrapped by creating and exporting an instance of the root component, conventionally called App. The component takes an object with a target and props attributes.

The target contains the DOM element where the component will be mounted, and props contains the properties that we want to pass to the App component. In this case, it's just a name with the world value.

Open the App.svelte file, and you should find the following code:

<script>
  export let name;
</script>

<style>
  h1 {
    color: purple;
  }
</style>

<h1>Hello {name}!</h1>

This is the root component of our application. All the other components will be children of App.

Components in Svelte use the .svelte extension for source files, which contain all the JavaScript, styles and markup for a component.

The export let name; syntax creates a component prop called name. We use variable interpolation—{...}—to display the value passed via the name prop.

You can simply use plain old JavaScript, CSS, and HTML that you are familiar with to create Svelte components. Svelte also adds some template syntax to HTML for variable interpolation and looping through lists of data, etc.

Since this is a small app, we can simply implement the required functionality in the App component.

In the <script> tag, import the onMount() method from "svelte" and define the API_KEY, articles, and URL variables which will hold the news API key, the fetched news articles and the endpoint that provides data:

<script>
  export let name;
  import { onMount } from "svelte";

  const API_KEY = "<YOUR_API_KEY_HERE>";
  const URL = `${API_KEY}`;
  let articles = [];
</script>

onMount is a lifecycle method. Here’s what the official tutorial says about that:

Every component has a lifecycle that starts when it is created and ends when it is destroyed. There are a handful of functions that allow you to run code at key moments during that lifecycle. The one you'll use most frequently is onMount, which runs after the component is first rendered to the DOM.

Next, let's use the fetch API to fetch data from the news endpoint and store the articles in the articles variable when the component is mounted in the DOM:

<script>
  // [...]
  onMount(async function() {
    const response = await fetch(URL);
    const json = await response.json();
    articles = json["articles"];
  });
</script>

Since the fetch() method returns a JavaScript Promise, we can use the async/await syntax to make the code look synchronous and eliminate callbacks.
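For comparison, here's a sketch of the same logic written with .then() callbacks instead of async/await. A fakeFetch function stands in for the browser's fetch() so the snippet is self-contained; the article data is made up.

```javascript
// Stand-in for fetch(): resolves to an object with a json() method,
// mimicking the Response interface just enough for this example.
const fakeFetch = () =>
  Promise.resolve({
    json: () => Promise.resolve({ articles: [{ title: "Luthor Denies Wrongdoing" }] }),
  });

let articles = [];

// The callback version of the onMount body shown above:
fakeFetch()
  .then((response) => response.json())
  .then((json) => {
    articles = json["articles"];
    console.log(articles.length); // 1
  });
```

With async/await, the same flow reads top to bottom with no nesting, which is why the tutorial prefers it.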

The post How to Build a News App with Svelte appeared first on SitePoint.

Real-time Location Tracking with React Native and PubNub

Sep 25, 2019


Building a Real-time Location Tracking App with React Native and PubNub

With ever-increasing usage of mobile apps, geolocation and tracking functionality can be found in a majority of apps. Real-time geolocation tracking plays an important role in many on-demand services, such as these:

taxi services like Uber, Lyft or Ola
food delivery services like Uber Eats, Foodpanda or Zomato
monitoring fleets of drones

In this guide, we’re going to use React Native to create a real-time location tracking app. We’ll build two React Native apps. One will act as a tracking app (called “Tracking app”) and the other will be the one that’s tracked (“Trackee app”).

Here’s what the final output for this tutorial will look like:



This tutorial requires a basic knowledge of React Native. To set up your development machine, follow the official guide here.

Apart from React Native, we’ll also be using PubNub, a third-party service that provides real-time data transfer and updates. We’ll use this service to update the user coordinates in real time.

Register for a free PubNub account here.

Since we’ll be using Google Maps on Android, we’ll also need a Google Maps API key, which you can obtain on the Google Maps Get API key page.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node v10.15.0
npm 6.4.1
yarn 1.16.0
react-native 0.59.9
react-native-maps 0.24.2
pubnub-react 1.2.0

Getting Started

If you want to have a look at the source code of our Tracker and Trackee apps right away, here are their GitHub links:

Trackee App repo
Tracker App repo

Let’s start with the Trackee app first.

Trackee App

To create a new project using react-native-cli, type this in the terminal:

$ react-native init trackeeApp
$ cd trackeeApp

Now let’s get to the fun part — the coding.

Add React Native Maps

Since we’ll be using Maps in our app, we’ll need a library for this. We’ll use react-native-maps.

Install react-native-maps by following the installation instructions here.

Add PubNub

Apart from maps, we’ll also install the PubNub React SDK to transfer our data in real time:

$ yarn add pubnub-react

After that, you can now run the app:

$ react-native run-ios
$ react-native run-android

You should see something like this on your simulator/emulator:

Trackee App

The post Real-time Location Tracking with React Native and PubNub appeared first on SitePoint.

How to Build Your First Telegram Chatbot with Node.js

Sep 18, 2019


So, this morning you woke up with the idea to develop a way to store and label interesting articles you've read. After playing with the idea, you figure a Telegram chatbot is the most convenient solution for this problem.

In this guide, we'll walk you through everything you need to know to build your first Telegram chatbot using JavaScript and Node.js.

To get started, we have to register our new bot with the so-called Botfather to receive our API access token.

Bot Registration with @BotFather

The first step towards our very own Telegram bot is registering the bot with the BotFather. The BotFather is a bot itself that makes your life much easier. It helps you with registering bots, changing the bot description, adding commands, and providing you with the API token for your bot.

Getting the API token is the most important step, as the token is what allows your code to perform tasks on behalf of the bot.

1. Finding the BotFather

The BotFather can be found on Telegram by searching for 'BotFather'. Click on the official BotFather, indicated with the white checkmark icon in the blue circle.

2. Registering a New Bot

Now that we've found BotFather, let’s talk to him! You can start the conversation by typing /newbot. BotFather will ask you to choose a name for your bot. This name can be anything and doesn’t have to be unique. To keep things simple, I named my bot ArticleBot.

Next, you will be prompted to input a username for the bot. The username must be unique and end in bot. Therefore, I chose michiel_article_bot, as that username was not yet taken. This will also be the username you use for looking up the bot in Telegram's search field.

BotFather will return a success message with your token to access the Telegram HTTP API. Make sure to store this token safely, and certainly don't share it with anyone else.

3. Modifying the Bot

We can further modify the bot by adding a description or setting the commands we wish the bot to know. You can message the bot with the text /setcommands. It will show you how to input the commands with the format command1 - Description.
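For our article-saving bot, the command list pasted into BotFather might look like the following. The command names here are hypothetical examples, not part of Telegram's API:

```
save - Save an article by sending its URL
list - List your saved articles
```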

The post How to Build Your First Telegram Chatbot with Node.js appeared first on SitePoint.

How to Build Unique, Beautiful Websites with Tailwind CSS

Sep 13, 2019


Build Unique and Beautiful Web Sites with Tailwind CSS

When thinking about what CSS framework to use for a new project, options like Bootstrap and Foundation readily jump to mind. They’re tempting to use because of their ready-to-use, pre-designed components, which developers can use with ease right away. This approach works well with relatively simple websites with a common look and feel. But as soon as we start building more complex, unique sites with specific needs, a couple of problems arise.

At some point, we need to customize certain components, create new components, and make sure the final codebase is unified and easy to maintain after the changes.

It's hard to satisfy the above needs with frameworks like Bootstrap and Foundation, which give us a bunch of opinionated and, in many cases, unwanted styles. As a result, we have to continuously solve specificity issues while trying to override the default styles. It doesn't sound like a fun job, does it?

Ready-to-use solutions are easy to implement, but inflexible and confined to certain boundaries. On the other hand, styling websites without a CSS framework is powerful and flexible, but isn’t easy to manage and maintain. So, what’s the solution?

The solution, as always, lies in the middle ground. We need to find and apply the right balance between the concrete and the abstract. A low-level CSS framework offers such a balance. There are several frameworks of this kind, and in this tutorial we'll explore the most popular one, Tailwind CSS.

What Is Tailwind?

Tailwind is more than a CSS framework, it's an engine for creating design systems. — Tailwind website

Tailwind is a collection of low-level utility classes. They can be used like Lego bricks to build any kind of component. The collection covers the most important CSS properties, but it can be easily extended in a variety of ways. With Tailwind, customization isn’t a pain in the neck anymore. The framework has great documentation, covering every class utility in detail and showing the ways it can be customized. All modern browsers, and IE11+, are supported.

Why Using Utility-first Framework?

A low-level, utility-first CSS framework like Tailwind has plenty of benefits. Let's explore the most significant of them:

You have greater control over elements' appearance. We can change and fine-tune an element's appearance much more easily with utility classes.
It's easy to manage and maintain in large projects, because you only maintain HTML files, instead of a large CSS codebase.
It's easier to build unique, custom website designs without fighting with unwanted styles.
It's highly customizable and extensible, which gives us unlimited flexibility.
It has a mobile-first approach and easy implementation of responsive design patterns.
There's the ability to extract common, repetitive patterns into custom, reusable components — in most cases without writing a single line of custom CSS.
It has self-explanatory classes. We can imagine how the styled element looks only by reading the classes.

Finally, as Tailwind's creators say:

it's just about impossible to think this is a good idea the first time you see it — you have to actually try it.

So, let's try it!

Getting Started with Tailwind

To demonstrate Tailwind's customization features, we need to install it via npm:

npm install tailwindcss

The next step is to create a styles.css file, where we include the framework styles using the @tailwind directive:

@tailwind base;
@tailwind components;
@tailwind utilities;

After that, we run the npx tailwind init command, which creates a minimal tailwind.config.js file, where we'll put our customization options during the development. The generated file contains the following:

module.exports = {
  theme: {},
  variants: {},
  plugins: [],
}

The next step is to build the styles in order to use them:

npx tailwind build styles.css -o output.css

Finally, we link the generated output.css file and Font Awesome in our HTML:

<link rel="stylesheet" type="text/css" href="output.css">
<link rel="stylesheet" href="">

And now, we’re ready to start creating.

Building a One-page Website Template

In the rest of the tutorial, we'll build a one-page website template using the power and flexibility of Tailwind's utility classes.

Here you can see the template in action.

I'm not going to explain every single utility (which would be boring and tiresome), so I suggest you use the Tailwind cheatsheet as a quick reference. It contains all the available utilities with their effects, plus direct links to the documentation.

We'll build the template section by section. They are Header, Services, Projects, Team, and Footer.

First, we wrap all the sections in a container:

<div class="container mx-auto">
  <!-- Put the sections here -->
</div>

Header (Logo, Navigation)

The first section — Header — will contain a logo on the left side and navigation links on the right side. Here’s how it will look:

The site header

Now, let's explore the code behind it.

<div class="flex justify-between items-center py-4 bg-blue-900">
  <div class="flex-shrink-0 ml-10 cursor-pointer">
    <i class="fas fa-drafting-compass fa-2x text-orange-500"></i>
    <span class="ml-1 text-3xl text-blue-200 font-semibold">WebCraft</span>
  </div>
  <i class="fas fa-bars fa-2x visible md:invisible mr-10 md:mr-0 text-blue-200 cursor-pointer"></i>
  <ul class="hidden md:flex overflow-x-hidden mr-10 font-semibold">
    <li class="mr-6 p-1 border-b-2 border-orange-500">
      <a class="text-blue-200 cursor-default" href="#">Home</a>
    </li>
    <li class="mr-6 p-1">
      <a class="text-white hover:text-blue-300" href="#">Services</a>
    </li>
    <li class="mr-6 p-1">
      <a class="text-white hover:text-blue-300" href="#">Projects</a>
    </li>
    <li class="mr-6 p-1">
      <a class="text-white hover:text-blue-300" href="#">Team</a>
    </li>
    <li class="mr-6 p-1">
      <a class="text-white hover:text-blue-300" href="#">About</a>
    </li>
    <li class="mr-6 p-1">
      <a class="text-white hover:text-blue-300" href="#">Contacts</a>
    </li>
  </ul>
</div>

As you can see, the classes are pretty self-explanatory as I mentioned above. We'll explore only the highlights.

First, we create a flex container and center its items horizontally and vertically. We also add some top and bottom padding, which Tailwind combines in a single py utility. As you may guess, there’s also a px variant for left and right. We'll see that this type of shorthand is broadly used in many of the other utilities. As a background color, we use the darkest blue (bg-blue-900) from Tailwind's color palette. The palette contains several colors with shades for each color distributed from 100 to 900. For example, blue-100, blue-200, blue-300, etc.

In Tailwind, we apply a color to a property by specifying the property followed by the color and the shade number. For example, text-white, bg-gray-800, border-red-500. Easy peasy.
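This naming convention is regular enough to generate class names programmatically. Below is a tiny sketch in plain JavaScript (colorClass is a hypothetical helper for illustration, not part of Tailwind):

```javascript
// Hypothetical helper illustrating Tailwind's naming convention:
// <property>-<color>-<shade>, e.g. text-blue-200 or bg-gray-800.
function colorClass(property, color, shade) {
  return `${property}-${color}-${shade}`;
}

colorClass("text", "blue", 200);   // "text-blue-200"
colorClass("bg", "gray", 800);     // "bg-gray-800"
colorClass("border", "red", 500);  // "border-red-500"
```

Utilities like text-white are the exception: named colors without shades drop the number entirely.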

For the logo on the left side, we use a div element, which we set not to shrink (flex-shrink-0) and move it a bit away from the edge by applying the margin-left property (ml-10). Next we use a Font Awesome icon whose classes perfectly blend with those of Tailwind. We use one of them to make the icon orange. For the textual part of the logo, we use big, light blue, semi-bolded text, with a small offset to the right.

In the middle, we add an icon that will be visible only on mobile. Here we use one of the responsive breakpoint prefixes (md). Tailwind, like Bootstrap and Foundation, follows the mobile-first approach. This means that when we use utilities without prefix (visible), they apply all the way from the smallest to the largest devices. If we want different styling for different devices, we need to use the breakpoint prefixes. So, in our case the icon will be visible on small devices, and invisible (md:invisible) on medium and beyond.

At the right side we put the nav links. We style the Home link differently, showing that it’s the active link. We also move the navigation from the edge and set it to be hidden on overflow (overflow-x-hidden). The navigation will be hidden (hidden) on mobile and set to flex (md:flex) on medium and beyond.

You can read more about responsiveness in the documentation.


Let's now create the next section, Services. Here’s how it will look:

The Services section

And here’s the code:

<div class="w-full p-6 bg-blue-100">
  <div class="w-48 mx-auto pt-6 border-b-2 border-orange-500 text-center text-2xl text-blue-700">OUR SERVICES</div>
  <div class="p-2 text-center text-lg text-gray-700">We offer the best web development solutions.</div>
  <div class="flex justify-center flex-wrap p-10">
    <div class="relative w-48 h-64 m-5 bg-white shadow-lg">
      <div class="flex items-center w-48 h-20 bg-orange-500">
        <i class="fas fa-bezier-curve fa-3x mx-auto text-white"></i>
      </div>
      <p class="mx-2 py-2 border-b-2 text-center text-gray-700 font-semibold uppercase">UI Design</p>
      <p class="p-2 text-sm text-gray-700">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean ac est massa.</p>
      <div class="absolute right-0 bottom-0 w-8 h-8 bg-gray-300 hover:bg-orange-300 text-center cursor-pointer">
        <i class="fas fa-chevron-right mt-2 text-orange-500"></i>
      </div>
    </div>
    ...
  </div>
</div>

We create a section with light blue background. Then we add an underlined title and a subtitle.

Next, we use a flex container for the services items. We use flex-wrap so the items will wrap on resize. We set the dimensions for each card and add some space and a drop shadow. Each card has a colored section with a topic icon, a title, and a description. And we also put a button with an icon in the bottom-right corner.

Here we use one of the pseudo-class variants (hover, focus, etc.). They’re used in the same way as responsive breakpoints. We use the pseudo-class prefix, followed by a colon and the property name (hover:bg-orange-300).

You can learn more about pseudo-class variants in the documentation.

For brevity, I show the code only for the first card. The others are similar; you only have to change the colors, icons, and titles. See the final HTML file in the GitHub repo for reference.

The post How to Build Unique, Beautiful Websites with Tailwind CSS appeared first on SitePoint.

How to Automatically Optimize Responsive Images in Gatsby

Sep 11, 2019


Image optimization — at least in my experience — has always been a major pain when building speedy websites. Balancing image quality and bandwidth efficiency is a tough act without the right tools. Photo editing tools such as Photoshop are great for retouching, cropping and resizing bitmap images. Unfortunately, they are not that good at creating 100% optimized images for the web.

Luckily, we have extension packages for build tools that can optimize images for us quickly:

Gulp: gulp-imagemin
Grunt: grunt-imagemin
Webpack: imagemin-webpack
Parcel: parcel-plugin-imagemin

Unfortunately, image optimization alone is not enough. You need to make sure that the entire website is responsive and looks great at all screen sizes. This can easily be done through CSS, but here lies the problem:

Should you optimize your image for large screens or small screens?

If the majority of your audience is using mobile devices to access your site, then the logical choice is to optimize images for small screens. However, it's likely that a significant source of revenue is coming from visitors with large screens over 17". You definitely wouldn't want to neglect them.

Luckily, we have technology that allows us to deliver optimized responsive images for different screen sizes. This means we need to generate multiple optimized images with different resolutions fit for specific screen sizes or responsive breakpoints.
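The multiple-resolutions idea can be sketched in plain JavaScript. Here, buildSrcSet is a hypothetical helper (not part of Gatsby) that produces the srcset string browsers use to pick the best-sized image for the current screen:

```javascript
// Hypothetical sketch: given a base file name and a list of target
// widths, build the srcset string for a responsive <img> element.
function buildSrcSet(baseName, widths) {
  return widths.map(w => `${baseName}-${w}w.jpg ${w}w`).join(", ");
}

buildSrcSet("hero", [320, 768, 1280]);
// "hero-320w.jpg 320w, hero-768w.jpg 768w, hero-1280w.jpg 1280w"
```

Gatsby generates both the resized files and markup along these lines for you at build time.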

For WordPress site owners, this kind of image optimization requires the use of a plugin and a third-party service. The creation of these responsive images cannot be done on the hosting server without significantly slowing down the site for users, hence the need for a third-party service.

If you are using Gatsby to run your website, then you are in luck. This feature is built-in and already configured for you to optimize your responsive images. You just need to drop in some images and write a bit of code to link up your responsive images with your web page. When you run the gatsby build command, the images are optimized for you. This saves you from requiring a third-party service to perform the optimization for you. It's simply done on your deployment machine.

In the subsequent sections, we are going to learn:

How image optimization works in Gatsby
How to optimize images on a web page
How to optimize images in a Markdown post

Prerequisites

Before we start, I would like to note that this tutorial is for developers who are just starting with Gatsby and would like to learn specifically about how to handle images. I am going to assume you already have a good foundation in the following topics:

The post How to Automatically Optimize Responsive Images in Gatsby appeared first on SitePoint.

Create an Offline-first React Native App Using WatermelonDB

Sep 10, 2019


React Native has different database storage mechanisms for different mobile app purposes. Simple structures — such as user settings, app settings, and other key-value pair data — can be handled easily using async storage or secure storage.

Other applications — such as Twitter clones — fetch data from the server and directly show it to the user. They maintain a cache of data, and if a user needs to interact with any document, they call the APIs directly.

So not all the applications require a database.

When We Need a Database

Applications such as Nozbe (a to-do app), Expense (a tracker), and Splitwise (for splitting bills) need to work offline. And to do so, they need a way to store data locally and sync it up with the server. This type of application is called an offline-first app. Over time, these apps collect a lot of data, and it becomes harder to manage that data directly — so a database is needed to manage it efficiently.

Options in React Native

When developing an app, choose the database that best fits your requirements. If two options are available, then go with the one that has better documentation and quicker response to issues. Below are some of the best known options available for React Native:

WatermelonDB: an open-source reactive database that can be used with any underlying database. By default, it uses SQLite as the underlying database in React Native.
SQLite (React Native, Expo): the oldest, most used, battle-tested and well-known solution. It’s available for most platforms, so if you’ve developed an application in another mobile app development framework, you might already be familiar with it.
Realm (React Native): an open-source solution, but it also has an enterprise edition with lots of other features. They have done a great job and many well-known companies use it.
Firebase (React Native, Expo): a Google service specifically for the mobile development platform. It offers lots of functionality, storage being just one of them. But it does require you to stay within their ecosystem to utilize it.
RxDB: a real-time database for the Web. It has good documentation, a good rating on GitHub (> 9K stars), and is also reactive.

Prerequisites

I assume you have knowledge about basic React Native and its build process. We’re going to use react-native-cli to create our application.

I’d also suggest setting up an Android or iOS development environment while setting up the project, as you may face many issues, and the first step in debugging is keeping the IDE (Android Studio or Xcode) opened to see the logs.

Note: you can check out the official guide for installing dependencies here for more information. As the official guidelines are very concise and clear, we won’t be covering that topic here.

To set up a virtual device or physical device, follow these guides:

using a physical device using a virtual device

Note: there’s a more JavaScript-friendly toolchain named Expo. The React Native community has also started promoting it, but I haven’t come across a large-scale, production-ready application that uses Expo yet, and Expo port isn’t currently available for those using a database such as Realm — or in our case, WatermelonDB.

App Requirements

We’ll create a movie search application with a title, poster image, genre, and release date. Each movie will have many reviews.

The application will have three screens.

Home will show two buttons — one to generate dummy records, and a second to add new movie. Below it, there will be one search input that can be used to query movie titles from the database. It will show the list of movies below the search bar. If any name is searched, the list will only show the searched movies.

home screen view

Clicking on any movie will open a Movie Dashboard, from where all its reviews can be checked. A movie can be edited or deleted, or a new review can be added from this screen.

movie dashboard

The third screen will be Movie Form, which is used to create/update a movie.

movie form

The source code is available on GitHub.

Why We Chose WatermelonDB (features)

We need to create an offline-first application, so a database is a must.

Features of WatermelonDB

Let’s look at some of the features of WatermelonDB.

Fully observable
A great feature of WatermelonDB is its reactive nature. Any object can be observed using observables, and it will automatically rerender our components whenever the data changes. We don’t have to make any extra efforts to use WatermelonDB. We wrap the simple React components and enhance them to make them reactive. In my experience, it just works seamlessly, and we don’t have to care about anything else. We make the changes in the object and our job’s done! It’s persisted and updated at all the places in the application.

SQLite under the hood for React Native
In a modern browser, just-in-time compilation is used to improve speed, but it isn’t available on mobile devices. Mobile hardware is also slower than desktop hardware. Due to all these factors, JavaScript runs slower in a mobile application. To overcome this, WatermelonDB doesn’t fetch anything until it’s needed. It uses lazy loading, with SQLite as the underlying database on a separate thread, to provide fast responses.
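The lazy-loading pattern itself can be sketched in a few lines of plain JavaScript. This is a toy illustration of the idea, not WatermelonDB's implementation:

```javascript
// Toy lazy loader: defer the expensive work until first access,
// then cache the result for subsequent calls.
function lazy(loadFn) {
  let loaded = false;
  let value;
  return () => {
    if (!loaded) {
      value = loadFn();
      loaded = true;
    }
    return value;
  };
}

let fetchCount = 0;
const getMovies = lazy(() => {
  fetchCount += 1; // pretend this is a slow database read
  return ["Movie A", "Movie B"];
});

// Nothing is fetched until the first call; repeat calls hit the cache.
getMovies();
getMovies();
fetchCount; // 1
```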

Sync primitives and sync adapter
Although WatermelonDB is just a local database, it also provides sync primitives and sync adapters. It makes it pretty easy to use with any of our own back-end databases. We just need to conform to the WatermelonDB sync protocol on the back end and provide the endpoints.

Further features include:

Statically typed using Flow
Available for all platforms

Dev Env and WatermelonDB Setup (v0.0)

We’re going to use react-native-cli to create our application.

Note: you may be able to use it with ExpoKit or Ejecting from Expo.

If you want to skip this part then clone the source repo and checkout the v0.0 branch.

Start a new project:

react-native init MovieDirectory
cd MovieDirectory

Install dependencies:

npm i @nozbe/watermelondb @nozbe/with-observables react-navigation react-native-gesture-handler react-native-fullwidth-image native-base rambdax

Below is the list of installed dependencies and their uses:

native-base: a UI library that will be used for the look and feel of our app.
react-native-fullwidth-image: for showing full-screen responsive images. (Sometimes it can be a pain to calculate the width and height while maintaining the aspect ratio, so it’s better to use an existing community solution.)
@nozbe/watermelondb: the database we’ll be using.
@nozbe/with-observables: contains the decorators (@) that will be used in our models.
react-navigation: used for managing routes/screens.
react-native-gesture-handler: a dependency of react-navigation.
rambdax: used to generate a random number while creating dummy data.

Open your package.json and replace the scripts with the following code:

"scripts": {
  "start": "node node_modules/react-native/local-cli/cli.js start",
  "start:ios": "react-native run-ios",
  "start:android": "react-native run-android",
  "test": "jest"
}

This will be used to run our application in the respective device.

Set Up WatermelonDB

We need to add a Babel plugin to convert our decorators, so install it as a dev dependency:

npm install -D @babel/plugin-proposal-decorators

Create a new file .babelrc in the root of the project:

// .babelrc
{
  "presets": ["module:metro-react-native-babel-preset"],
  "plugins": [["@babel/plugin-proposal-decorators", { "legacy": true }]]
}

Now use the following guides for your target environment:

iOS Android

Open the android folder in Android Studio and sync the project. Otherwise, it will give you an error when running the application for the first time. Do the same if you’re targeting iOS.

Before we run the application, we need to link the react-native-gesture-handler package, a dependency of react-navigation, and react-native-vector-icons, a dependency of native-base. By default, to keep the binary size of the application small, React Native doesn’t contain all the code to support native features. So whenever we need to use a particular feature, we can use the link command to add the native dependencies. Let’s link our dependencies:

react-native link react-native-gesture-handler
react-native link react-native-vector-icons

Run the application:

npm run start:android # or npm run start:ios

If you get an error for missing dependencies, run npm i.

The code up to here is available under the v0.0 branch.

version 0


As we’ll be creating a database application, a lot of the code will be back-end only, and we won’t be able to see much on the front end. It might seem long, but have patience and follow the tutorial till the end. You won’t regret it!

The WatermelonDB workflow can be categorized into three main parts:

Schema: used to define the database table schema.
Models: the ORM-mapped objects. We’ll interact with these throughout our application.
Actions: used to perform various CRUD operations on our objects/rows. We can perform an action directly using a database object, or we can define functions in our models to perform these actions. Defining them in models is the better practice, and that’s the approach we’ll use.

Let’s get started with our application.

Initialize DB Schema and WatermelonDB (v0.1)

We’ll define our schema, models and database object in our application. We won’t be able to see much in the application yet, but this is the most important step. Here we’ll check that our application works correctly after defining everything. If anything goes wrong, it will be easy to debug at this stage.

Project Structure

Create a new src folder in the root. This will be the root folder for all of our React Native code. The models folder is used for all of our database-related files. It will behave as our DAO (Data Access Object) folder. This is a term used for an interface to some type of database or other persistence mechanism. The components folder will have all of our React components. The screens folder will have all the screens of our application.

mkdir src && cd src
mkdir models
mkdir components
mkdir screens

Schema

Go to the models folder, create a new file schema.js, and use the following code:

// schema.js
import { appSchema, tableSchema } from "@nozbe/watermelondb";

export const mySchema = appSchema({
  version: 2,
  tables: [
    tableSchema({
      name: "movies",
      columns: [
        { name: "title", type: "string" },
        { name: "poster_image", type: "string" },
        { name: "genre", type: "string" },
        { name: "description", type: "string" },
        { name: "release_date_at", type: "number" }
      ]
    }),
    tableSchema({
      name: "reviews",
      columns: [
        { name: "body", type: "string" },
        { name: "movie_id", type: "string", isIndexed: true }
      ]
    })
  ]
});

We’ve defined two tables — one for movies, and another for their reviews. The code itself is self-explanatory. Both tables have related columns.

Note that, as per WatermelonDB’s naming convention, all the IDs end with an _id suffix, and the date field ends with the _at suffix.

isIndexed is used to add an index to a column. Indexing makes querying by a column faster, at the slight expense of create/update speed and database size. We’ll be querying all the reviews by movie_id, so we should mark it as indexed. If you want to make frequent queries on any boolean column, you should index it as well. However, you should never index date (_at) columns.
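To see why an index helps, here’s a plain-JavaScript sketch with hypothetical data (not WatermelonDB internals). A full scan checks every row, while an index maps movie_id straight to its reviews:

```javascript
const reviews = [
  { id: "r1", movie_id: "m1", body: "Great!" },
  { id: "r2", movie_id: "m2", body: "Okay." },
  { id: "r3", movie_id: "m1", body: "Loved it." },
];

// Without an index: a full scan of every row on each query.
const scan = movieId => reviews.filter(r => r.movie_id === movieId);

// With an index: build a map once, then look up in constant time.
// The trade-off is the extra work of maintaining the map on writes.
const index = new Map();
for (const r of reviews) {
  if (!index.has(r.movie_id)) index.set(r.movie_id, []);
  index.get(r.movie_id).push(r);
}
const lookup = movieId => index.get(movieId) || [];

scan("m1").length;   // 2
lookup("m1").length; // 2
```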


Create a new file models/Movie.js and paste in this code:

// models/Movie.js
import { Model } from "@nozbe/watermelondb";
import { field, date, children } from "@nozbe/watermelondb/decorators";

export default class Movie extends Model {
  static table = "movies";
  static associations = {
    reviews: { type: "has_many", foreignKey: "movie_id" }
  };

  @field("title") title;
  @field("poster_image") posterImage;
  @field("genre") genre;
  @field("description") description;
  @date("release_date_at") releaseDateAt;
  @children("reviews") reviews;
}

Here we’ve mapped each column of the movies table with each variable. Note how we’ve mapped reviews with a movie. We’ve defined it in associations and also used @children instead of @field. Each review will have a movie_id foreign key. These review foreign key values are matched with id in the movie table to link the reviews model to the movie model.

For date also, we need to use the @date decorator so that WatermelonDB will give us the Date object instead of a simple number.
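In plain JavaScript, the conversion the @date decorator spares us looks like this (assuming the number is stored as epoch milliseconds, which is an assumption for illustration):

```javascript
// The schema stores release_date_at as a number. Assuming epoch
// milliseconds, this is the manual conversion @date handles for us:
const raw = 1565222400000;           // the value as stored in SQLite
const releaseDateAt = new Date(raw); // the Date object @date hands back

releaseDateAt instanceof Date;   // true
releaseDateAt.getTime() === raw; // true
```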

Now create a new file models/Review.js. This will be used to map each review of a movie.

// models/Review.js
import { Model } from "@nozbe/watermelondb";
import { field, relation } from "@nozbe/watermelondb/decorators";

export default class Review extends Model {
  static table = "reviews";
  static associations = {
    movie: { type: "belongs_to", key: "movie_id" }
  };

  @field("body") body;
  @relation("movies", "movie_id") movie;
}

We have created all of our required models. We can directly use them to initialize our database, but if we want to add a new model, we again have to make a change where we initialize the database. So to overcome this, create a new file models/index.js and add the following code:

// models/index.js
import Movie from "./Movie";
import Review from "./Review";

export const dbModels = [Movie, Review];

Thus we only have to make changes in our models folder. This makes our DAO folder more organized.

Initialize the Database

Now to use our schema and models to initialize our database, open index.js, which should be in the root of our application. Add the code below:

// index.js
import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { Database } from "@nozbe/watermelondb";
import SQLiteAdapter from "@nozbe/watermelondb/adapters/sqlite";
import { mySchema } from "./src/models/schema";
import { dbModels } from "./src/models/index.js";

// First, create the adapter to the underlying database:
const adapter = new SQLiteAdapter({
  dbName: "WatermelonDemo",
  schema: mySchema
});

// Then, make a Watermelon database from it!
const database = new Database({
  adapter,
  modelClasses: dbModels
});

AppRegistry.registerComponent(appName, () => App);

We create an adapter using our schema for the underlying database. Then we pass this adapter and our dbModels to create a new database instance.

At this point, it’s a good idea to check whether the application is working. So run it and check:

npm run start:android # or npm run start:ios

We haven’t made any changes in the UI, so the screen will look similar to before if everything worked out.

All the code up to this part is under the v0.1 branch.

The post Create an Offline-first React Native App Using WatermelonDB appeared first on SitePoint.

SitePoint Premium New Releases: Design Systems, SVG & React Native

Sep 6, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Design Systems and Living Styleguides

Create structured, efficient and consistent designs with design systems and styleguides. Explore materials, typography, vertical rhythm, color, icons and more.

➤ Read Design Systems and Living Styleguides.

Build a Real-time Location Tracking App with React Native and PubNub

In this guide, we’re going to use React Native to create real-time location tracking apps. We’ll build two React Native apps — a tracking app and one that’s tracked.

➤ Read Build a Real-time Location Tracking App with React Native and PubNub.

Practical SVG

From software basics to build tools to optimization, you’ll learn techniques for a solid workflow.

Go deeper: create icon systems, explore sizing and animation, and understand when and how to implement fallbacks. Get your images up to speed and look sharp!

➤ Read Practical SVG.

Create an Offline-first React Native App Using WatermelonDB

In this tutorial we’ll create an offline-first movie search application with a title, poster image, genre, and release date. Each movie will have many reviews. We'll use WatermelonDB to provide the database functionality for our app.

➤ Read Create an Offline-first React Native App Using WatermelonDB.

And More to Come…

We're releasing new content on SitePoint Premium regularly, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Design Systems, SVG & React Native appeared first on SitePoint.

Build a Real-time Voting App with Pusher, Node and Bootstrap

Sep 4, 2019


In this article, I'll walk you through building a full-stack, real-time Harry Potter house voting web application.

Real-time apps usually use WebSockets, a relatively new type of transfer protocol, as opposed to HTTP, which is a single-way communication that happens only when the user requests it. WebSockets allow for persistent communication between the server and the user, and all those users connected with the application, as long as the connection is kept open.

A real-time web application is one where information is transmitted (almost) instantaneously between users and the server (and, by extension, between users and other users). This is in contrast with traditional web apps where the client has to ask for information from the server. — Quora

Our Harry Potter voting web app will show options (all four houses) and a chart on the right side that updates itself when a connected user votes.

To give you a brief idea of look and feel, the final application is going to look like this:

Harry Potter with Chart JS

Here's a small preview of how the real-time application works:

To make our application real-time, we’re going to use Pusher and WebSockets. Pusher sits as a real-time layer between your servers and your clients. It maintains persistent connections to the clients — over a WebSocket if possible, and falling back to HTTP-based connectivity — so that, as soon as your servers have new data to push to the clients, they can do so instantly via Pusher.
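The push model can be illustrated with a minimal in-memory pub/sub sketch. This is a toy example of the concept, not the Pusher API: subscribers register a callback once, and the server "pushes" new data to all of them instead of waiting to be asked.

```javascript
// Toy pub/sub channel: subscribe once, receive every future event.
function createChannel() {
  const subscribers = [];
  return {
    subscribe(callback) { subscribers.push(callback); },
    trigger(event, data) { subscribers.forEach(cb => cb(event, data)); },
  };
}

const votes = createChannel();
const received = [];

// A connected client binds to the channel...
votes.subscribe((event, data) => received.push({ event, data }));

// ...and the server pushes a new vote the moment it happens.
votes.trigger("vote", { house: "Gryffindor" });
// received: [{ event: "vote", data: { house: "Gryffindor" } }]
```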

Building our Application

Let’s create our fresh application using the command npm init. You’ll be interactively asked a few questions on the details of your application. Here's what I had:

➜ Harry-Potter-Pusher $ npm init
{
  "name": "harry-potter-pusher",
  "version": "1.0.0",
  "description": "A real-time voting application using Harry Potter's house selection for my article for Pusher.",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+"
  },
  "keywords": [
    "Harry_Potter",
    "Pusher",
    "Voting",
    "Real_Time",
    "Web_Application"
  ],
  "author": "Praveen Kumar Purushothaman",
  "license": "ISC",
  "bugs": {
    "url": ""
  },
  "homepage": ""
}
Is this OK? (yes)

So, I left most settings with default values. Now it's time to install dependencies.

Installing Dependencies

We need Express, body-parser, Cross Origin Resource Sharing (CORS), Mongoose and Pusher installed as dependencies. To install everything in a single command, use the following. You can also have a glance at what this command outputs.

➜ Harry-Potter-Pusher $ npm i express body-parser cors pusher mongoose
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN ajv-keywords@3.2.0 requires a peer of ajv@^6.0.0 but none is installed. You must install peer dependencies yourself.
+ pusher@2.1.2
+ body-parser@1.18.3
+ mongoose@5.2.6
+ cors@2.8.4
+ express@4.16.3
added 264 packages in 40.000s

Requiring Our Modules

Since this is an Express application, we need to include express() as the first thing. While doing it, we also need some accompanying modules. So, initially, let’s start with this:

const express = require("express");
const path = require("path");
const bodyParser = require("body-parser");
const cors = require("cors");

Creating the Express App

Let’s start with building our Express application now. To start with, we need to get the returned object of the express() function assigned to a new variable app:

const app = express();

Serving Static Assets

Adding the above line after the initial set of includes will initialize our app as an Express application. The next thing we need to do is to set up the static resources. Let’s create a new directory in our current project called public and let’s use Express's static middleware to serve the static files. Inside the directory, let’s create a simple index.html file that says “Hello, World”:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <title>Hello, World</title>
  </head>
  <body>
    Hello, World!
  </body>
</html>

To serve the static files, we have a built-in .use() function with express.static() in Express. The syntax is as follows:

app.use( express.static( path.join(__dirname, "public") ) );

We also need to use the body parser middleware for getting the HTTP POST content as JSON to access within the req.body. We'll also use urlencoded to get the middleware that only parses urlencoded bodies and only looks at requests where the Content-Type header matches the type option. This parser accepts only UTF-8 encoding of the body and supports automatic inflation of gzip and deflate encodings:

app.use( bodyParser.json() );
app.use( bodyParser.urlencoded( { extended: false } ) );
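To see what the urlencoded parser produces, here’s a sketch of the parsing step using the built-in URLSearchParams (an illustration of the idea, not body-parser’s internals):

```javascript
// A form POST sends a body like "name=Harry&house=Gryffindor".
// The urlencoded middleware turns it into the object we read as req.body:
const rawBody = "name=Harry&house=Gryffindor";
const parsed = Object.fromEntries(new URLSearchParams(rawBody));

parsed.name;  // "Harry"
parsed.house; // "Gryffindor"
```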

To allow cross-domain requests, we need to enable CORS. Let’s enable the CORS module by using the following code:

app.use( cors() );

Now all the initial configuration has been set. All we need to do now is to set a port and listen to the incoming connections on the specific port:

const port = 3000;
app.listen(port, () => {
  console.log(`Server started on port ${port}.`);
});

Make sure your final app.js looks like this:

const express = require("express");
const path = require("path");
const bodyParser = require("body-parser");
const cors = require("cors");

// Create an App.
const app = express();

// Serve the static files from public.
app.use( express.static( path.join(__dirname, "public") ) );

// Include the body-parser middleware.
app.use( bodyParser.json() );
app.use( bodyParser.urlencoded( { extended: false } ) );

// Enable CORS.
app.use( cors() );

// Set the port.
const port = 3000;

// Listen to incoming connections.
app.listen(port, () => {
  console.log(`Server started on port ${port}.`);
});

Run the command to start the server:

$ npm run dev

Open http://localhost:3000/ in a new tab and see the magic. You should see a page with “Hello, World!”.

Preview of Hello World in Browser

The post Build a Real-time Voting App with Pusher, Node and Bootstrap appeared first on SitePoint.

State Management in React Native

Sep 3, 2019


State Management in React Native

Managing state is one of the most difficult concepts to grasp while learning React Native, as there are so many ways to do it. There are countless state management libraries on the npm registry — such as Redux — and there are endless libraries built on top of other state management libraries to simplify the original library itself — like Redux Easy. Every week, a new state management library is introduced in React, but the base concepts of maintaining the application state have remained the same since the introduction of React.

The most common way to set state in React Native is by using React’s setState() method. We also have the Context API to avoid prop drilling and pass the state down many levels without passing it to individual children in the tree.

More recently, Hooks arrived in React v16.8.0, offering a new pattern that simplifies the use of state in React. React Native gained Hooks support in v0.59.

In this tutorial, we’ll learn about what state actually is, and about the setState() method, the Context API and React Hooks. This is the foundation of setting state in React Native. All the libraries are made on top of the above base concepts. So once you know these concepts, understanding a library or creating your own state management library will be easy.

What Is a State?

Anything that changes over time is known as state. If we had a Counter app, the state would be the counter itself. If we had a to-do app, the list of to-dos would change over time, so this list would be the state. Even an input element is in a sense state, as it changes over time as the user types into it.

Intro to setState

Now that we know what state is, let’s understand how React stores it.

Consider a simple counter app:

import React from 'react'
import { Text, Button } from 'react-native'

class Counter extends React.Component {
  constructor(props) {
    super(props)
    this.state = { counter: 0 }
  }

  render() {
    const { counter } = this.state
    return (
      <>
        <Text>{counter}</Text>
        <Button onPress={() => {}} title="Increment" />
        <Button onPress={() => {}} title="Decrement" />
      </>
    )
  }
}

In this app, we store our state inside the constructor in an object and assign it to this.state.

Remember, state can only be an object. You can’t directly store a number. That’s why we created a counter variable inside an object.

In the render method, we destructure the counter property from this.state and render it inside a Text component. Note that currently it will only show a static value (0).

You can also write your state outside of the constructor as follows:

import React from 'react'
import { Text, Button } from 'react-native'

class Counter extends React.Component {
  state = { counter: 0 }

  render() {
    const { counter } = this.state
    return (
      <>
        <Text>{counter}</Text>
        <Button onPress={() => {}} title="Increment" />
        <Button onPress={() => {}} title="Decrement" />
      </>
    )
  }
}

Now let’s suppose we want the + and - button to work. We must write some code inside their respective onPress handlers:

import React from 'react'
import { Text, Button } from 'react-native'

class Counter extends React.Component {
  state = { counter: 0 }

  render() {
    const { counter } = this.state
    return (
      <>
        <Text>{counter}</Text>
        <Button
          onPress={() => {
            this.setState({ counter: counter + 1 })
          }}
          title="Increment"
        />
        <Button
          onPress={() => {
            this.setState({ counter: counter - 1 })
          }}
          title="Decrement"
        />
      </>
    )
  }
}

Now when we click the + and - buttons, React re-renders the component. This is because the setState() method was used.

The setState() method re-renders the part of the tree that has changed. In this case, it re-renders the Text component.

So if we click on +, it increments the counter by 1. If we click on -, it decrements the counter by 1.

Remember that you can’t change the state directly by mutating this.state; doing this.state.counter = counter + 1 won’t trigger a re-render.

Also, state changes are asynchronous operations, which means if you read this.state immediately after calling this.setState, it won’t reflect recent changes.

This is where we use “function as a callback” syntax for setState(), as follows:

import React from 'react'
import { Text, Button } from 'react-native'

class Counter extends React.Component {
  state = { counter: 0 }

  render() {
    const { counter } = this.state
    return (
      <>
        <Text>{counter}</Text>
        <Button
          onPress={() => {
            this.setState(prevState => ({ counter: prevState.counter + 1 }))
          }}
          title="Increment"
        />
        <Button
          onPress={() => {
            this.setState(prevState => ({ counter: prevState.counter - 1 }))
          }}
          title="Decrement"
        />
      </>
    )
  }
}

The “function as a callback” syntax provides the most recent state — here named prevState — as a parameter to the callback passed to setState().

This way we get the recent changes to state.
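To see why the updater-function form matters, here's a toy model in plain JavaScript — an illustrative sketch, not React's actual implementation — that queues updates the way batched setState calls are queued before being applied:

```javascript
// Toy store: setState() only queues an update; flush() applies the queue,
// mimicking how React batches state updates asynchronously.
function createStore(initialState) {
  let state = initialState;
  const queue = [];
  return {
    setState(update) { queue.push(update); },
    flush() {
      for (const update of queue) {
        // Function updaters receive the latest state; objects replace it.
        state = typeof update === 'function' ? update(state) : update;
      }
      queue.length = 0;
      return state;
    },
    get state() { return state; },
  };
}

const store = createStore({ counter: 0 });

// Object form: both calls read counter while it's still 0, so one
// increment is lost when the queue is applied.
store.setState({ counter: store.state.counter + 1 });
store.setState({ counter: store.state.counter + 1 });
console.log(store.flush().counter); // 1

// Functional form: each updater receives the state left by the previous one.
store.setState(prev => ({ counter: prev.counter + 1 }));
store.setState(prev => ({ counter: prev.counter + 1 }));
console.log(store.flush().counter); // 3
```

The lost increment in the object form is exactly the bug the “function as a callback” syntax avoids.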

What are Hooks?

Hooks are a new addition to React v16.8. Earlier, you could only use state by making a class component. You couldn’t use state in a functional component itself.

With the addition of Hooks, you can use state in a functional component itself.

Let’s convert our above Counter class component to a Counter functional component and use React Hooks:

import React from 'react'
import { Text, Button } from 'react-native'

const Counter = () => {
  const [counter, setCounter] = React.useState(0)

  return (
    <>
      <Text>{counter}</Text>
      <Button
        onPress={() => {
          setCounter(counter + 1)
        }}
        title="Increment"
      />
      <Button
        onPress={() => {
          setCounter(counter - 1)
        }}
        title="Decrement"
      />
    </>
  )
}

Notice that we’ve reduced our Class component from 18 to just 12 lines of code. Also, the code is much easier to read.

Let’s review the above code. Firstly, we use React’s built-in useState Hook. The state tracked by useState can be of any type — a number, a string, an array, a boolean, an object, or any type of data — unlike the state used with setState(), which can only be an object.

In our counter example, it takes a number and returns an array with two values.

The first value in the array is the current state value. So counter is 0 currently.

The second value in the array is the function that lets you update the state value.

In our onPress, we can then update counter using setCounter directly.

Thus our increment function becomes setCounter(counter + 1) and our decrement function becomes setCounter(counter - 1).
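To build intuition for what useState hands back, here's a drastically simplified closure-based sketch — a hypothetical model, not React's real code, which works very differently:

```javascript
// makeUseState creates one state "slot" captured in a closure.
function makeUseState() {
  let state;
  let initialized = false;
  return function useState(initialValue) {
    // The initial value is only used on the first call (the first render).
    if (!initialized) {
      state = initialValue;
      initialized = true;
    }
    const setState = newValue => { state = newValue; };
    // Return the pair: current value, and a function to update it.
    return [state, setState];
  };
}

const useState = makeUseState();

let [counter, setCounter] = useState(0);
console.log(counter); // 0

setCounter(counter + 1);
// A "re-render" calls useState again and reads the stored value.
;[counter, setCounter] = useState(0);
console.log(counter); // 1
```

The two-element array is why the destructuring syntax const [counter, setCounter] = useState(0) works: position 0 is the value, position 1 is the updater.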

React has many built-in Hooks, like useState, useEffect, useContext, useReducer, useCallback, useMemo, useRef, useImperativeHandle, useLayoutEffect and useDebugValue — which you can find more info about in the React Hooks docs.

Additionally, we can build our own Custom Hooks.

There are two rules to follow when building or using Hooks:

Only Call Hooks at the Top Level. Don’t call Hooks inside loops, conditions, or nested functions. Instead, always use Hooks at the top level of your React function. By following this rule, you ensure that Hooks are called in the same order each time a component renders. That’s what allows React to correctly preserve the state of Hooks between multiple useState and useEffect calls.

Only Call Hooks from React Functions. Don’t call Hooks from regular JavaScript functions. Instead, you can either call Hooks from React functional components or call Hooks from custom Hooks.

By following this rule, you ensure that all stateful logic in a component is clearly visible from its source code.
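The first rule exists because React tracks each hook's state by its call order within the component. This toy model — an illustrative assumption, not React's internals — shows how a conditional hook misaligns the slots:

```javascript
// Hook state lives in an array indexed by call order.
const hookStates = [];
let hookIndex = 0;

function useStateToy(initialValue) {
  const i = hookIndex++;
  if (hookStates[i] === undefined) hookStates[i] = initialValue;
  const set = v => { hookStates[i] = v; };
  return [hookStates[i], set];
}

function render(showExtra) {
  hookIndex = 0; // reset before each render, as React does
  const [name] = useStateToy('Ada');
  if (showExtra) useStateToy('extra'); // conditional hook: breaks alignment
  const [count] = useStateToy(0);
  return { name, count };
}

console.log(render(true));  // { name: 'Ada', count: 0 }
// On the next render the condition is false, so the third hook now
// occupies slot 1 and reads 'extra' instead of its own counter.
console.log(render(false)); // { name: 'Ada', count: 'extra' }
```

Calling every hook unconditionally, at the top level, keeps the slots aligned across renders.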

Hooks are really simple to understand, and they’re helpful when adding state to a functional component.

The post State Management in React Native appeared first on SitePoint.

How to Redesign Unsplash Using Styled Components

Aug 29, 2019


Redesigning Unsplash Using Styled Components

Writing future-proof CSS is hard. Conflicting classnames, specificity issues, and so on, come up when you have to write and maintain thousands of lines of CSS. To get rid of the aforementioned issues, Styled Components was created.

Styled Components makes it easy to write your CSS in JS and makes sure there are no conflicting classnames or specificity issues with multiple other benefits. This makes writing CSS a joy.

In this tutorial, we’ll explore what CSS in JS is, the pros and cons of styled-components, and finally, we’ll redesign Unsplash using Styled Components. After completing this tutorial, you should be able to quickly get up and running with Styled Components.

Note: Styled Components was specifically built with React in mind, so you have to be using React to use Styled Components.


For this tutorial, you need a basic knowledge of React.

Throughout the course of this tutorial we’ll be using yarn. If you don’t have yarn already installed, then install it from here.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node 12.6.0
npx 6.4.1
yarn 1.17.3

Evolution of CSS

Before CSS-in-JS was created, the most common way to style web apps was to write CSS in a separate file and link it from the HTML.

But this caused trouble in big teams. Everyone has their own way of writing CSS. This caused specificity issues and led to everyone using !important.

Then came Sass. Sass is an extension of CSS that allows us to use things like variables, nested rules, inline imports and more. It also helps to keep things organized and allows us to create stylesheets faster.

Even though Sass might be thought of as an improvement over CSS, it arguably causes more harm than good without certain systems put in place.

Later, BEM came in. BEM is a methodology that lets us reduce specificity issues by making us write unique classnames. BEM does solve the specificity problem, but it makes the HTML more verbose. Classnames can become unnecessarily long, and it's hard to come up with unique classnames when you have a huge web app.

After that, CSS Modules were born. CSS Modules solved what neither Sass nor BEM could — the problem of unique classnames — by tooling rather than relying on the name given by a developer, which in turn solved specificity issues. CSS Modules gained huge popularity in the React ecosystem, paving the way for projects like glamor.

The only problem with all these new solutions was that developers were made to learn new syntaxes. What if we could write CSS exactly how we write it in a .css file but in JS? And thus styled-components came into existence.

Styled Components uses Template Literals, an ES6 feature. Template literals are string literals that allow embedded expressions, multi-line strings, and string interpolation.

The main selling point of Styled Components is that it allows us to write exact CSS in JS.
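To connect the two ideas: styled-components is driven by tagged template literals. A tag function receives the literal's static strings and the interpolated values separately. The sketch below (the css function name is hypothetical) stitches them back into a plain CSS string, which is roughly the raw material styled-components works with:

```javascript
// A tag function: `strings` holds the static chunks, `values` the
// interpolated expressions; we interleave them back together.
function css(strings, ...values) {
  return strings.reduce(
    (out, chunk, i) => out + chunk + (i < values.length ? values[i] : ''),
    ''
  );
}

const primary = 'palevioletred';
const rule = css`
  color: ${primary};
  font-size: ${16}px;
`;

console.log(rule.includes('color: palevioletred')); // true
```

Because the tag sees interpolated values as ordinary JavaScript, styles can depend on props, themes, or any runtime data.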

Styled Components has a lot of benefits. Some of the pros and cons of Styled Components are listed below.


There are lots of advantages to using Styled Components.

Injecting Critical CSS into the DOM

Styled Components only injects critical CSS on the page. This means users only download CSS needed for that particular page and nothing else. This loads the web page faster.

Smaller CSS bundle per page

As it only injects styles that are used in the components on the page, bundle size is considerably smaller. You only load the CSS you need, instead of excessive stylesheets, normalizers, responsiveness, etc.

Automatic Vendor Prefixing

Styled Components allows you to write your CSS as usual and automatically adds vendor prefixes according to the latest standard.

Remove unused CSS

With Styled Components, it's easier to remove unused CSS or dead code, as the styles are colocated with the component. This also helps reduce the bundle size.

Theming is easy

Styled Components makes it really easy to theme a React application. You can even have multiple themes in your application and still maintain them easily.

Reduces the number of HTTP requests

Since there are no CSS files for resets, normalizers, and responsiveness, the number of HTTP requests is considerably reduced.

Unique Classnames

Styled Components generates unique classnames every time a build step takes place. This avoids naming collisions and specificity issues — no more global conflicts that force you to reach for !important.

Maintenance is easy

Styled Components allows you to colocate styles with the component. This allows for painless maintenance. You know exactly which style is affecting your component, unlike in a big CSS file.


Of course, nothing's perfect. Let's look at some downsides associated with Styled Components.

Unable to Cache Stylesheets

Generally, a web browser caches .css files when a user visits a website, so it doesn't have to download the same .css file again on the next visit. But with styled-components, the styles are loaded into the DOM using the <style> tag. Thus, they can’t be cached, and the user has to request the styles every time they visit your website.

React specific

Styled Components was made with React in mind. Thus, it’s React specific. If you use any other framework, then you can’t use Styled Components.

However, there’s an alternative very similar to styled-components known as Emotion, which is framework-agnostic.

The post How to Redesign Unsplash Using Styled Components appeared first on SitePoint.

4 Key Principles to Remember When Building B2B Ecommerce Websites

Aug 26, 2019


4 Key Principles to Remember When Building B2B Ecommerce Websites

This article was created in partnership with StudioWorks. Thank you for supporting the partners who make SitePoint possible.

B2B ecommerce businesses are currently facing a bit of a boom. Forrester estimates that B2B ecommerce revenues will reach $1.8 trillion in the US in the next four years. And a recent BigCommerce study found that 41% of B2B retailers predict their online sales to increase more than 25% by the end of the year.

So if you’re building a B2B ecommerce storefront to capitalize on this boom, it’s important that you take the time to ensure that the website has all the right functionality to receive and fulfill orders, and to deliver a great shopping experience to your buyers.

In this post, we’ll take a look at some of the key principles you’ll need to keep in mind when tackling a B2B ecommerce website build.

But before we begin, let’s put everything into a bit of context.

Key Differences Between B2C and B2B Ecommerce Sites

B2B ecommerce companies, of course, provide the goods and services that other companies need to operate and grow. In the ecommerce space, when we refer to a B2B company, we’re generally talking about firms that sell physical goods on a wholesale basis, but other types of B2B companies have been known to get into the ecommerce game.

For example, industrial suppliers or consultancy service providers are generally B2B companies, and they may or may not offer online purchasing options too. B2C companies, on the other hand, sell their products and services direct to individual customers.


Currently, the B2B ecommerce opportunity is huge compared to B2C ecommerce, which has become harder to crack due to high levels of competition and low barriers to entry. B2B buyers are becoming increasingly interested in making purchases online. Sellers, meanwhile, are only starting to make it possible.

But just because the demand is there doesn’t mean corporate buyers are expecting the same type of experiences from B2B ecommerce that they get on Amazon. Here are a few key differences between B2B and B2C, when it comes to ecommerce interfaces and customer experiences.

Breadth of audience

One major difference between B2B and B2C is the scale of their target audience. B2B sites deal with buyers who have simple, targeted profiles such as CTOs at tech startups. On the flip side, B2C sites have a broader group of people to cater to — for instance, moms with toddlers or millennials who are into sneakers.

For this reason, B2B ecommerce sites typically have a different purchasing user flow which involves more personalization.

Average price point

Most B2C ecommerce sites sell to hundreds of thousands of customers because their products typically sell at a lower price point. On the other hand, B2B sites may have fewer than 100 customers.

B2B ecommerce sites often use quote builders and set up different technology to be able to accept and process larger orders. For example, this may include options for recurring payments, bulk discounts, and shipping.

The decision-making process

B2C buying decisions are made fairly quickly, as they’re generally less rational and more based on impulse. Lower pricing points make this possible. In B2B decisions, the purchasing manager may have to get approval from senior executives, finance, marketing, and legal departments before placing an order.

To streamline the decision-making process, B2B ecommerce site owners offer tailored pricing to buyers. They also set up customer accounts to make it easy for buyers to fill out orders and complete transactions.

With the above in mind, let’s take a closer look at some of the important principles to guide you as you build your next B2B ecommerce website.

1. Integrate with an ERP Solution

As a B2B company, you’ll be able to significantly increase productivity by integrating an ERP solution with your ecommerce site.

The key benefit is that your inventory levels will automatically update in two places. Inventory availability figures can appear on the front end of the site as goods are added to inventory, giving customers a better shopping experience. Plus, with access to ERP data on the back end, you can enable your staff to easily meet orders and forecast product demand.

Another key benefit of integrating an ERP solution is that you won’t need to hire additional workers in case product demand goes up.

Here are some of the most common ERP integration patterns:

Migration. Data migration ERP refers to the movement of a particular set of data between two systems at a specific point in time. The migration can either be on an as-needed basis through an API or on command by setting the configuration parameters to pass into the API calls.

Broadcast. The broadcast ERP integration pattern involves the transfer of data from one source system to multiple destination systems in real time. Broadcast systems help move data quickly between systems and keep multiple systems up to date across time.

Aggregation. This ERP pattern receives data from multiple systems and stores it into only one system. It eliminates the need to regularly run multiple migrations, which removes the risk associated with data synchronization and accuracy.

Bi-directional synchronization. Bi-directional sync ERP integration is useful in situations where different systems are required to perform different functions in the same data set.

Correlation. Correlation is similar to bi-directional ERP integration. The difference is that correlation synchronizes objects only if they’re present in both systems.

BigCommerce offers a number of ERP integrations, including Brightpearl, Stitch Labs, NetSuite ERP Connector by Patchworks, and Acumatica Cloud ERP by Kensium via the eBridge Connections systems integrator.

The post 4 Key Principles to Remember When Building B2B Ecommerce Websites appeared first on SitePoint.

25+ JavaScript Shorthand Coding Techniques

Aug 26, 2019


Child between piles of books

This really is a must-read for any JavaScript developer. I have written this guide to shorthand JavaScript coding techniques that I have picked up over the years. To help you understand what is going on, I have included the longhand versions to give some coding perspective.

August 26th, 2019: This article was updated to add new shorthand tips based on the latest specifications. If you want to learn more about ES6 and beyond, sign up for SitePoint Premium and check out our extensive library of modern JavaScript resources.

1. The Ternary Operator

This is a great code saver when you want to write an if..else statement in just one line.


const x = 20;
let answer;

if (x > 10) {
  answer = "greater than 10";
} else {
  answer = "less than 10";
}


const answer = x > 10 ? "greater than 10" : "less than 10";

You can also nest your if statement like this:

const answer = x > 10 ? "greater than 10" : x < 5 ? "less than 5" : "between 5 and 10";

2. Short-circuit Evaluation Shorthand

When assigning a variable value to another variable, you may want to ensure that the source variable is not null, undefined, or empty. You can either write a long if statement with multiple conditionals, or use a short-circuit evaluation.


if (variable1 !== null && variable1 !== undefined && variable1 !== '') {
  let variable2 = variable1;
}


const variable2 = variable1 || 'new';

Don’t believe me? Test it yourself (paste the following code in es6console):

let variable1;
let variable2 = variable1 || 'bar';
console.log(variable2 === 'bar'); // prints true

variable1 = 'foo';
variable2 = variable1 || 'bar';
console.log(variable2); // prints foo

Do note that if you set variable1 to false or 0, the value bar will be assigned.
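When falsy values like 0 or '' are legitimate, the nullish coalescing operator (??, available since ES2020) is the safer shorthand, since it only falls back on null or undefined:

```javascript
const count = 0;

const withOr = count || 10;      // 10 -- || discards the legitimate 0
const withNullish = count ?? 10; // 0  -- ?? only falls back on null/undefined

console.log(withOr, withNullish); // 10 0
```

Use || when any falsy value should trigger the default, and ?? when only "missing" values should.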

3. Declaring Variables Shorthand

It's good practice to declare your variable assignments at the beginning of your functions. This shorthand method can save you lots of time and space when declaring multiple variables at the same time.


let x;
let y;
let z = 3;


let x, y, z = 3;

4. If Presence Shorthand

This might be trivial, but worth a mention. When doing “if checks”, assignment operators can sometimes be omitted.


if (likeJavaScript === true)


if (likeJavaScript)

Note: these two examples are not exactly equal, as the shorthand check will pass as long as likeJavaScript is a truthy value.

Here is another example. If a is NOT equal to true, then do something.


let a;
if ( a !== true ) {
  // do something...
}


let a;
if ( !a ) {
  // do something...
}

5. JavaScript For Loop Shorthand

This little tip is really useful if you want plain JavaScript and don't want to rely on external libraries such as jQuery or lodash.


const fruits = ['mango', 'peach', 'banana'];
for (let i = 0; i < fruits.length; i++)


for (let fruit of fruits)

If you just wanted to access the index, do:

for (let index in fruits)

This also works if you want to access keys in a literal object:

const obj = {continent: 'Africa', country: 'Kenya', city: 'Nairobi'}
for (let key in obj)
  console.log(key) // output: continent, country, city
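As an aside, if you'd rather avoid for...in's quirks (it also walks inherited enumerable properties), Object.keys and Object.entries give you plain arrays you can iterate or chain with array methods:

```javascript
const obj = { continent: 'Africa', country: 'Kenya', city: 'Nairobi' };

// Own keys only, as an array.
console.log(Object.keys(obj)); // ['continent', 'country', 'city']

// Object.entries pairs each key with its value.
for (const [key, value] of Object.entries(obj)) {
  console.log(`${key}: ${value}`);
}
// continent: Africa
// country: Kenya
// city: Nairobi
```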

Shorthand for Array.forEach:

function logArrayElements(element, index, array) {
  console.log("a[" + index + "] = " + element);
}

[2, 5, 9].forEach(logArrayElements);
// a[0] = 2
// a[1] = 5
// a[2] = 9

6. Short-circuit Evaluation

Instead of writing six lines of code to assign a default value if the intended parameter is null or undefined, we can simply use a short-circuit logical operator and accomplish the same thing with just one line of code.


let dbHost;
if (process.env.DB_HOST) {
  dbHost = process.env.DB_HOST;
} else {
  dbHost = 'localhost';
}


const dbHost = process.env.DB_HOST || 'localhost';

7. Decimal Base Exponents

You may have seen this one around. It’s essentially a fancy way to write numbers without the trailing zeros. For example, 1e7 essentially means 1 followed by 7 zeros. It represents a decimal base (which JavaScript interprets as a float type) equal to 10,000,000.


for (let i = 0; i < 10000000; i++) {}


for (let i = 0; i < 1e7; i++) {}

// All the below will evaluate to true
1e0 === 1;
1e1 === 10;
1e2 === 100;
1e3 === 1000;
1e4 === 10000;
1e5 === 100000;

8. Object Property Shorthand

Defining object literals in JavaScript makes life much easier. ES6 provides an even easier way of assigning properties to objects. If the variable name is the same as the object key, you can take advantage of the shorthand notation.
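A quick sketch of the shorthand in action (the variable names are made up for illustration):

```javascript
const name = 'Luke';
const age = 24;

// Longhand: the key and the variable name are repeated.
const personLong = { name: name, age: age };

// ES6 shorthand: the key is inferred from the variable name.
const personShort = { name, age };

console.log(JSON.stringify(personShort)); // {"name":"Luke","age":24}
```

Both objects are identical; the shorthand simply removes the repetition when the variable name matches the key.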

The post 25+ JavaScript Shorthand Coding Techniques appeared first on SitePoint.

SitePoint Premium New Releases: Form Design + Cloning Tinder

Aug 23, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Form Design Patterns

At first glance, forms are simple to learn. But when we consider the journeys we need to design, the users we need to design for, the browsers and devices being used, and ensuring that the result is simple and inclusive, form design becomes a far more interesting and bigger challenge.

➤ Read Form Design Patterns.

Cloning Tinder Using React Native Elements and Expo

In this tutorial, we’ll be cloning the most famous dating app, Tinder. We’ll then learn about a UI framework called React Native Elements, which makes styling React Native apps easy. Since this is just going to be a layout tutorial, we’ll be using Expo, as it makes setting things up easy.

➤ Read Cloning Tinder Using React Native Elements and Expo.

And More to Come…

We're releasing new content on SitePoint Premium regularly, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Form Design + Cloning Tinder appeared first on SitePoint.

How to Use Windows Subsystem for Linux 2 and Windows Terminal

Aug 22, 2019


Using Windows Subsystem for Linux 2 and Windows Terminal

In this article, you’ll learn how you can set up and run a local Linux shell interface in Windows without using a virtual machine. This is not like using terminals such as Git Bash or cmder that have a subset of UNIX tools added to $PATH. It’s actually like running a full Linux kernel on Windows that can execute native Linux applications. That's pretty awesome, isn't it?

If you’re an experienced developer, you already know that Linux is the best platform on which to build and run server-based solutions using open-source technologies. While it’s possible to run the same on Windows, the experience is not as great. The majority of cloud hosting companies offer Linux to clients to run their server solutions in a stable environment. To ensure software works flawlessly on the server machine just like on the local development machine, you need to run identical platforms. Otherwise, you may run into configuration issues.

When working with open-source technologies to build a project, you may encounter a dependency that runs great on Linux but isn’t fully supported on Windows. As a result, a Windows user will be required to perform one of the following tasks in order to contribute to the project:

Dual-boot Windows and Linux (switch to Linux to contribute code)
Run a Linux virtual machine using a platform such as Vagrant, VirtualBox, VMWare etc.
Run the project application inside a Docker container

All the above solutions require several minutes from launch to have a full Linux interface running. With the new Windows Subsystem for Linux version 2 (WSL2), it takes a second or less to access the full Linux shell. This means you can now work on Linux-based projects inside Windows with speed. Let's look into how we can set up one in a local machine.

Installing Ubuntu in Windows

First, you'll need to be running the latest version of Windows. In my case, it's build 1903. Once you've confirmed this, you'll need to activate the Windows Subsystem for Linux feature. Simply go to Control Panel -> Programs -> Turn Windows features on or off. Look for "Windows Subsystem for Linux" and mark the checkbox. Give Windows a minute or two to activate the feature. Once it's done, click the restart machine button that appears next.

Enabling the WSL feature

Next, go to the Windows Store and install Ubuntu. The first Ubuntu option will install the latest versions. Other Ubuntu options allow you to install an older supported version.

Microsoft Store Linux

Once the installation is complete, you'll need to launch it from the menu. Since this is the first time, you’ll need to wait for the Ubuntu image to be downloaded and installed on your machine. This is a one-time step. The next time you launch, you’ll access the Linux Shell right away.

Once the image installation is complete, you’ll be prompted to create a new root user account inside this shell:

Installing Ubuntu in the command line

After you’ve created your credentials, feel free to type any Linux command to confirm you’re truly accessing a native Linux shell:

Ubuntu usage commands

You’ll be pleased to note that git, python3, ssh, vim, nano, curl, wget and many other popular tools are available out of the box. In a later section, we'll use the sudo apt-get command to install more frameworks. First, let's look at several ways we can access this new Linux shell terminal interface. It's probably a good idea to upgrade the currently installed packages:

$ sudo apt-get update && sudo apt-get upgrade

Accessing Linux Shell Interface

There are several interesting ways of accessing the Linux shell interface.

Go to Windows Menu Start > type "Ubuntu". You can pin it to Start for quicker access

Open Command Prompt or Windows PowerShell and execute the command bash

In Windows explorer, SHIFT + right-mouse click a folder to open a special context menu. Click Open Linux shell here.

In Windows explorer, navigate to any folder you desire, then in the address bar type wsl, then press enter.

In Visual Studio Code, change the default terminal to wsl.

VS Code WSL Terminal

If you come across new ways, please let me know. Let's set up Node.js in the following section.

The post How to Use Windows Subsystem for Linux 2 and Windows Terminal appeared first on SitePoint.

These Are the Best Developer Tools & Services

Aug 22, 2019


This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

As you've learned through experience, there's much involved in trying to find the right developers' tools or services for the task at hand.

It's a challenge: more and more software products and services appear on the market every year, yet finding the right one doesn't get any easier. This is especially true for app developers trying to bridge the gap between software development and operations.

As you will see, open-source solutions go a long way toward resolving some of these problems, and there are services developers can use to save both time and money.

That's the case with the 6 products and services described below.

The post These Are the Best Developer Tools & Services appeared first on SitePoint.

Getting Started with React Native

Aug 21, 2019


With the ever-increasing popularity of smartphones, developers are looking into solutions for building mobile applications. For developers with a web background, frameworks such as Cordova, Ionic, React Native, NativeScript, and Flutter allow us to create mobile apps with languages we’re already familiar with: HTML, XML, CSS, and JavaScript.

In this guide, we’ll take a closer look at React Native. You’ll learn the absolute basics of getting started with it. Specifically, we’ll cover the following:

what is React Native
what is Expo
how to set up a React Native development environment
how to create an app with React Native

Prerequisites

This tutorial assumes that you’re coming from a web development background. The minimum requirement for you to be able to confidently follow this tutorial is to know HTML, CSS, and JavaScript. You should also know how to install software on your operating system and work with the command line. We’ll also be using some ES6 syntax, so it would help if you know basic ES6 syntax as well. Knowledge of React is helpful but not required.

What is React Native?

React Native is a framework for building apps that work on both Android and iOS. It allows you to create real native apps using JavaScript and React. This differs from frameworks like Cordova, where you use HTML to build the UI and it is simply displayed within the device’s integrated mobile browser (WebView). React Native has built-in components which are compiled to native UI components, while your JavaScript code is executed through a virtual machine. This makes React Native more performant than Cordova.

Another advantage of React Native is its ability to access native device features. There are many plugins which you can use to access native device features, such as the camera and various device sensors. If you’re in need of a platform-specific feature that hasn’t been implemented yet, you can also build your own native modules — although that will require you to have considerable knowledge of the native platform you want to support (Java or Kotlin for Android, and Objective C or Swift for iOS).

If you’re coming here and you’re new to React, you might be wondering what it is. React is a JavaScript library for building user interfaces on the web. If you’re familiar with MVC, it’s basically the View in MVC. React’s main purpose is to allow developers to build reusable UI components. Examples of these components include buttons, sliders, and cards. React Native took the idea of building reusable UI components and brought it into mobile app development.

What is Expo?

Before coming here, you might have heard of Expo. It’s even recommended in the official React Native docs, so you might be wondering what it is.

In simple terms, Expo allows you to build React Native apps without the initial headache that comes with setting up your development environment. It only requires you to have Node installed on your machine, and the Expo client app on your device or emulator.

But that’s just how Expo is initially sold. In reality, it’s much more than that. Expo is actually a platform that gives you access to tools, libraries and services for building Android and iOS apps faster with React Native. Expo comes with an SDK which includes most of the APIs you can ask for in a mobile app development platform:

Camera
ImagePicker
Facebook
GoogleSignIn
Location
MapView
Permissions
Push Notifications
Video

Those are just a few of the APIs you get access to out of the box if you start building React Native apps with Expo. Of course, these APIs are available to you as well via native modules if you develop your app using the standard React Native setup.

Plain React Native or Expo?

The real question is which one to pick up — React Native or Expo? There’s really no right or wrong answer. It all depends on the context and what your needs are at the moment. But I guess it’s safe to assume that you’re reading this tutorial because you want to quickly get started with React Native. So I’ll go ahead and recommend that you start out with Expo. It’s fast, simple, and easy to set up. You can dive right into tinkering with React Native code and get a feel of what it has to offer in just a couple of hours.

That said, I’ve still included the detailed setup instructions for standard React Native for those who want to do it the standard way. As you begin to grasp the different concepts, and as the need for different native features arises, you’ll actually find that Expo is kind of limiting. Yes, it has a lot of native features available, but not all the native modules that are available to standard React Native projects are supported.

Note: projects like unimodules are beginning to close the gap between standard React Native projects and Expo projects, as they allow developers to create native modules that work for both React Native and ExpoKit.

Setting Up the React Native Development Environment

In this section, we’ll set up the React Native development environment for all three major platforms: Windows, Linux, and macOS. We’ll also cover how to set up the Android and iOS simulators. Lastly, we’ll cover how to set up Expo. If you just want to quickly get started, I recommend that you scroll down to the “Setting up Expo” section.

Here are the general steps for setting up the environment. Be sure to match these general steps to the steps for each platform:

install JDK
install Android Studio or Xcode
install Watchman
update the environment variables
install the emulator
install Node
install React Native CLI

You can skip to the section relevant to your operating system. Some steps — like setting up Android Studio — are basically the same for each operating system, so I’ve put them in their own section:

setting up on Windows
setting up on Linux
setting up on macOS
setting up Android Studio
install Node
setting up Expo
setting up emulators
install React Native CLI
troubleshooting common errors

Setting Up on Windows

This section will show you how to install and configure the software needed to create React Native apps on Windows. Windows 10 was used in testing for this.

Install Chocolatey

Windows doesn’t really come with its own package manager that we can use to install the needed tools. So the first thing we’ll do is install one called Chocolatey. You can install it by executing the following command on the command line or Windows Powershell:

@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString(''))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

We can now install the other tools we need by simply using Chocolatey.

Install Python

Python comes with the command line tools required by React Native:

choco install -y python2

Install JDK

The JDK allows your computer to understand and run Java code. Be sure to install JDK version 8 as that’s the one required by React Native:

choco install jdk8

Install NVM

Node has an installer for Windows, but it’s better to use NVM for Windows, as that will enable you to install multiple versions of Node, so you can test new versions or use a different version depending on the project you’re currently working on. Download NVM for Windows, extract it, and execute nvm-setup.exe to install it.

Install Watchman

Watchman optimizes the compilation time of your React Native app. It’s an optional install if you’re not working on a large project. You can find the install instructions on their website.

Update the Environment Variables

This is the final step in setting up React Native on Windows. This is where we update the environment variables so the operating system is aware of all the tools required by React Native. Follow these steps right before you install the React Native CLI.

Go to Control Panel → System and Security → System. Once there, click the Advanced system settings menu on the left.

Windows advanced system settings

That will open the system properties window. Click on the Environment Variables button:

System properties

Under the User variables section, highlight the Path variable and click the edit button.

On the edit screen, click the New button and enter the path to the Android SDK and platform tools. For me, it’s on C:\users\myUsername\AppData\Local\Android\Sdk and C:\users\myUsername\AppData\Local\Android\Sdk\platform-tools. Note that this is also where you add the path to the JDK if it isn’t already added:

add path

The post Getting Started with React Native appeared first on SitePoint.

How the Top 1% of Candidates Ace Their Job Interviews

Aug 19, 2019


You've done it.
You've made it through the initial screening process. You've just earned an interview with one of the most prestigious and successful companies in your industry. As you're waiting in the office with three other candidates, a fourth candidate walks in.
He has an interview scheduled, the same as you.
There's something odd about this interviewee. He already knows everyone there. He's on a first-name basis with the receptionist. Everyone likes him and thinks highly of him. Instead of waiting in the lobby with the rest of you, he's immediately ushered into one of the offices.
Who is this guy?

This is an everyday reality for elite job candidates

How is this possible?
Candidates like these are pretty uncommon. Not because they're so special, but because of their decision-making process. What makes their decision-making process different?

They win coveted jobs and promotions in the face of intense competition
They ask for and receive substantially higher salaries than their coworkers
Employers create positions specifically for them (to keep them)
They earn positions before they're publicly available

These job candidates seem to receive preferential treatment wherever they go.
Something is going on, but what?
These elite candidates have a very different set of attitudes, behaviors, and habits than most other employees. Is it simply because they're better than everyone else?
Not at all.

The post How the Top 1% of Candidates Ace Their Job Interviews appeared first on SitePoint.

9 Key Ways to Turbocharge Your Design Career

Aug 19, 2019


9 Key Ways to Turbocharge Your Design Career

This article was created in partnership with StudioWorks. Thank you for supporting the partners who make SitePoint possible.

Sure, you need a certain minimal viable level of design skill prowess if you want to have a successful career as a designer. But a lot more than that goes into it, too. Think about how many people you know who can cook amazing food but who would never last five minutes in a restaurant kitchen during the lunch rush.

It would be great if we could just sit down, design pretty things, and go home. Or better yet, just chill in our home studios, creating. Unfortunately or not, design is a business just like everything else, and that means you’re going to have to put time, effort, and sometimes money into cultivating the soft skills and business side of your design career.

This means managing your time well, marketing yourself, building a brand, experimenting, maybe launching a side business, and generally just putting your name and work out there for people to find.

These days, it’s not enough to have a portfolio. That’s just table stakes. You need to plan out your whole career — with the understanding that plans change.

Let’s look at some of the things you need to do to develop your career until you’re basically the next Jen Simmons or Jeffrey Zeldman.

Get Your Communication Skills Flowing

Communication skills come naturally to some, and not so naturally to others. In both cases, those skills are rather drastically affected by the people you have to communicate with most. Most of us find ways to convey our thoughts and intentions clearly to our friends, and also to people in our industry and hobby communities. We learn the lingo, we learn which topics encourage discussion, and which are best avoided.

Writing for anyone who’s not a part of your immediate community, and especially writing for people who don’t know what you know, is hard. Speaking to them in person can be harder, depending on how you, as a person, prefer to communicate. But, all the same, you have to.

Even if you work in an agency amongst other designers right now, there will inevitably come a time when you have to pitch clients on the benefits of your work, explain to a newbie the processes you use, or defend your decisions to developers who push back, or to other people who just don’t know what you know.

If there’s any single thing you take away from this article, focus on your communication skills. It will affect your career more than anything else on this list. If you’re looking for a place to start learning those skills, CopyBlogger always has you covered — at least for the writing side of it.

Branch Out into Side Businesses

Some side projects are great, strictly because they allow us to get out of our comfort zones, try new things and regain a sense of creative discovery.

Others may overlap with the activities you’d use to build a personal brand, which we’ll get into shortly, with the added benefit that they can bring in extra money while you’re establishing yourself as an expert in the field.

Here are some of the more popular ways of doing this.

1. Courses

Sure, you can throw some tutorials onto your blog, or onto YouTube, for free. And you probably should. But if you want to make a side business out of teaching others what you do, and further your career in the process, you’re going to need an actual product. This is where courses come in.

Quality video courses, which are quite popular these days, can be expensive and time-consuming to set up. It’s gotten easier, though, now that you can use all-in-one course development and delivery services like Kajabi. This platform can help you manage everything relating to your premium educational content and running the business around it.

You can create membership sites, host live events, create automation funnels, upsells, maintain a blog and manage contacts all in one place, so it’s not as hard as it used to be. However, you still have to get a half-decent camera, a half-decent microphone, and ideally learn some basic video editing skills.

This is a side hustle I’d frankly only recommend if you’ve got some time on your hands, and a bit of extra money for some beginner hardware. It can be quite rewarding, though, so don’t dismiss the idea out of hand.

2. Live Streaming

I mentioned live events in the last section, so I thought I’d mention streaming as its own thing. Streaming doesn’t have to be educational, although education is probably the best way to sell your expertise. You could just sit there and share designer memes on Twitch if you want.

The problem is mostly that the requirements for video and audio haven’t changed, and depending on how you set up your stream schedule, it can be even more demanding than making video courses.

Then again, if you don’t mind not making a lot of money, and want to do it for fun, it’s still a great way to “meet” new people, and to be seen.

3. Paid Newsletters

Now this is an option I’d save for when you’ve already built a bit of an audience by other means, such as social media and/or blogging. But Substack has made it easier than ever for people to pay writers directly.

If you’ve got wisdom to share, and if you think people would be willing to pay to have that wisdom beamed straight into their inboxes, go on and have at it.

4. Make Stuff for Other Designers

Plenty of designers and agencies have kept up a healthy “passive” revenue stream by making resources for other designers.

Be it a template, a WordPress theme, a Sketch UI kit, an icon font, or whatever else, if it’s valuable to you because it solves problems that you have, then there’s a good chance your peers will be willing to pay for it. Just don’t forget to also give stuff away once in a while. Gratitude goes a long way in the design world.

The post 9 Key Ways to Turbocharge Your Design Career appeared first on SitePoint.

SitePoint Premium New Releases: Going Offline + React Native

Aug 16, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Going Offline

Jeremy Keith introduces you to service workers (and the code behind them) to show you the latest strategies in offline pages. Learn the ins and outs of fetching and caching, enhance your website’s performance, and create an ideal offline experience for every user, no matter their connection.

➤ Read Going Offline.

Integrating AdMob in React Native and Expo

Google AdMob is one way to install ads into any mobile application in order to monetize it. Installing and configuring AdMob in bare React Native can be a cumbersome process. But it’s relatively simple to install when using a toolchain like Expo — we'll show you how.

➤ Read Integrating AdMob in React Native and Expo.

And More to Come…

We're releasing new content on SitePoint Premium regularly, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Going Offline + React Native appeared first on SitePoint.

How to Build a Cipher Machine with JavaScript

Aug 15, 2019


I was overjoyed recently when I read the news that the British mathematician Alan Turing will feature on the Bank of England's new £50 note. Turing occupies a special place in the hearts of computer nerds for effectively writing the blueprints for the computer. He also helped to break the notoriously difficult naval Enigma code used by the Nazi U-boats in World War II. In honor of this, I decided to write a quick tutorial on building a cipher machine using the JavaScript skills that are covered in my book JavaScript Novice To Ninja.

The cipher we'll be using is the Caesar cipher, named after the Roman general Julius Caesar. It is one of the simplest ciphers in existence: it simply shifts each letter along a set number of places. For example, the phrase 'Hello World' would become 'KHOOR ZRUOG' using a shift of 3 (which is the shift that Julius Caesar is thought to have used).

Our cipher machine

You can see an example of the finished code here. Have a play around at writing some secret messages to get a feel for how it works.

To get started, fire up your favorite text editor and save the following as caesar.html:
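The code listing itself didn't survive here, but the core shift logic described above can be sketched in plain JavaScript. This is only a minimal illustration of the Caesar shift, not the book's actual caesar.html listing, and the function name caesarShift is my own:

```javascript
// Minimal sketch of the Caesar cipher's shift logic (illustrative only;
// the tutorial's actual caesar.html listing may differ).
function caesarShift(message, shift) {
  return message.toUpperCase().replace(/[A-Z]/g, (letter) => {
    // Map A-Z to 0-25, apply the shift (wrapping with modulo), map back.
    const index = ((letter.charCodeAt(0) - 65 + shift) % 26 + 26) % 26;
    return String.fromCharCode(index + 65);
  });
}

console.log(caesarShift('Hello World', 3));  // KHOOR ZRUOG
console.log(caesarShift('KHOOR ZRUOG', -3)); // HELLO WORLD
```

Note that non-letter characters (like the space) pass through untouched, and a negative shift decodes the message again.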

The post How to Build a Cipher Machine with JavaScript appeared first on SitePoint.

10 Tools to Help You Manage Your Agile Workflows

Aug 15, 2019



This article was created in partnership with Thank you for supporting the partners who make SitePoint possible.

Software development remains a complex task which balances analysis, planning, budget constraints, coding, testing, deployment, issue fixing, and evaluation. Large projects often fail because no one can comprehend the full extent of requirements from the start. Those requirements then change with each revision of the product.

An agile development approach can mitigate the risks. There are many flavors of 'agile', but most involve rapidly evolving a product over time. Self-organising teams of stakeholders, designers, developers, and testers collaborate to produce a minimum viable product which is extended and revised during a series of iterations, or sprints.

Ideally, a fully-working product is available at the end of every sprint. Changing requirements can determine the priorities for the next sprint.

Crucial Collaboration

Communication distinguishes agile from more traditional waterfall workflows. Teams work together on a particular feature so developers and designers can quickly provide feedback when a requirement becomes impractical or more cost-effective options can be identified.

A variety of tools and software is available to help teams collaborate. There are two general options:

Separate tools for specific tasks. For example, a feature may be described in a document which is transferred to a to-do list, which becomes a pull request and inevitably has bugs reported.
All-in-one tools which manage the whole process.

The following tools can all help manage your agile workflow. has rapidly become the full agile management solution for 80,000 organizations within a few years. dashboard offers a completely customizable application for numerous use-cases such as agile project management. Powerful features include:

quick-start project templates (there are over 100 templates that are completely customisable to fit your needs)
attractive at-a-glance project state dashboards, so you can easily track progress and identify bottlenecks in a "big picture" view
intuitive collaboration with team members and clients using @mentions
easy file sharing, so you'll always know where your most updated files are
multiple views to track progress (reports, Kanban boards, Gantt charts, calendars, timelines etc.)
task management, time and deadline tracking
automations and integration with other applications to keep everything in one place, so you can focus on the important stuff

Prices start from $25 per month for five users, but a 30 day free trial is available so you can assess the system.

The post 10 Tools to Help You Manage Your Agile Workflows appeared first on SitePoint.

What Every Dev Company Needs to Know about NoOps Development

Aug 13, 2019


It seems like everything is getting automated these days.

And I mean everything.

Who would’ve thought that we’d be automating development teams, though?

69% of development companies agree that process and automation improvement is a top priority, so it makes sense we’re heading in this direction.

This rise of automation has formed a new development model known as NoOps, which stands for no operations.

The name means that this approach involves no operations input, cutting out the "operate" step of the continuous development model.

Continuous development model

That’s right. The developers are capable of launching, testing, and fixing apps on the fly without any interruptions or downtime.

Follow along as I further cover what NoOps is, the benefits of using it, and how to implement it.

What is NoOps?

NoOps is a new development approach that relieves developers of the need to constantly coordinate with operations members, speeding up deployment, testing, and workflow.

It stems from the previously popular model of developers and operations teams working closely called DevOps.

Instead of working together, service providers give development teams the proper cloud infrastructure, patching, backups, and resources to work on their own.

Traditional DevOps vs NoOps

That means programmers no longer require feedback and approval during development, and can operate completely independently.

This also allows the operations department of a company to focus on what they do best: project management, talent acquisition, and so on.

However, NoOps is typically most beneficial for startups that begin with this continuous development model. It is much more difficult to switch to NoOps when you have existing environments, pipelines, and deployment procedures.

As an example, NoOps does not work well for enterprises that are still stuck with a monolithic legacy application. This would require a re-write of most of their codebase to make it fit with the NoOps ideology.

Additionally, if a company adopts NoOps later in the business cycle, they may have to shrink the size of their operations team.

If a startup launches with a NoOps model from the beginning, they have the potential of staying lean for longer. The saved resources can be put towards other aspects of business growth, like marketing.

What Are the Benefits of NoOps?

There are many benefits to be gained by adopting a NoOps model. The first of which is that it maximizes development time.

The post What Every Dev Company Needs to Know about NoOps Development appeared first on SitePoint.

A Guide to Visual Testing with Percy

Aug 13, 2019


A Guide to Visual Testing with Percy

This article was created in partnership with Percy. Thank you for supporting the partners who make SitePoint possible.

Visual testing is the automated process of ensuring your user interface looks correct in different browsers and at different screen widths.

Most development teams rely solely on unit and integration tests. While this practice helps ensure application logic is working correctly, it fails to detect visual defects at the UI level. Implementing this type of test allows visual problems to be detected early and fixed before the product is released.

In this tutorial, you’ll learn how to set up and run visual testing for your project using Percy. For demonstration purposes, we’ll be using a single-page application that’s API-driven using real-world data. You’ll learn how to visually test UIs that output dynamic data, and about Percy’s visual review and approval workflow.


Visual testing is a topic for intermediate and advanced users. To follow this tutorial, you’ll need to be comfortable writing code in JavaScript ES6+ syntax. We won’t be doing actual app development, but you should at least have some experience using the following libraries in case you want to tweak something in the demo project we’ll be using:

Express.js + RESTful APIs
jQuery
Axios
CSS frameworks

You should also be familiar with Git branching and different types of branching strategies. Familiarity with any testing framework will also help you easily understand the concepts discussed in this article. You’ll need to have a GitHub account before you can proceed with this tutorial. We’ll use this demo project as our starting point.

About Percy

Percy provides developers with a platform and workflow to run visual testing and reviews on web apps, static sites, or component libraries. There’s a free plan that supports unlimited team members, 5,000 snapshots per month (with a one-month history), and unlimited projects.

To get started with Percy, install one of its SDKs into the project you want to visually test. It’s the same as installing a testing framework like Mocha or Jest. Next, you write a script and run it just as you would with any type of test.

However, in Percy’s case, DOM snapshots of your web application are captured and uploaded for rendering on Percy’s infrastructure. Percy then detects and highlights visual differences between new and previous snapshots, also known as the baselines. The results are displayed in Percy, where you can review and determine whether the UI looks correct or needs to be fixed.
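To make the baseline idea concrete, here's a toy sketch in JavaScript. This is emphatically not Percy's actual engine (Percy renders snapshots in real browsers on its own infrastructure and compares the rendered results); it only illustrates the workflow of flagging differences between a stored baseline and a new snapshot for human review:

```javascript
// Toy illustration of baseline comparison (NOT Percy's implementation).
// A new snapshot is compared against the stored baseline, and every
// changed entry is collected so a reviewer can approve or reject it.
function diffSnapshots(baseline, current) {
  const changes = [];
  const length = Math.max(baseline.length, current.length);
  for (let i = 0; i < length; i++) {
    if (baseline[i] !== current[i]) {
      changes.push({ line: i, was: baseline[i], now: current[i] });
    }
  }
  return changes;
}

const baseline = ['header: blue', 'button: 40px tall', 'footer: visible'];
const current  = ['header: blue', 'button: 44px tall', 'footer: visible'];
console.log(diffSnapshots(baseline, current));
// One change is flagged: the button's height differs from the baseline
```

If the reviewer approves the change, the new snapshot becomes the baseline for future comparisons; otherwise the UI gets fixed and re-tested.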

Percy renders each snapshot in Chrome and Firefox and can render at up to ten different screen resolutions. That’s quite impressive, as doing this manually is tiresome. I encourage you to read through the following docs to gain a deeper understanding:

The post A Guide to Visual Testing with Percy appeared first on SitePoint.

How to Use Bannersnack to Generate Amazing Banners in Seconds

Aug 12, 2019


How to Use Bannersnack to Generate Amazing Banners in Seconds

This article was created in partnership with StudioWorks. Thank you for supporting the partners who make SitePoint possible.

Banner ads have been around since the dawn of the Internet. And badly designed banner ads that annoy many users have been around just as long. But 30 years later, businesses still pay to put them on websites and in their ad rotation, so they must work when done right. Right?

Banner ads, especially animated ones, are being used more than ever on social media profiles and as social ads too, as marketers realize the power of video in catching attention. Today’s banner ads are sophisticated and well designed, and can be highly effective. But for designers, they present a huge challenge.

Coding Banner Ads Is Hard and Expensive

Banner ads, especially animated ones, are notoriously hard to code in HTML5. Sure, you can create animated banners in Flash, but Apple’s iOS doesn’t support them, and Adobe says they’re shutting down Flash altogether in 2020.

GIFs are another possible solution, but they usually have poor resolution and most web users associate GIFs with memes and humor, rather than serious products. So you’re left with HTML5.

Coding HTML5 ads — in particular, animated ones — requires expertise, and expertise is expensive. Good coders know their value, and finding one you can trust can be an impossible task. Freelance platforms might seem the obvious place to start, but coders worth their salt soon move away from third-party sites.

Fortunately, this is the age of SaaS, and designers can now use applications like Bannersnack, with drag-and-drop interfaces that let you design banner ads quickly and efficiently.

What Does Bannersnack Do?

Bannersnack is an online app that helps you design fully responsive banner ads for websites and social media platforms — without having any coding or design skills. The Bannersnack creators figured out all the hard coding stuff, so you just log in, choose your size, add your image, select colors and fonts, and get creative.

Bannersnack home page. Image source: Bannersnack

Let’s take a quick run through the Bannersnack platform and see what it can do.

1. Pick your size

Start with a custom size and orientation (vertical, horizontal, and square), or choose a pre-set size. You can even pick a Facebook ad or an Instagram post:

Picking a size. Image source: Bannersnack

2. Design your ad

You can either design from scratch, or use an existing Bannersnack template. Bannersnack offers static or animated designs in its template gallery, so you can get started even without having a design idea.

Designing the ad. Image source: Bannersnack

3. Add images and text

Customize your design with just a few clicks. If you can use Mailchimp or other SaaS marketing tools, you can use Bannersnack. Edit headlines, text, buttons, background, add your own logos and images, and change design nuances like line heights and transparencies. You can create beautifully designed banner ads that are a perfect match for your existing brand standards and colors.

Adding images and text. Image source: Bannersnack

4. Animate with ease

It’s no secret that animated ads are more engaging and drive action better than static ads. Readers are more likely to react to videos than non-animated content, and more likely to visit the publisher’s page or website.

Videos drive action


But animation is notoriously tricky. Bannersnack has managed to solve that hurdle — with HTML5 animations that include fade-in, slide-in, and bounces. These aren’t full video ads, but they’ll still pull a user’s eyes towards them on a busy web page and boost your engagement. They have the added advantage of loading quickly — which is vital for preventing mobile users from scrolling past your ad before it’s even loaded.

Bannersnack’s HTML5 editor has an intuitive and user-friendly interface and slide management system to make animation simple. Try one of the 32 pre-made animation presets to animate any part of your ad:

Pre-made animation presets. Screenshot source

Or, create custom animations and adjust things like duration, delay, and transitions.

Creating custom animations

The post How to Use Bannersnack to Generate Amazing Banners in Seconds appeared first on SitePoint.

SitePoint Premium New Releases: Responsive CSS + React Native

Aug 9, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

14 Essential Responsive CSS Techniques

Over 6 easy chapters, we’ll help you to get a grounding in responsive CSS techniques, helping you make your sites and apps look great on any device.

We start with an examination of the em responsive unit and highlight its shortcomings, then move on to the rem unit and how it can overcome them.

Finally, we’ll look at how media queries can work with em and rem to provide a complete responsive web design solution.

➤ Read 14 Essential Responsive CSS Techniques.

Using Android Native Modules in React Native

In this tutorial, we’ll develop a simple application that shows the current battery percentage and charging status. For this, we’ll create a native module with methods to fetch the required information.

➤ Read Using Android Native Modules in React Native.

And More to Come…

We're releasing new content on SitePoint Premium regularly, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Responsive CSS + React Native appeared first on SitePoint.

How to Create Web Animations with Anime.js

Aug 8, 2019


There are many JavaScript animation libraries out there, but Anime.js is one of the best. It's easy to use, has a small and simple API, and offers everything you could want from a modern animation engine. The library has a small file size and supports all modern browsers, including IE/Edge 11+.

The only thing that could stop you from using Anime.js right away is its minimal, zen-like documentation. I like the compact, structured, elegant approach it takes, but I think that a more detailed explanation would be helpful. I'll try to fix this issue in this tutorial.

Getting Started With Anime.js

To get started, download and include the anime.js file in your HTML page:

<script src="path/to/anime.min.js"></script>

Alternatively, you can use the latest version of the library hosted on a CDN:

<script src=""></script>

Now, to create an animation, we use the anime() function, which takes an object as an argument. In that object, we describe all the animation details.

let myAnimation = anime({ /* describe the animation details */ });

There are several kinds of properties used to describe the animation. They are grouped into four distinct categories:

Targets - this includes a reference to the element(s) we want to animate. It could be a CSS selector (div, #square, .rectangle), a DOM node or node list, or a plain JavaScript object. There is also an option to use a mix of the above in an array.
Properties - this includes all properties and attributes that can be animated when dealing with CSS, JavaScript objects, DOM, and SVG.
Property Parameters - this includes property-related parameters like duration, delay, easing, etc.
Animation Parameters - this includes animation-related parameters like direction, loop, etc.

Let's now see how this applies in practice. Consider the following example:

let animation = anime({
  targets: 'div',
  // Properties
  translateX: 100,
  borderRadius: 50,
  // Property Parameters
  duration: 2000,
  easing: 'linear',
  // Animation Parameters
  direction: 'alternate'
});

See the Pen
AnimeJS: Basic Example
by SitePoint (@SitePoint)
on CodePen.

Note: I'm not going to cover the HTML and CSS sections of the code in the tutorial. These tend to be easy to grasp without additional explanation. You can find and explore the HTML and CSS in the embedded pens that follow each example.

In the above example:

We select the green square (the styled div). We move it 100 pixels to the right while transforming it into a circle. We set all this to happen smoothly in two seconds (linear means that no easing will be applied to the animation). By setting the direction property to alternate, we instruct the div element to go back to its initial position and shape after animation completion. Anime.js does that by playing the animation in reverse order.

You may notice that I don't use any units when specifying property values. That's because if the original value has a unit, it is automatically added to the animated value. So we can safely omit the units. But if we want to use a specific unit, we must add it explicitly.
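To make that concrete, here's a hypothetical helper, not Anime.js internals, sketching the unit-inference idea: when the original value carries a unit, the animated value inherits it.

```javascript
// Illustrative names, not Anime.js APIs: extract a trailing unit, if any
function getUnit(value) {
  const match = /[a-z%]+$/i.exec(String(value));
  return match ? match[0] : "";
}

// Reuse the original value's unit on the animated value
function applyUnit(animatedValue, originalValue) {
  const unit = getUnit(originalValue);
  return unit ? `${animatedValue}${unit}` : animatedValue;
}

applyUnit(100, "20px");  // "100px": the px unit is carried over
applyUnit(50, 0);        // 50: no unit on the original, none added
```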

Let's create something more meaningful.

Creating a Pendulum Animation

In this example, we will create a pendulum animation. After we "draw" a pendulum using our HTML and CSS skills, it's time to bring it to life:

let animation = anime({
  targets: '#rod',
  rotate: [60, -60], // from 60 to -60 degrees
  duration: 3000,
  easing: 'easeInOutSine',
  direction: 'alternate',
  loop: true
});

See the Pen
AnimeJS: Pendulum Animation
by SitePoint (@SitePoint)
on CodePen.

In this animation, we use the so-called from-to value type, which defines a range of movement for the animation. In our case, the rod of the pendulum is rotated from 60 to -60 degrees. We also use easeInOutSine easing to simulate the natural motion of a pendulum, which slows down at the peaks and speeds up at the bottom. We use the alternate option again to move the pendulum in both directions, and set the loop parameter to true to repeat the movement endlessly.
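The easeInOutSine curve used above follows the standard sinusoidal easing formula; here's a plain-JavaScript version of it (Anime.js's internal implementation may differ in form):

```javascript
// Standard easeInOutSine: slow at both ends, fastest in the middle,
// which is what gives the pendulum its natural swing
const easeInOutSine = t => -(Math.cos(Math.PI * t) - 1) / 2;

easeInOutSine(0);    // 0: at rest at the start of the swing
easeInOutSine(0.5);  // ≈ 0.5: halfway through, at maximum speed
easeInOutSine(1);    // 1: at rest again at the end
```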

Well done. Let's move to the next example.

Creating a Battery Charge Animation

In this example, we want to create an animated icon of a charging battery, similar to the icons on our smartphones. This is easily doable with a bit of HTML and CSS. Here is the code for the animation:

The post How to Create Web Animations with Anime.js appeared first on SitePoint.

How to Set Up a Vue Development Environment

Aug 6, 2019


Setting Up a Vue Development Environment

If you’re going to do any serious amount of work with Vue, it’ll pay dividends in the long run to invest some time in setting up your coding environment. A powerful editor and a few well-chosen tools will make you more productive and ultimately a happier developer.

In this post, I’m going to demonstrate how to configure VS Code to work with Vue. I’m going to show how to use ESLint and Prettier to lint and format your code and how to use Vue’s browser tools to take a peek at what’s going on under the hood in a Vue app. When you’ve finished reading, you’ll have a working development environment set up and will be ready to start coding Vue apps like a boss.

Let’s get to it!

Want to learn Vue.js from the ground up? This article is an extract from our Premium library. Get an entire collection of Vue books covering fundamentals, projects, tips and tools & more with SitePoint Premium. Join now for just $9/month.

Installing and Setting Up Your Editor

I said that I was going to be using VS Code for this tutorial, but I’m afraid I lied. I’m actually going to be using VSCodium, which is an open-source fork of VS Code without the Microsoft branding, telemetry and licensing. The project is under active development and I’d encourage you to check it out.

It doesn’t matter which editor you use to follow along; both are available for Linux, Mac and Windows. You can download the latest release of VSCodium here, or download the latest release of VSCode here and install it in the correct way for your operating system.

Throughout the rest of this guide, for the sake of consistency, I’ll refer to the editor as VS Code.

Add the Vetur Extension

When you fire up the editor, you’ll notice a set of five icons in a toolbar on the left-hand side of the window. If you click the bottom one of these icons (the square one), a search bar will open up that enables you to search the VS Code Marketplace. Type “vue” into the search bar and you should see dozens of extensions listed, each claiming to do something slightly different.

The post How to Set Up a Vue Development Environment appeared first on SitePoint.

5 Super CSS Grid Generators for Your Layouts

Aug 5, 2019


CSS Grid has turned out to be the most exciting evolution of CSS for quite a while. It's a specific CSS tool for building any web layout you can think of, from the simplest to the most complex. Today, CSS Grid is widely supported by all major browsers — it's clear that the dark days of hacking layouts using floats are gone forever.

Coding your CSS Grid layout directly in your code editor can be fun. Although the spec is a complex document, the key concepts you would need to build a simple layout don't have a steep learning curve. There are many resources that will get you started in no time, with CSS Master by Tiffany Brown, Rachel Andrew's Grid by Example, and Jen Simmons's Layout Land at the top of the list.

For those of you who feel more comfortable coding layouts using a visual editor, there are several interesting online options that you can try out.

Here are five CSS online tools with great visual interfaces that I'm going to put through their paces. The idea is: design your CSS Grid-based layouts in a few clicks, grab the code and run with it! Let's put this idea to the test and see what happens.

The Test Page Layout

In this article, I'm going to use this simple hand-coded CSS Grid layout as my benchmark.

See the Pen
responsive CSS Grid example
by Maria Antonietta Perna (@antonietta)
on CodePen.

The layout has more than one HTML container tag working as a Grid container in a nested structure. I could have used the new subgrid feature that's been recently added to Grid, but at the time of writing only Firefox 69+ supports it, and none of the online generators discussed here have implemented this functionality yet.

For most of the CSS Grid generators, I'm going to focus my tests only on the <ul> that works as Grid container for the individual cards. This is what the code looks like:

.kitties > ul {
  /* grid styles */
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(320px, 1fr));
  grid-gap: 1rem;
}

Notice how the value of the grid-template-columns property alone enables you to add responsiveness without media queries by:

using the CSS Grid repeat() function together with the auto-fit keyword. You can add as many columns as you want and they will fit perfectly into the grid's width, whatever that may be.
using the minmax() function, which ensures that each column is at least 320px wide, thereby displaying nicely on smaller screens.

Most CSS Grid generators don't include the ability to set the grid-template-columns using the CSS Grid features above, so you'll need to adjust the values generated by the tool inside media queries to add responsiveness to your layouts.
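A rough way to reason about what repeat(auto-fit, minmax(320px, 1fr)) does (a simplified model that ignores padding, borders, and some spec details): the browser fits as many minimum-width columns as the container width and gaps allow, then stretches them to fill.

```javascript
// Approximate column count for repeat(auto-fit, minmax(minPx, 1fr))
// with a fixed gap between columns
function autoFitColumns(containerPx, minPx, gapPx) {
  return Math.max(1, Math.floor((containerPx + gapPx) / (minPx + gapPx)));
}

autoFitColumns(1024, 320, 16);  // 3 columns on a ~1024px-wide grid
autoFitColumns(375, 320, 16);   // 1 column on a small phone screen
```

This is why no media queries are needed: the column count falls out of the container width automatically.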

As I try out the CSS Grid generator tools, I'm going to replace the code above with the code generated by each tool, and examine its capabilities against the results displayed on the screen. The exception will be the fourth CSS Grid generator in the list, a Vue-powered tool by Masaya Kazama. This is because it makes it quite straightforward and quick to build the entire layout, including header and footer, with a few clicks and minor adjustments to one of its preset layouts.

Enough talking, let's dive right in!

1. CSS Grid Generator by Sarah Drasner

Sarah Drasner's CSS Grid generator

CSS Grid Generator is a shiny new generator coded by Sarah Drasner. The interface is super sleek and you can put together a basic CSS Grid layout in no time.

I generated a 2-column grid and dumped the code in my original example. You need media queries to make the layout responsive. Here's the result:

See the Pen
CSS Grid Generator #1 by Sarah Drasner
by Maria Antonietta Perna (@antonietta)
on CodePen.

The code looks like this:

.kitties > ul {
  /* grid styles */
  display: grid;
  grid-template-columns: 320px 320px;
  grid-template-rows: 1fr 1fr;
  /* units for row and column gaps only available in px */
  grid-column-gap: 16px;
  grid-row-gap: 16px;
}

This tool lets you:

set the number and units of rows and columns
drag within the boxes to place divs within them

At the time of writing, Sarah's CSS Grid generator lets you create simple implementations of CSS Grid-based layouts. This is clearly stated by the author:

Though this project can get a basic layout started for you, this project is not a comprehensive tour of CSS Grid capabilities. It is a way for you to use CSS Grid features quickly.

However, since this is a brand new open-source tool, it's still in active development and the community is invited to contribute. Complex features like minmax() are not implemented yet, but they might find their way into it at a later time.

2. LayoutIt by Leniolabs

LayoutIt is quite intuitive and has a few more features than CSS Grid Generator. For example, it lets you set the grid-gap property in px, em and % units, and set grid-template-columns and grid-template-rows using minmax(). However, this is not enough to ensure responsiveness, so you'll still need to adjust your values using media queries.

Also, the grid-gap values didn't make it into the generated code, so you'll have to add the property manually if you'd like some white space in between rows and columns.

Here's the result as I entered the generated code into my original example:

See the Pen
CSS Grid Generator #2 by Leniolabs
by Maria Antonietta Perna (@antonietta)
on CodePen.

Below is what the relevant code looks like:

.kitties > ul {
  /* grid styles */
  display: grid;
  grid-template-columns: minmax(320px, 1fr) minmax(320px, 1fr);
  grid-template-rows: 1fr 1fr;
  /* grid gap not in code; repeat, auto-fit, and auto-fill not there */
}

3. Griddy by Drew Minns

Griddy CSS Grid generator by Drew Minns

With Griddy you can set the number of columns and rows using fr, px, % and auto units, but there's no minmax() function. You can add gaps to your columns and rows using both px and % and set justify-items and align-items properties to align items within the grid. You'll need media queries for responsiveness.

Below is what the generated code displays on the screen:

See the Pen
CSS Grid Generator #3 by Drew Minns
by Maria Antonietta Perna (@antonietta)
on CodePen.

Here's the generated code in place on the original demo:

The post 5 Super CSS Grid Generators for Your Layouts appeared first on SitePoint.

Mobile Attribution 101: What Every Developer Needs to Know

Aug 5, 2019


Mobile Attribution 101: What Every Developer Needs to Know

This article was created in partnership with StudioWorks. Thank you for supporting the partners who make SitePoint possible.

Optimizing the user experience of your app is crucial for its success, and the best way to do so is by collecting data on how users interact with it. While regular analytic tools do a good job, there’s an even better way now.

Welcome to mobile attribution. This approach to measuring app performance allows you to discover where and how users are interacting with your app and connect them with key points in the app journey.

But what exactly is mobile attribution and how do you use it? Keep reading to learn.

Mobile Attribution Explained

Mobile attribution is the process of connecting two metrics, such as ad spend and app installs, to see which marketing efforts drive results. Given that the mobile advertising industry will exceed $244 billion by 2020, you need to know which strategies and channels are wasting your money or generating the most returns.

Mobile attribution also helps mobile app developers and companies determine how users are interacting with apps and mobile ads. This information can then be used to optimize marketing campaigns, the user experience of an app, and more.

In a nutshell, the process looks like this:

How mobile attribution works: the user clicks an ad; the user ID is saved to the ad network; the user installs and opens the app; the attribution SDK launches; and the attribution tool collects the data.

Not taking advantage of mobile attribution means that you won’t have the most detailed and accurate data on your mobile app and its advertising performance. This can result in missing opportunities when they arise, or discovering problems too late.

Appsflyer is a market leader in ad attribution and analytics with its proprietary “People-based Attribution” technology, which offers one view of all devices, platforms and channels.

Next, let’s talk more about why you should be using mobile attribution.

The Benefits of Mobile Attribution

These are some of the main benefits you will experience by taking advantage of mobile attribution.

Track user events to optimize your app

You will be able to track every little detail of how users interact with mobile ad campaigns in your app. This includes where they found your app originally, which pages they navigate to, and what features they interact with the most.

This information is priceless. It’s telling you exactly what pages and features users enjoy the most, which channels drive the most traffic, and what campaigns are generating the most results for your app.

You can then invest more capital into profitable channels, push favorite features to the forefront, and improve how every user engages with your application.

The post Mobile Attribution 101: What Every Developer Needs to Know appeared first on SitePoint.

SitePoint Premium New Releases: Smashing 6 + GraphQL & React Native

Aug 2, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Smashing Book 6: New Frontiers In Web Design

It’s about time to finally make sense of all the front-end and UX madness. Meet Smashing Book 6, with everything you need to know about web design. From design systems to accessible single-page apps, CSS Custom Properties, CSS Grid, Service Workers, performance patterns, AR/VR, conversational UIs and responsive art direction.

➤ Read Smashing Book 6: New Frontiers In Web Design.

Working with GraphQL and React Native

In this tutorial, we’re going to demonstrate the power of GraphQL in a React Native setting by creating a simple coffee bean comparison app. So that you can focus on all of the great things GraphQL has to offer, Jamie has put together the base template for the application using Expo.

➤ Read Working with GraphQL and React Native.

And More to Come…

We're releasing new content on SitePoint Premium regularly, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Smashing 6 + GraphQL & React Native appeared first on SitePoint.

Master Modern JavaScript with This Curated Reading List

Jul 31, 2019


Are you daunted by the complexity of the JavaScript ecosystem? Are you still writing ES5, but looking for an opportunity to embrace modern standards? Or, are you confused by the explosion of frameworks and build tools, and unsure what to learn first? Fear not, here are my handpicked selection of books from SitePoint Premium, intended to help you well on your way to mastering modern JavaScript.

JavaScript: Novice to Ninja, Second Edition

I've placed this book at the top of my list, as it has something for almost everybody. It starts by covering the fundamentals (and thus serves well as a desk reference), then moves on to tackle more advanced topics, such as testing and functional programming.

The second edition has been updated to cover ECMAScript 6 and does a great job of introducing you to its more common features. You also get to put your newly acquired knowledge into practice at the end of each chapter, as you build out a quiz app, adding features as you move through the book. I really like this project-based approach to learning and think it is one of the better ways to advance your programming skills.

For those who just want to dip in, I'd recommend reading the Modern JavaScript Development chapter. This will bring you up to date with many of the recent developments, such as working with modules, and the hows and whys of transpiling your code.

Read the book

Practical ES6

This anthology picks up where Novice to Ninja left off and allows you to dive deeper into many of the newer additions to the JavaScript language. It covers much of the basic syntax (e.g. const, let, arrow functions, etc...), and offers a great way to get up to speed in a particular area.

There are also more in-depth articles on topics such as ES6 classes and ES6 modules, as well as a look at what came down the pipeline in ES2017 and ES2018. And if you're starting to get confused about what all these version numbers mean, we've got you covered. The anthology packs a chapter on JavaScript versioning and the process of deciding what gets added to the language.

Read the book

A Beginner's Guide to npm - the Node Package Manager

npm is a package manager for JavaScript, similar to PHP's composer, or Perl's CPAN. It allows you to search an online database of packages (a.k.a. the registry) and install them on your machine. The npm registry is vast — containing over 600,000 packages — and I think it is fair to say that it has revolutionized the way JavaScript developers collaborate with each other.

This short book from our Developer Essentials series has made the list because npm is something you cannot ignore if you are serious about writing JavaScript in 2019. The guide walks you through getting npm installed and configured (which can sometimes be a tad tricky) and using it effectively in your day-to-day work. If you're going to learn just one JavaScript tool in 2019, make it npm. You'll encounter it in tutorials everywhere and it is the standard delivery mechanism for almost any modern JavaScript library out there.

Read the book

JavaScript: Best Practice

Now that we've had a look at the basics, it's time to kick it up a notch with some JavaScript best practices. This anthology is full of tips and tricks to help you write modern JavaScript that is performant, maintainable, and reusable. It's hard to pick favorites from so many great titles, but there are two articles that stand out.

The Anatomy of a Modern JavaScript Application takes a good look at how to build a JavaScript application in 2019. It covers everything from application architecture to deployment and will help you to order many of the concepts and buzzwords you may have heard floating about.

Flow Control in Modern JavaScript introduces you to a variety of strategies for dealing with asynchronous JavaScript in a modern code base. It looks at one of my favorite additions to the language — async await — and dispels the myth that writing a JavaScript web app will automatically land you in callback hell.

Read the book

Node.js Web Development, Fourth Edition

No journey through modern JavaScript would be complete without a look at how to run it on the server. And this book gives you an excellent starting point, bringing you straight to the heart of developing web applications with Node.js.

As you follow along you'll build and iterate on a note taking app. This will form the basis for learning all about real-time applications, data storage, user authentication, deployment with Docker and much more. And even if server-side development isn't your thing, I'd still recommend reading the first couple of chapters. These will give you a good idea where Node fits in to today's JavaScript landscape.

Read the book

The Versioning Guide to Modern JavaScript

To finish we have The Versioning Guide to Modern JavaScript, which is really a large collection of links taken from the much-missed Versioning newsletter. I've included this, as there's so much going on in the world of modern JavaScript development, that I've barely been able to scratch the surface here. I'm confident that this guide will offer you a wealth of ideas and inspiration on what to dig into next.

Read the book

And that's a wrap. I hope this curated list goes some way to helping you navigate the choppy waters of modern JavaScript development.

The post Master Modern JavaScript with This Curated Reading List appeared first on SitePoint.

An Introduction to Data Visualization with Vue and D3.js

Jul 30, 2019


An Introduction to Data Visualization with Vue and D3.js

Web applications are normally data-driven and oftentimes the need arises to visualize this data. That’s where charts and graphs come in. They make it easier to convey information, as well as demonstrate correlations or statistical relationships. Information presented in the form of a chart or a graph is also easier for a non-native speaker to understand.

In this tutorial, we’ll learn how to visualize data in a Vue project. For this, we’ll be using the popular D3.js library, which combines powerful visualization components and a data-driven approach to DOM manipulation.

Let’s get started.

Note: the code for this tutorial can be found on GitHub.

What is D3?

As you can read on the project’s home page, D3.js is a JavaScript library for manipulating documents based on data. D3 helps you bring data to life using HTML, SVG, and CSS. Its emphasis on web standards gives you the full capabilities of modern browsers without tying yourself to a proprietary framework.

Whereas most people will refer to D3.js as a data visualization library, it’s not. D3 is more of a framework comprising different parts — such as jQuery parts (which help us select and manipulate DOM elements), Lodash parts, animation parts, data analysis parts, and data visualization parts.

In this tutorial, we’ll be working with the visualization aspect of D3. The real meat of D3 when visualizing data is:

the availability of functions for decorating data with drawing instructions
creating new drawable data from source data
generating SVG paths
creating data visualization elements (like an axis) in the DOM from your data and methods

What We’ll Be Building

We want to create an app that lets users search for a repo on GitHub, then get a visual representation of issues opened in the past week that are still open. The end result will look like this:

Final Chart


This tutorial assumes you have a working knowledge of Vue. Previous knowledge of D3.js isn’t required, but if you’d like to get up to speed quickly, you might want to read our D3 by example tutorial.

You’ll also need to have Node installed on your system. You can do this by downloading the binaries for your system from the official website, or using a version manager.

Finally, we’ll be using the following packages to build our app:

Vue CLI — to scaffold out the project
D3.js — to visualize our data
Lodash — which provides a handful of utility methods
Moment JS — for date and time formatting
axios — an HTTP client to help us make requests to an external API

New Vue Project

I prefer creating new Vue projects using Vue CLI. (If you’re not familiar with Vue CLI, our beginner’s guide in this Vue series gives a full introduction.) Vue CLI provides a nice folder structure for placing different sections of the code, such as styles, components, and so on.

Make sure that the CLI is installed on your machine:

npm install -g @vue/cli

Then create a new project with the following command:

vue create issues-visualization

Note: while creating a new project using Vue CLI, you’ll be prompted to pick a preset. For this particular project, we’ll just stick with the default (Babel + ESLint).

Once our new Vue project has been created, we cd into the project folder and add the various node modules we’ll need:

npm install lodash d3 axios moment

Even though this is a simple app that doesn’t have many moving parts, we’ll still take the component approach instead of dumping all the code inside the App.vue file. We’re going to have two components: the App component, and a Chart component that we’re yet to create.

The App component will handle fetching data from GitHub, then pass this data to the Chart component as props. The actual drawing of the chart will happen inside the Chart component. Structuring things this way has the advantage that, if you want to use a library other than axios to fetch the data, it’ll be easier to swap it out. Also, if you want to swap D3 for a different charting library, that’ll be easier too.

Building the Search Interface

We’ll start by building a search interface that lets users enter the name of the repo they want to see visualized.

In src/App.vue, get rid of everything inside the <template> tag and replace the content with this:

<template>
  <div id="app">
    <form action="#" @submit.prevent="getIssues">
      <div class="form-group">
        <input
          type="text"
          placeholder="owner/repo Name"
          v-model="repository"
          class="col-md-2 col-md-offset-5"
        >
      </div>
    </form>
  </div>
</template>

Here we have a form which, upon submission, prevents the browser’s default submission action, then calls a getIssues method that we’re yet to define. We’re also using a v-model directive to bind the input from the form to a repository property inside the data model of our Vue instance. Let’s declare that property repository as an empty string. We’ll also add a startDate property, which we’ll later use as the first date in our time range:

import moment from "moment";
import axios from "axios";

export default {
  name: "app",
  data() {
    return {
      issues: [],
      repository: "",
      startDate: null
    };
  },
  methods: {
    getIssues() {
      // code goes in here
    }
  }
};

Now on to creating the getIssues method:

getIssues() {
  this.startDate = moment()
    .subtract(6, "days")
    .format("YYYY-MM-DD");
  axios
    .get(
      `https://api.github.com/search/issues?q=repo:${this.repository}+is:issue+is:open+created:>=${this.startDate}`,
      { params: { per_page: 100 } }
    )
    .then(response => {
      const payload = this.getDateRange();
      response.data.items.forEach(item => {
        const key = moment(item.created_at).format("MMM Do YY");
        const obj = payload.filter(o => o.day === key)[0];
        obj.issues += 1;
      });
      this.issues = payload;
      console.log(this.issues);
    });
}

In the above block of code, we start by setting the startDate data property to six days ago and formatting it for use with the GitHub API.

We then use axios to make an API request to GitHub to get all issues for a particular repository that were opened in the past week and that are still open. You can refer to GitHub’s search API if you need more examples on how to come up with query string parameters.

When making the HTTP request, we set the results count to 100 per page (the max possible). There are hardly any repositories with over 100 new issues per week, so this should be fine for our purposes. By default, the per_page value is 30.
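For reference, the date computation above can also be sketched without Moment; this is a framework-free equivalent of moment().subtract(6, "days").format("YYYY-MM-DD"), not code from the app itself:

```javascript
// Format a Date as YYYY-MM-DD, like moment's .format("YYYY-MM-DD")
function formatDate(d) {
  const pad = n => String(n).padStart(2, "0");
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`;
}

// Six days ago, like moment().subtract(6, "days")
function sixDaysAgo(now = new Date()) {
  const d = new Date(now);
  d.setDate(d.getDate() - 6);  // Date handles month/year rollover for us
  return formatDate(d);
}

sixDaysAgo(new Date(2019, 7, 6));  // "2019-07-31": rolls back into July
```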

If the request completes successfully, we use a custom getDateRange method to initialize a payload variable that we can pass to the Chart component. This payload is an array of objects that will look like so:

[
  { day: "Dec 7th 18", issues: 0 },
  { day: "Dec 8th 18", issues: 0 },
  { day: "Dec 9th 18", issues: 0 },
  { day: "Dec 10th 18", issues: 0 },
  { day: "Dec 11th 18", issues: 0 },
  { day: "Dec 12th 18", issues: 0 },
  { day: "Dec 13th 18", issues: 0 }
]

After that, we iterate over the API’s response. The data we’re interested in is in an items key on a data property on the response object. From this, we take the created_at key (which is a timestamp) and format it as the day property in our objects above. From there, we then look up the corresponding date in the payload array and increment the issues count for that date by one.

Finally, we assign the payload array to our issues data property and log the response.
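Isolated from Vue, axios, and Moment, the tallying step boils down to the following (the sample data is made up, and ISO date strings stand in for moment's "MMM Do YY" format, purely for illustration):

```javascript
// Pre-built buckets, as getDateRange would return
const payload = [
  { day: "2019-08-01", issues: 0 },
  { day: "2019-08-02", issues: 0 }
];

// Hypothetical API items; only created_at matters for the tally
const items = [
  { created_at: "2019-08-01T10:00:00Z" },
  { created_at: "2019-08-01T15:30:00Z" },
  { created_at: "2019-08-02T09:00:00Z" }
];

// Bucket each issue into its creation day and increment that day's count
items.forEach(item => {
  const key = item.created_at.slice(0, 10); // stand-in for moment formatting
  const bucket = payload.find(o => o.day === key);
  if (bucket) bucket.issues += 1;
});

// payload is now [{day: "2019-08-01", issues: 2}, {day: "2019-08-02", issues: 1}]
```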

Next, let’s add in the getDateRange method:

methods: {
  getDateRange() {
    const startDate = moment().subtract(6, 'days');
    const endDate = moment();
    const dates = [];
    while (startDate.isSameOrBefore(endDate)) {
      dates.push({ day: startDate.format('MMM Do YY'), issues: 0 });
      startDate.add(1, 'days');
    }
    return dates;
  },
  getIssues() {
    ...
  }
}

Before we get to the visualization bit, let’s also log any errors we might encounter when making our request to the console (for debugging purposes):

axios
  .get( ...)
  .then(response => {
    ...
  })
  .catch(error => {
    console.error(error);
  });

We’ll add some UX for informing the user in the case that something went wrong later.

So far, we have an input field that lets the user enter the organization/repository name they wish to search issues for. Upon form submission, all issues opened in the past week are logged to the console.

Below is an example of what was logged on the console for the facebook/react repo:

Console output

If you start up the Vue dev server using npm run serve and enter some different repos, you should see something similar. If you’re stuck for inspiration, check out GitHub’s Trending page.

Next comes the fun bit — visualizing this data.

Drawing a Bar Chart Using D3

Earlier on, we mentioned that all the drawing will be handled inside a Chart component. Let’s create the component:

touch src/components/Chart.vue

D3 works on SVG elements, and for us to draw anything with D3, we need to have an SVG element on the page. In our newly created component (src/components/Chart.vue), let’s create an SVG tag:

<template>
  <div>
    <svg></svg>
  </div>
</template>

For this particular tutorial, we’ll visualize our data using a bar chart. I picked a bar chart because it’s a low-complexity visual element that teaches the basic application of D3.js itself. The bar chart is also a good introduction to the most important D3 concepts, while still being fun to build!

Before proceeding, let’s update our App component to include the newly created Chart component below the form:

<template>
  <div id="app">
    <form action="#" @submit.prevent="getIssues">
      ...
    </form>
    <chart :issues="issues"></chart>
  </div>
</template>

Let’s also register it as a component:

import Chart from './components/Chart.vue';

export default {
  name: "app",
  components: {
    Chart
  },
  ...
}

Notice how we’re passing the value of the issues data property to the Chart component as a prop:

<chart :issues="issues"></chart>

Let’s now update our Chart component to make use of that data:

<script>
import * as d3 from "d3";
import _ from "lodash";

export default {
  props: ["issues"],
  data() {
    return {
      chart: null
    };
  },
  watch: {
    issues(val) {
      if (this.chart != null) this.chart.remove();
      this.renderChart(val);
    }
  },
  methods: {
    renderChart(issues_val) {
      // Chart will be drawn here
    }
  }
};
</script>

In the above code block, we’re importing D3 and Lodash. We then initialize a chart data property as null. We’ll assign a value to it when we start drawing later on.

Since we want to draw the chart every time the value of issues changes, we’ve created a watcher for issues. Each time this value changes, we’ll destroy the old chart and then draw a new chart.

Drawing will happen inside the renderChart method. Let’s start fleshing that out:

renderChart(issues_val) {
  const margin = 60;
  const svg_width = 1000;
  const svg_height = 600;
  const chart_width = 1000 - 2 * margin;
  const chart_height = 600 - 2 * margin;

  const svg = d3
    .select("svg")
    .attr("width", svg_width)
    .attr("height", svg_height);
}

Here, we set the height and width of the SVG element we just created. The margin attribute is what we’ll use to give our chart some padding.
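The arithmetic behind those constants is worth spelling out: the drawable chart area is the SVG size minus a margin on each side. A quick sanity check in plain JavaScript:

```javascript
// The chart area is the SVG size minus a margin on both sides,
// matching the constants used in renderChart above.
const margin = 60;
const svgWidth = 1000;
const svgHeight = 600;

const chartWidth = svgWidth - 2 * margin;
const chartHeight = svgHeight - 2 * margin;

console.log(chartWidth, chartHeight); // 880 480
```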

D3 comes with DOM selection and manipulation capabilities. Throughout the tutorial, you’ll see lots of d3.select and d3.selectAll statements. The difference is that d3.select returns the first matching element, while d3.selectAll returns all matching elements.

The post An Introduction to Data Visualization with Vue and D3.js appeared first on SitePoint.

The Strategic Advantages of Headless Web Design

Jul 29, 2019


How Your Agency Can Use Headless Web Design as a Strategic Advantage

This article was created in partnership with Duda. Thank you for supporting the partners who make SitePoint possible.

Kentico’s most recent State of the Headless CMS report claims that the concept of the headless content management system is “becoming the industry standard for future-proofing and streamlining content creation.” In fact, the report estimates that by this summer, headless CMS use will have doubled.

But what does that even mean?

If you’re currently using a traditional CMS such as WordPress, Drupal or Joomla for your web development needs, chances are you may have never heard of a headless CMS. But what you might know is that you want to build a one-of-a-kind website for your growing agency that can scale with ease, and that marketing your brand across multiple channels is a must if you want to beat the competition.

In this article, we’re going to share with you what the headless CMS trend is all about and how using this API-powered approach to design and deploy your company’s website can help you get ahead, no matter how competitive your industry is.

So, let’s get started.

What Is a Headless CMS?

To better understand what a headless CMS is, let’s compare a traditional CMS (or “monolithic”, as developer Bret Cameron likes to call it), a decoupled CMS, and a headless CMS.

Image source Traditional CMS

Traditional CMS platforms like WordPress link the front end of your website, called the head, to the back end of your site, where all your content files and databases are stored. The head of the CMS is strictly responsible for presenting your website to site visitors when they click on your site. The back end, on the other hand, not only stores content, but is where website design and customization applications are stored, where content is created, and where management of site functionality occurs.

Paired together, as they traditionally are, the back-end portion of the website relies on the head of the CMS to display the stored content on devices to users.

Decoupled CMS

With a decoupled CMS architecture, the head portion and the back end of the site are split into two separate systems. One system is responsible for content creation and storage, and the other is responsible for presenting the data to users on an interface, such as a website, mobile app, smartwatch, etc.

When content is created on your website using a headless CMS, a RESTful API helps connect the back end to the head, so that the content can be delivered to users on any device or channel with ease.

A RESTful API is a type of application interface using HTTP requests to GET, POST, PUT, and DELETE data that’s requested by users. It allows for multiple data formats such as JSON, HTML, XML, and plain text. It’s ultimately what links the client and server in a decoupled CMS, allowing your site to infinitely scale and deliver content to anyone on any device.
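To make that concrete, here’s a rough sketch of how those verbs map onto content operations. The base URL and endpoint paths are invented for illustration and don’t belong to any particular CMS:

```javascript
// Sketch: mapping CRUD operations onto RESTful requests.
// The base URL and endpoint paths are invented examples.
const base = "https://cms.example.com/api";

function request(method, path, body) {
  return {
    method,
    url: base + path,
    headers: { "Content-Type": "application/json" },
    // GET and DELETE requests carry no body.
    body: body ? JSON.stringify(body) : undefined
  };
}

const listPosts  = request("GET", "/posts");
const createPost = request("POST", "/posts", { title: "Hello" });
const updatePost = request("PUT", "/posts/42", { title: "Hi" });
const deletePost = request("DELETE", "/posts/42");
```

Any HTTP client (the browser’s fetch, for example) can take options shaped like these, which is what lets the same back end feed a website, a mobile app, or a smartwatch.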

Headless CMS

A truly headless CMS eliminates the head portion altogether, leaving just the back end. In other words, there’s no dedicated system for front-end presentation. And while you might initially wonder if this type of structure might be to your disadvantage, it’s actually the best way to display content to your site visitors on all devices and interfaces, putting your agency in the best possible position to scale.

Here’s a simple breakdown of how it works:

1. Website owners create content (often in small blocks) in the headless CMS, with no regard for how it will display to users. They also store and manage this content here.
2. An API connects the back end to many different channels and the various engines that power their front ends.
3. The channel or device displays your site’s content.

The way your content will display on the different channels and devices will depend on the frameworks and tools your front-end developers use to act as the “head” portion of your headless CMS.

The result is more freedom to integrate with more front ends, greater scalability, and less risk of breaking anything. How, exactly? Let’s dive a little deeper.

The Advantages of a Headless CMS

APIs work with a headless CMS to do the following:

Reduce strain. Using a headless CMS, which stores content in a cloud repository as opposed to a server, will use less bandwidth, save resources, and reduce the strain your clients’ websites experience.

Manage and store content. With a headless CMS, all content, including written text and images, is stored in the back end of the database. With a traditional CMS, not only is content stored and managed here, but so are front-end templates, CSS, and plugins for front-end functionality. Separating the back end from the front end means you can upgrade and customize your website without compromising site speed or performance, since all your client needs to worry about is managing and storing content.

Enable third-party integrations. A headless CMS gives you the chance to use third-party systems to trigger, write and read content for you, making development less disruptive. It also gives developers the flexibility to use the front-end framework they prefer to display their site content, focusing more on content creation and less on content management.

Lastly, a headless CMS protects you, your company and your website’s content from future technological advances. After all, platforms and technology are always evolving, making it challenging to keep up.

For example, think about all the problems that those of us who didn’t build responsive websites had when mobile-friendliness became a necessity. People all over, regardless of how well-established and successful they were at the time, had to change everything to ensure a seamless mobile experience.

And there’s no end in sight to the trends — from artificial intelligence to augmented reality to voice assistants — that have the power to change the way you build and deliver digital content experiences. Traditional CMS platforms were designed with website publishing in mind. They were not built with social media product listings, smartwatch apps or talking speakers in mind.

So when the build is decoupled from the delivery, you’re in the best possible position to experiment with new channels and formats on an agile basis. And this is a key aspect to any agency’s value proposition.

If you take advantage of the headless CMS approach and use APIs to deliver your content to where it needs to surface, it won’t matter what changes. That’s because as long as an API-fed front end can be built, your client’s content can be configured to render properly.

Headless CMS Limitations

Though it might seem as though a headless CMS is the answer to all your website problems, be aware that there are some downsides preventing traditional CMS users from making the switch:

Limited editing UI. When compared to traditional CMSs, headless CMSs usually lack the flexibility that content managers generally rely on to optimize the content for specific front-end uses. If you need to use your CMS for creating landing pages or even article page layouts, the lack of a “Preview” button might be an issue. There is no WYSIWYG page editor on these platforms, since the whole point of going headless is that it won’t render HTML, which makes designing medium-specific content experiences more difficult.

Lack of built-in features. Boris Kraft, CTO and co-founder of Magnolia CMS, reminds people that a traditional CMS will generally come with features like “asset management, navigation, security, workflow, access control, caching, categorization and link management.” In fact, he goes on to say that while a headless CMS does provide companies with more flexibility, many often get lost in the hype and forget that “I have to write, debug and maintain everything I need myself” with a headless CMS solution.

While there is no CMS solution that satisfies all needs, it’s worth noting that the headless CMS approach can be beneficial when used in a hybrid situation.

In fact, if you use a solution such as Duda, a leading web design platform for companies providing web design services to others, and take advantage of APIs to deliver content and handle your site’s structure and layouts on multiple channels, you can get the best of both worlds.

The post The Strategic Advantages of Headless Web Design appeared first on SitePoint.

SitePoint Premium New Releases: React + React Native

Jul 26, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Get Started with React Native

For developers with a web background, frameworks such as React Native allow you to create mobile apps with languages you’re already familiar with: HTML, XML, CSS, and JavaScript. This guide will help get you up and running with React Native.

➤ Read Get Started with React Native.

Build Your Own React Universal Blog App

An introductory course to building your first universal React app. Starting with an introduction to React and the invaluable Create React App toolkit, we’ll walk you through the steps of creating a universal React blog app from scratch.

➤ Read Build Your Own React Universal Blog App.

And More to Come…

We're releasing new content on SitePoint Premium almost every day, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: React + React Native appeared first on SitePoint.

A Beginner’s Guide to Feathers.js

Jul 25, 2019


A Beginner's Guide to Feathers

In this article, you’ll learn how to build a RESTful API Server in Node.js using Feathers.

An API server, also known as an Application Server, is a program that provides data to front-end applications. It also handles business logic in the back end and provides restricted access to an organization's database. It doesn't just prevent unauthorized persons from accessing the data; it can also restrict logged-in users from accessing or altering data if they don't have permission to do so.

Every application you build will need to provide a service to its end users. For that, your application will need data to process. You can use remote APIs to create a new service. For most applications, though, you’ll need to manage your own data store. A popular option is to use online data storage services such as Firebase. This way, you don't have to deal with the nitty gritty details of running a distributed database server. However, your project needs may require the use of a full-fledged, in-house database management system such as MongoDB or Oracle. For your front-end application to access the data stored in the database, you’ll need a server application that sits between the database and the front-end application.


As illustrated in the diagram above, the work of an application server is to access data from a database using SQL or NoSQL commands and convert it into a format that front-end applications (client browser) can understand — such as JSON. In addition, the application server can use various security protocols — such as HTTPS encryption and token authorization — to ensure that communication between the database and the client application is safe and secure. One main advantage of using such an architecture is that you can deploy applications that target different platforms — desktop, mobile, web, and so on — using the same application server. It’s also very easy to scale your application horizontally in order to serve more users efficiently with fast response times.
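The “convert into a format the client understands” step can be as small as mapping raw database rows into plain objects and serializing them to JSON. A minimal sketch, with an invented row shape:

```javascript
// Sketch: an application server shaping raw database rows into a
// JSON payload for the client. The row fields are invented examples.
const rows = [
  { id: 1, first_name: "Ada", last_name: "Lovelace" },
  { id: 2, first_name: "Alan", last_name: "Turing" }
];

function toApiResponse(rows) {
  const data = rows.map(r => ({
    id: r.id,
    name: `${r.first_name} ${r.last_name}`
  }));
  // JSON is what client browsers and mobile apps consume.
  return JSON.stringify({ total: data.length, data });
}
```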

We’re going to build a simple API server and demonstrate the various features that Feathers provides.


Before you begin following this tutorial, you’ll need to have a good foundation in the following topics:

ES6 JavaScript
creating Express apps
creating RESTful APIs with Express

Feathers is built on top of Express, a minimalist web framework for Node.js. If you’ve completed the tutorials demonstrated in the links, you’ll realize that it's quite tiring building RESTful APIs using just Express. With Feathers, most of the repetitive work is already done for you. You only need to focus on configuring and customizing code. Let's dive into the code and learn how this web framework works.

Project Creation

To get started with Feathers, you’ll need to install its command line application globally:

npm install -g @feathersjs/cli

Next, create a new API project using the commands below:

mkdir contacts-api
cd contacts-api
feathers generate app

Below are the options I chose. Feel free to choose any testing framework. Unfortunately, testing is beyond the focus of this article, so it won't be covered here. Personally, I like simplicity, and that’s why I went with Jest.

Creating a Feathers app in the command line

Once the installation is complete, you can open your favorite code editor to look at the project files.

Project structure, as seen in a code editor

If you’ve completed the Express tutorials I listed in the prerequisites section, you shouldn't be intimidated by the generated code. Here's a brief summary that describes the folders and files.

The created files

Don't be too concerned with what each file does right now. You’ll come to understand how they work in the course of this tutorial. For now, let's confirm that the tests are working.


To ensure our project is compliant with the defined ESLint rules, just run the command npm test. If you’re on a Unix or Linux platform, this should run fine. If you’re on Windows, there are a few things you need to adjust for the tests to run successfully.

First, go to package.json and look at the scripts section. Change the test line to this:

"scripts": {
  "test": "npm run eslint && SET NODE_ENV= npm run jest"
},

Next, if you’ve installed Prettier in Visual Studio Code, you'll need to change the single quote setting to true in the Workspace settings tab:

{ "prettier.singleQuote": true }

Finally, make sure that, when you create or edit any file, the line ending is LF. If you’re using Visual Studio Code or a similar editor, you can check the current line ending style at the status bar. If it says CRLF, change to LF. Making those changes will help you pass the lint tests. Unfortunately, to make the tests pass will require a bit more work, which won't be covered here.

Let's look at how we can generate a CRUD RESTful interface.

Generate Service

Building a Restful CRUD API interface in Express requires a bit of work. In Feathers, all you have to do is execute a single command, answer a few questions and have the code generated for you:

$ feathers generate service
? What kind of service is it? NeDB
? What is the name of the service? contacts
? Which path should the service be registered on? /contacts
? What is the database connection string? nedb://../data
   force config\default.json
  create src\services\contacts\contacts.service.js
   force src\services\index.js
  create src\models\contacts.model.js
  create src\services\contacts\contacts.hooks.js
  create test\services\contacts.test.js

We’ll be using the NeDB database for this tutorial. Feathers supports both SQL databases such as MySQL and NoSQL databases such as MongoDB. However, installing a database system — whether on your machine or on a cloud server — requires a certain amount of time to configure. NeDB, on the other hand, is an in-memory database that’s 100% JavaScript and supports a subset of the MongoDB API. There’s no configuration needed; you just install it. It’s a great database for prototyping and testing new applications.

Let's briefly look at some of the files that have been generated using this command:

services/contacts/contacts.service.js. This is a Feathers service that provides the CRUD API endpoints for /contacts. Pretty small, isn't it? This is because Feathers does the heavy lifting for us, saving us from writing boilerplate CRUD code.

services/contacts/contacts.hooks.js. This is where we customize how the CRUD logic behaves. We have a before section, where we can check or change data before Feathers reads or writes to the database. We also have an after section, where we can check or change the results from the database before they’re sent to the client application. We can do things like restricting access, validating data, performing join operations, and calculating values for additional fields or columns.
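Conceptually, a hooks file just wraps functions around the core database operation. Here’s a simplified plain-JavaScript sketch of the before/after idea; it is not Feathers’ actual hook implementation:

```javascript
// Simplified sketch of the before/after hook idea.
// Not Feathers' real hook API; for intuition only.
function runService(method, context, hooks) {
  for (const hook of hooks.before || []) context = hook(context);
  context.result = method(context.data);
  for (const hook of hooks.after || []) context = hook(context);
  return context.result;
}

// A before hook that validates incoming data.
const requireName = ctx => {
  if (!ctx.data.name) throw new Error("name is required");
  return ctx;
};

// An after hook that hides a field from the response.
const stripPhone = ctx => {
  delete ctx.result.phone;
  return ctx;
};

const result = runService(
  data => ({ ...data }), // stand-in for the database write
  { data: { name: "Jack", phone: "+1234567" } },
  { before: [requireName], after: [stripPhone] }
);
// The phone field was stripped from the result after the write.
```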

models/contacts.model.js. This is where we define a model and attach it to a database table. This is also where we define a schema, which can be used to validate fields when a new record is inserted or updated. Unfortunately, NeDB doesn’t support schemas. However, I've provided an example of a model that’s connected to MongoDB, which supports the schema feature via the mongoose adapter:

"use strict";

const mongoose = require("mongoose");
const Schema = mongoose.Schema;
require("mongoose-type-email");

const contactsSchema = new Schema({
  name: {
    first: {
      type: String,
      required: [true, "First Name is required"]
    },
    last: {
      type: String,
      required: false
    }
  },
  email: {
    type: mongoose.SchemaTypes.Email,
    required: [true, "Email is required"]
  },
  phone: {
    type: String,
    required: [true, "Phone is required"],
    validate: {
      validator: function(v) {
        return /^\+(?:[0-9] ?){6,14}[0-9]$/.test(v);
      },
      message: "{VALUE} is not a valid international phone number!"
    }
  },
  createdAt: { type: Date, default: Date.now },
  updatedAt: { type: Date, default: Date.now }
});

const contactsModel = mongoose.model("contacts", contactsSchema);

module.exports = contactsModel;
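The rules that schema expresses can also be checked by hand, which is handy while prototyping on NeDB. A rough sketch mirroring the same required-field and phone-format rules (plain JavaScript, not mongoose or Feathers code):

```javascript
// Sketch: hand-rolled validation mirroring the schema rules above.
// Plain JavaScript, not mongoose or Feathers code.
const phonePattern = /^\+(?:[0-9] ?){6,14}[0-9]$/;

function validateContact(contact) {
  const errors = [];
  if (!contact.name || !contact.name.first) errors.push("First Name is required");
  if (!contact.email) errors.push("Email is required");
  if (!contact.phone) errors.push("Phone is required");
  else if (!phonePattern.test(contact.phone)) {
    errors.push(`${contact.phone} is not a valid international phone number!`);
  }
  return errors;
}
```

A complete contact such as { name: { first: "Jack" }, email: "jack@ctu.mail", phone: "+1234567" } yields an empty error list, while a record missing fields gets one message per failed rule.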

Despite the limitations of using NeDB, it’s still a great database for prototyping. Most NoSQL databases will allow you to submit data using any structure without having to define a schema first. It’s wiser to implement a schema once the project requirements have been settled. With a schema in place, Feathers will perform field validation for you using the rules you’ve defined. You'll need a production-ready database such as MongoDB to be able to define a schema. Do note that the configuration for the development database is defined in config/default.json:

"nedb": "../data"

This is where database credentials are provided. We also have another config file called config/production.json. This is the production database configuration that’s used when you deploy your Feathers app. It's important to use a separate database during development. Otherwise, you run the risk of deleting or corrupting business operational data on the production database.

Now that we have our CRUD service for contacts set up, it's time to take it for a spin. You can start the Feathers server using the command npm start. Do note that this server doesn’t support hot reloading, so you'll need to restart it every time you make a change to the code. In order to interact with our Feathers app, we’ll need an API browser tool such as Postman or Insomnia. I'll be using Insomnia in this tutorial, but you can follow along easily with Postman or any other tool.

Create a new GET request (press Ctrl + N) and give it the title “List Contacts”. In the URL section, enter http://localhost:3030/contacts. When you hit the Send button, you should have the following view:

Listing contact requests: the view in your code editor after hitting the Send button

Nothing! Our database is currently empty, so we need to create some new contacts. Create a new request called Create Contact. Fill in the rest of the fields as shown below:

Form for creating the new contact

If you forgot to set the method to POST when creating the request, you can do so now. Change the method to POST and set the Body tab to JSON. Copy the following data into the JSON tab:

{
  "name": { "first": "Jack", "last": "Bauer" },
  "email": "jack@ctu.mail",
  "phone": "+1234567"
}

When you hit the Send button, you should get the following response. Notice that an _id has been generated for your new contact.

Response with new ID

Go back to List Contacts and hit the Send button again. You should get the following result:

{
  "total": 1,
  "limit": 10,
  "skip": 0,
  "data": [
    {
      "name": { "first": "Jack", "last": "Bauer" },
      "email": "jack@ctu.mail",
      "phone": "+1234567",
      "_id": "ybnRxL6s2QEGhj4i"
    }
  ]
}
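The total, limit and skip fields in that response describe Feathers’ pagination: limit is the page size, and skip is how many records to jump over. The slicing logic amounts to something like this sketch (not Feathers’ internals):

```javascript
// Sketch of how a paginated response like the one above is assembled.
// Not Feathers' internal implementation.
function paginate(records, { limit = 10, skip = 0 } = {}) {
  return {
    total: records.length,
    limit,
    skip,
    data: records.slice(skip, skip + limit)
  };
}

const page = paginate(["a", "b", "c", "d", "e"], { limit: 2, skip: 2 });
// page.data is ["c", "d"]; total stays 5 so clients can compute page counts.
```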

Go back to Create Contact and post a couple of new records:

{
  "name": { "first": "Chloe", "last": "O'Brian" },
  "email": "chloe@ctu.mail",
  "phone": "+1987654"
}

{
  "name": { "first": "Renee", "last": "Walker" },
  "email": "renee@fbi.mail",
  "phone": "+150505050"
}

Let's now perform an update. For this, we won't use the PUT HTTP method, which completely overwrites a record. What we want to do is overwrite just a single field, not the whole record. For that, we’ll use PATCH. Create a new request, Update Contact, as illustrated below:

Updating a contact

In the URL field, put http://localhost:3030/contacts/{_id}. Replace {_id} with the ID of the first record. Place the following data into the JSON tab:

{ "email": "" }

Hit the Send button. You should get the following result:


Notice how the rest of the fields remain intact. Next, we’re going to delete a record. This one is easy. Just create a new DELETE request and name it Delete Contact. In the URL field, use the format http://localhost:3030/contacts/{_id}. Just like before, replace {_id} with the ID of the record you want to delete. Hitting Send will delete that record for you. You can confirm by running the List Contacts request again.
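The PUT-versus-PATCH distinction boils down to replace versus merge. In plain JavaScript terms (a conceptual sketch with an invented new email value, not Feathers code):

```javascript
// Conceptual sketch: PUT replaces the stored record wholesale,
// while PATCH merges the changes into what is already there.
// The new email value is an invented example.
const stored = {
  name: { first: "Jack", last: "Bauer" },
  email: "jack@ctu.mail",
  phone: "+1234567"
};
const changes = { email: "bauer@ctu.mail" };

const afterPut = { ...changes };              // name and phone are lost
const afterPatch = { ...stored, ...changes }; // name and phone survive
```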

We've just verified that all CRUD operations are running okay. In the next section, we’ll learn how to set up authentication.

The post A Beginner’s Guide to Feathers.js appeared first on SitePoint.

The Best Web Hosting Providers For Your Needs

Jul 25, 2019


Hosting Providers: A Comparison

In this article, we're going to dive into the offerings of the most prominent players in the hosting industry, and wade through their plans, infrastructure, and reputation among users, to give you recommendations for the best hosting provider for your needs.

We divided hosting vendors into three categories:

general-purpose shared hosting providers
WordPress-specialized premium providers
unmanaged, dedicated server solutions

General-purpose Shared Hosting Providers

This category covers the widest part of the hosting market — from the entry-level shared plans to prosumer and premium offerings with dedicated resources and different levels of support.

A2 Hosting

A2 Hosting home page

A2 Hosting was founded in 2003 in Michigan, USA. They offer entry-level shared hosting packages, managed VPS and dedicated servers, and reseller hosting.

Their shared packages are standard cPanel packages, with SSD-based storage and a promise of reserved CPU and RAM resources — even with the smallest of plans.

Their LITE plan begins at $2.96 per month, which allows for one website. Their SWIFT plan has unlimited websites, starting at $3.70 per month. And their TURBO plan starts at $7.03 per month. These prices are — as the A2 website claims — a 63% discount.

They offer free migration, 24/7 support, DDOS protection and “99.99% uptime commitment”, which is always nice to know.

A2 developer features

The flagship plan that distinguishes A2 is the TURBO package, which uses LiteSpeed servers — an excellent replacement for Apache, which also brings with it a performance edge.

Underlying LiteSpeed allows A2 to offer QUIC as an experimental feature on the TURBO plan.

A2 offers four possible server locations — two data centers in the US, one in Singapore and one in Amsterdam, so they'll have you covered in most parts of the world.

For more details check out their website.


SiteGround

SiteGround home page

SiteGround is a “holding of companies registered in the USA, UK, Bulgaria, and Spain that manages four offices and several datacenter locations around the world”. Established in 2004, it has become somewhat of an institution in the hosting world that’s been recommended by WordPress itself.

SiteGround’s hosting range is similar to A2’s, and starts from the entry-level shared packages all the way to cloud products and dedicated servers. The company also offers bespoke enterprise solutions.

SiteGround offers packages similar to A2’s plans — although its GrowBig and GoGeek plans cost a bit more than comparable A2 plans.

SiteGround plans

However, SiteGround does offer free daily backups across all the plans, free migration on GrowBig and GoGeek — as well as staging installation, which is handy for more professional setups.

Use of resources for each plan is limited in terms of server processes, simultaneous connections, CPU seconds, RAM, cronjob intervals — and they are specific and transparent about it, which is a plus.

One caveat to note is that the prices mentioned above are promotional prices for the first year of signup; plans renew at a higher price after that.

SiteGround's server stack consists of NGINX as a caching solution in front of Apache — which means that even SiteGround's StartUp plan is not constrained by Apache limitations.

Screenshot of a Facebook reply from SiteGround

We use NGINX as a reverse proxy for caching, standing in front of an Apache server. This means that cached results get zero latency from the web server, PHP or MySQL services. The content comes out directly from the server's memory. It's as if you're fetching an HTML file — mostly networking between you and the server, and of course the size of your content. — Hristo Pandjarov

They offer server locations in the UK, USA, continental Europe and Asia (Singapore).

SiteGround has also recently announced support for the QUIC next-generation protocol, which will increase site loading speed even if there’s poor internet connectivity at your location.

For more details check out their website.


Bluehost

Bluehost was founded by Matt Heaton in 2003, and enjoyed immense popularity until 2011, when it was acquired by EIG. Since the acquisition, it has remained one of the most prominent players in the hosting scene, but it has been plagued by mixed reviews regarding its user experience.

Bluehost home page

Bluehost offers a similar range of products to the others — from entry-level shared plans to VPS, dedicated servers, ecommerce and WordPress packages. But it currently offers the lowest prices of the three mentioned in this category (for a 36-month term; prices go up as you'd expect for shorter terms).

Bluehost packages

All but the smallest of these plans include unlimited resources (SSD storage). However, do note this from the fine print of their Terms of Service:

While rare, we occasionally constrain accounts utilizing more resources than should be the case in the normal operation of a personal or small business website.

Negative Bluehost facebook post

Hosting Recommendations Please

We're looking for fast, reliable + scalable hosting for 100+ WordPress sites (of varying sizes) with WHM + cPanel.

Been migrating to Bluehost Dedicated server and it's just not cutting it. Support is slow and the server has been dropping in and out over the past 24 hours + turns out the resources are capped (despite what “sales” said).

“No specified limits to resources” in this case may translate to “no guaranteed resources”. Even before the EIG acquisition, Bluehost had introduced a CPU-throttling strategy — so be warned. You can read more about Bluehost's restrictions regarding resources here.

On its bigger plans, Bluehost offers SSH access, cron jobs, daily backups etc. But it’s mostly oriented toward the entry-level market.

Bluehost has some perks and a custom management web UI for WordPress customers.

As for the server locations — Bluehost doesn’t offer straightforward information, but besides Utah, USA, they seem to have data centers in China and India (Mumbai).

The post The Best Web Hosting Providers For Your Needs appeared first on SitePoint.

10+ Top Vue.js Tools & Libraries

Jul 23, 2019


Vue continues to grow in popularity and is rapidly being adopted by many developers, and Vue.js tools are popping up everywhere. This is not without reason: Vue's shallow learning curve, clear functionality-driven structure, and excellent documentation make it easy for novices to pick it up, and for more experienced developers to make a switch from other frameworks like React or Angular.

If you are serious about Vue development, sooner or later you'll meet some fundamental tools and libraries which stand out from the crowd. Using them will level up your career as a Vue developer, and make you feel like a professional.

I've compiled a list of the most notable tools and libraries you should know and eventually use in your Vue.js projects. Unlike many other articles out there, which list only UI component libraries, this compilation explores a much broader mixture of tools, libraries, and plugins in the Vue ecosystem.

I've selected these based on their usefulness, effectiveness, and uniqueness — not their GitHub popularity or star ratings.

Enough talk: here they are, the top ten.


Vue CLI

It seems that having some kind of CLI tool is a must for every JavaScript application framework these days. Vue is no exception. Vue CLI is a fully-featured set of tools for rapid Vue development. Besides the usual project scaffolding, it allows you to experiment with new ideas even without creating a full project, by using its instant prototyping feature.

By default, Vue CLI offers support for the major web development tools and technologies, such as Babel, TypeScript, ESLint, PostCSS, PWA, Jest, Mocha, Cypress, and Nightwatch. This is possible thanks to its extensible plugin system. This means the community can build and share reusable plugins for common needs.

But the icing on the cake is the powerful GUI (Vue UI, which comes with the CLI) which allows you to create your project easily, and then configure and manage it along the way without the need for ejection.



VuePress

The next big player in Vue's ecosystem is VuePress, a Vue-powered static site generator. Initially created as a tool for writing technical documentation, now it's a small, compact, and powerful headless CMS. Since version 1.x, it has offered great blogging features and a powerful plugin system. It comes with a default theme (tailored to technical documentation), but you can also build custom themes or use a pre-made option from the community.

In VuePress, you write the content in Markdown, which is then transformed to pre-rendered static HTML files. Once those files are loaded, your site runs as a single-page application powered by Vue, Vue Router and Webpack.

One of the main benefits of VuePress is that you can include Vue code or components within your Markdown files. This gives you great power and flexibility because you can develop your site almost like a regular Vue app, with all benefits that come from that.



Gridsome has many similarities with VuePress but it takes a different and very powerful approach when dealing with data sources. It allows you to connect and use many different kinds of data in your app, which are then unified in one GraphQL layer. Basically, Gridsome uses Vue for front-end functionality and GraphQL for data management. The way this works can be summarized in the following three steps:

The post 10+ Top Vue.js Tools & Libraries appeared first on SitePoint.

How to Create Websites for Business Startups

Jul 23, 2019


This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Calling all web designers: We've got some good news for you.

New startups are appearing around the globe thanks to opportunities created through the new global economy. The good news is this:

New startups lead to a growing need for more business websites. In turn, this means more work for web designers.

These aren't isolated neighborhood businesses either. Their outreach is global and digital. They need websites that are optimized for conversions and flexible. So, here's your chance. You can serve these new entrepreneurs by creating great websites for them.

How do you go about it?

Below are 5 simple steps to create eye-catching, high-converting websites, with helpful examples.

5 Steps to Building Astonishing Business Startup Websites

Step 1: Choose a mesmerizing color palette

There are 3 simple rules to follow when choosing a color palette for a business startup website:

You want a color palette that instantly attracts attention
It needs to be on-brand
It needs to visually support the message you're trying to get across

Over's bold color touches instantly attract attention.

This pre-built website, included in the Be Theme library, offers another example of an attention-grabbing color palette.

Forest does an excellent job at aligning its color palette with its brand — earth tones such as greens and browns are throughout. These palettes contribute to a visually memorable website.

FlightCard's subtle and cold color palette is a perfect choice for reinforcing its message: boarding passes have become so easy to use that you barely pay attention to them.

And if you're in need of a color palette that will appeal to a larger audience, BeApp2 can serve you perfectly well.

Step 2: Display crystal-clear product pics

This shouldn't be difficult to figure out. People want to know exactly what they are being offered. It has little to do with the color of the icons you're using.

A business startup has to present its products with flair and dignity. Its product pics need to give it an extra edge over huge businesses that have budgets to match.

Pennies proudly displays what its award-winning app looks like on a smartphone. Just by looking at the page you can tell how easy it is to use, and even figure out how the color codes work.

You can create a similar website with BeWallet. This pre-built website was designed from scratch for financial service startups. There's a huge audience of them in dire need of a better website.

Or, you could choose a more general pre-built website like BeSoftware to display a new business startup's crystal-clear pics.

JibJab takes things a step further. Its before and after approach is a persuasive and simple tactic to show what the startup offers.

Step 3: Show visitors how the startup serves them

The key here is to help visitors imagine themselves as actual customers by showing how the startup will serve them.

PeekCalendar has placed a video in the hero section. It shows how people can benefit from the new business.

BeApp3 allows you to incorporate product pics and a video to show how the new startup will benefit people using its products and services.

BePay has a Watch Video CTA button right above the fold that encourages your visitors to immediately view a product demo.

CutestPaw will soften up the hardest heart. Its entire how-to is cleverly rolled up into one single image: a hero shot showing how the new business will meet users' specific needs.

Step 4: Use whitespace

While it's possible to overuse whitespace, more is usually better. Whitespace is the #1 visual design element for a startup website. You want to be able to work with as much space as possible to highlight your main messages and product shots.

Tha Fly Nation's clean, airy design is pleasing to the eye and allows it to focus on the most important elements.

You can create a similar startup website with BeProduct4 or BeHosting2. These are pre-built websites where blank space is used to enhance user experience and emphasize the critical elements on the page.

SpellTower is perhaps the most extreme when it comes to whitespace. It is part of their brand and its minimalist design drives the message home.

Step 5: Design CTAs that grab users by the eyeballs

An easy-to-locate CTA button doesn't always cut it. If you want to convert visitors into users, you want a CTA button that's impossible to ignore: big, bright, and bold, so striking that a visitor feels compelled to click it!

Wire's CTA button clearly stands out above the fold. It has a bold color, it's big enough to draw your attention once you've read the headline, and it's centered on the page so it acts as a "gate" that must be clicked to move forward.

BeERP uses the same bright-green button for its main CTA (the one you want your visitors to click). A secondary, plain button is to the right.

You can also direct attention to your CTA by having it match other elements on the page. A great example is BeKids. Here, the blue CTA button matches the visual elements in the hero section.


Follow these surefire steps so you can present your client with a website that is both eye-catching and built for conversion. Websites that feature stunning visuals and clever uses of white space create the kind of visitors a startup needs.

If you're fortunate to grab a good share of new business, you could easily find yourself overloaded. In this case, you can safely use websites that have been built from the ground up specifically for businesses.

You'll find the most generous gallery on Be Theme, with a library of more than 450 pre-built websites to choose from, each of which you can customize to your liking.

These pre-built websites are functional, visually impressive, and feature interactive elements, stunning effects, and intuitive navigation. Simply personalize the one you select to fit your business or that of your client, and you're good to go!

The post How to Create Websites for Business Startups appeared first on SitePoint.

How to Build a Vue Front End for a Headless CMS

Jul 22, 2019


In this guide, we’ll learn how to build a modern blog website using Vue.js and GraphCMS, a headless CMS platform.

If you’re looking to start a quick blog today, my recommendation is to go straight to WordPress.

But what if you’re a media powerhouse and you want to deliver your content as fast as possible to multiple devices? You’ll probably also need to integrate your content with ads and other third-party services. Well, you could do that with WordPress, but you’ll come across a few problems with that platform.

You’ll need to install a plugin to implement additional features. The more plugins you install, the slower your website becomes. PHP is quite slow compared to most JavaScript web frameworks. From a developer’s perspective, it’s much easier and faster to implement custom features on a JavaScript-powered front end.

JavaScript offers superior performance to PHP in browser loading tests. In addition, modern JavaScript and its ecosystem provides a far more pleasant development experience when it comes to building new web experiences fast.

So there’s been a growth of headless CMS solutions — which are simply back ends for managing content. With this approach, developers can focus on building fast and interactive front ends using a JavaScript framework of their choice. Customizing a JavaScript-powered front end is much easier than making changes on a WordPress site.

GraphCMS differs from most Headless CMS platforms in that, instead of delivering content via REST, it does so via GraphQL. This new technology is superior to REST, as it allows us to construct queries that touch on data belonging to multiple models in a single request.

Consider the following model schema:


Post
id: Number
title: String
content: String
comments: array of Comments


Comment
id: Number
name: String
message: String

The above models have a one(Post)-to-many(Comments) relationship. Let’s see how we can fetch a single Post record attached with all linked Comment records.

If the data is in a relational database, you have to construct either one inefficient SQL statement, or two SQL statements for fetching the data cleanly. If the data is stored in a NoSQL database, you can use a modern ORM like Vuex ORM to fetch the data easily for you, like this:

const post = Post.query()
  .with('comments')
  .find(1);

Quite simple! You can easily pass this data via REST to the intended client. But here’s the problem: whenever the data requirement changes at the client end, you’ll be forced to go back to your back-end code to either update your existing API endpoint, or create a new one that provides the required data set. This back and forth process is tiring and repetitive.

What if, at the client level, you could just ask for the data you need and the back end will provide it for you, without you doing extra work? Well, that’s what GraphQL is for.
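To make that concrete, here's a minimal sketch of what such a request might look like from JavaScript. The query shape follows common GraphQL conventions, but the `where` filter syntax, field names, and the `fetchPost` helper are assumptions based on the models above, not code from this project:

```javascript
// A hypothetical GraphQL query: the client names exactly the fields it
// wants, and the server returns the post together with its linked
// comments in a single round trip.
const POST_WITH_COMMENTS = `
  query PostWithComments($id: ID!) {
    post(where: { id: $id }) {
      title
      content
      comments {
        name
        message
      }
    }
  }
`;

// Sketch of sending the query to a GraphQL endpoint over HTTP POST.
// The endpoint URL is supplied by the caller; nothing here is
// GraphCMS-specific beyond the assumed filter syntax.
function fetchPost(endpoint, id) {
  return fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: POST_WITH_COMMENTS, variables: { id } })
  }).then(res => res.json());
}
```

If the client later needs an extra field (say, each comment's creation date), only the query string changes; no back-end endpoint has to be edited.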


Before we begin, I’d like to note that this is a guide for intermediate to advanced users. I won’t be going over the basics, but rather will show you how to quickly build a Vue.js blog using GraphCMS as the back end. You’ll need to be proficient in the following areas:

ES6 and ES7 JavaScript
Vue.js (using CLI version 3)
GraphQL

That’s all you need to know to get started with this tutorial. Also, a background in using REST will be great, as I’ll be referencing this a lot. If you’d like a refresher, this article might help: “REST 2.0 Is Here and Its Name Is GraphQL”.

About the Project

We’ll build a very simple blog application with a basic comment system. Below are the links you can visit to check out the completed project: the demo and the GitHub repo.

Please note that a READ-ONLY token has been used in the demo and consequently the comments system won’t work. You’ll need to supply your OPEN permission token and endpoint as per the instructions in this tutorial for it to work.

Create GraphCMS Project Database

Head over to the GraphCMS website and click the “Start Building for Free” button. You’ll be taken to their signup page.

Signing up to GraphCMS

Sign up using your preferred method. Once you’ve completed the account authentication and verification process, you should be able to access the main dashboard.

The GraphCMS main dashboard

In the above example, I’ve already created a project called “BlogDB”. Go ahead and create a new one, and call it whatever you want. After you’ve entered the name, you can leave the rest of the fields in their defaults. Click Create and you’ll be taken to their project plan.

GraphCMS plans

For the purposes of this tutorial, select the free Developer plan then click Continue. You’ll be taken to the project’s dashboard, which looks something like this:

The GraphCMS project dashboard

Go to the Schema tab. We’re going to create the following models, each with the following fields:


Category
name: Single line text, required, unique


Post
slug: Single line text, required, unique
title: Single line text, required, unique
content: Multi line text


Comment
name: Single line text, required
message: Multi line text, required

Use the Create Model button to create models. On the right side, you should find a hidden panel for Fields, which is activated by clicking the Fields button. Drag the appropriate field type onto the model’s panel, and you’ll be presented with a form to fill in your field’s attributes. Note the pink button labeled Advanced at the bottom: clicking it expands the panel to give you more field attributes you can enable.

Field attributes in the Advanced tab

Next, you’ll need to add the relationship between models as follows:

Post > Categories (many-to-many)
Post > Comments (one-to-many)

Use the Reference field to define this relationship. You can add this field to any side; GraphCMS will automatically create the opposite relation field in the referenced model. When you’ve completed defining the models, you should have something like this:

GraphCMS models

You’ve now completed the first part. Let’s now provide some data to our models.

GraphQL Data Migration

To add content to your models, you can simply click the Content tab in the project dashboard where you can create new records for each of your models. However, if you find this to be a slow method, you’ll be happy to know that I’ve created a GraphCMS migration tool that copies data from CSV files and uploads them to your GraphCMS database. You can find the project here in this GitHub repository. To start using the project, simply download it into your workspace like this:

git clone
cd graphcsms-data-migration
npm install

Next, you’ll need to grab your GraphCMS project’s API endpoint and token from the dashboard’s Settings page. You’ll need to create a new token. For the permission level, use OPEN, as this will allow the tool to perform READ and WRITE operations on your GraphCMS database. Create a file called .env and put it at the root of the project:

ENDPOINT=<Put api endpoint here>
TOKEN=<Put token with OPEN permission here>

Next, you may need to populate the CSV files in the data folder with your own. Here’s some sample data that has been used:

// Categories.csv
name
Featured
Food
Fashion
Beauty

// Posts.csv
title,slug,content,categories
Food Post 1,food-post-1,Breeze through Thanksgiving by making this Instant Pot orange cranberry sauce,Food|Featured
Food Post 2,food-post-2,This is my second food post,Food
Food Post 3,food-post-3,This is my last and final food post,Food
Fashion Post 1,fashion-post-1,This is truly my very first fashion post,Fashion|Featured
Fashion Post 2,fashion-post-2,This is my second fashion post,Fashion
Fashion Post 3,fashion-post-3,This is my last and final fashion post,Fashion
Beauty Post 1,Beauty-post-1,This is truly my very first Beauty post,Beauty|Featured
Beauty Post 2,Beauty-post-2,This is my second beauty post,Beauty

You can change the content if you want. Make sure not to touch the top row, as otherwise you’ll change the field names. Please note, for the column categories, I’ve used the pipe | character as a delimiter.
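As an illustration of how a migration script might consume that column, here's a small sketch that parses one Posts.csv data row and splits the categories cell on the pipe character. The `parsePostRow` helper is hypothetical, and a real script would use a proper CSV parser to handle quoted commas:

```javascript
// Parse one data row from Posts.csv (illustrative only; naive split
// on commas works here because the sample content contains none).
function parsePostRow(row) {
  const [title, slug, content, categories] = row.split(",");
  return {
    title,
    slug,
    content,
    // The pipe character delimits multiple categories in one cell.
    categories: categories.split("|")
  };
}

const post = parsePostRow(
  "Fashion Post 1,fashion-post-1,This is truly my very first fashion post,Fashion|Featured"
);
// post.categories is ["Fashion", "Featured"]
```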

To upload the CSV data to your GraphCMS database, execute the following commands in this order:

npm run categories
npm run posts

Each script will print out the records that have been uploaded successfully. We upload categories first so that the post records can link to existing category records.

If you want to clean out your database, you can run the following command:

npm run reset

This script will delete all your model’s contents. You’ll get a report indicating how many records were deleted for each model.

I hope you find the tool handy. Go back to the dashboard to confirm that data for the Posts and Categories have successfully been uploaded.

With the back end taken care of, let’s start building our front-end blog interface.

Building the Blog’s Front End Using Vue.js

As mentioned earlier, we are going to build a very simple blog application powered by a GraphCMS database back end. Launch a terminal and navigate to your workspace.

If you haven’t got Vue CLI installed, do that now:

npm install -g @vue/cli

Then create a new project:

vue create vue-graphcms

Choose to manually select features, then select the following options:

Features: Babel, Router
Router History Mode: Y
ESLint with error prevention only
Lint on save
Config file placement: Dedicated Config Files
Save preset: your choice

Once the project creation process is complete, change into the project directory and install the following dependencies:

npm install bootstrap-vue axios

To set up Bootstrap-Vue in our project, simply open src/main.js and add the following code:

import BootstrapVue from "bootstrap-vue";
import "bootstrap/dist/css/bootstrap.css";
import "bootstrap-vue/dist/bootstrap-vue.css";

Vue.config.productionTip = false;
Vue.use(BootstrapVue);

Next, we need to start laying down our project structure. In the src/components folder, delete the existing files and create these new ones:

CommentForm.vue
CommentList.vue
Post.vue
PostList.vue

In the src/views folder, delete About.vue and create a new file called PostView.vue. As seen from the demo, we’ll have several category pages each displaying a list of posts filtered by category. Technically, there will only be one page that will display a different list of posts based on an active route name. The PostList component will filter posts based on the current route.

Let’s first set up the routes. Open src/router.js and replace the existing code with this:

import Vue from "vue";
import Router from "vue-router";
import Home from "./views/Home.vue";
import Post from "./views/PostView.vue";

Vue.use(Router);

export default new Router({
  mode: "history",
  base: process.env.BASE_URL,
  linkActiveClass: "active",
  routes: [
    { path: "/", name: "Featured", component: Home },
    { path: "/food", name: "Food", component: Home },
    { path: "/fashion", name: "Fashion", component: Home },
    { path: "/beauty", name: "Beauty", component: Home },
    { path: "/post/:slug", name: "Post", component: Post }
  ]
});

Now that we have our routes, let’s set up our navigation menu. Open src/App.vue and replace the existing code with this:

<template>
  <div id="app">
    <b-navbar toggleable="md" type="dark" variant="info">
      <b-navbar-toggle target="nav_collapse"></b-navbar-toggle>
      <b-navbar-brand href="#">GraphCMS Vue</b-navbar-brand>
      <b-collapse is-nav id="nav_collapse">
        <b-navbar-nav>
          <router-link class="nav-link" to="/" exact>Home</router-link>
          <router-link class="nav-link" to="/food">Food</router-link>
          <router-link class="nav-link" to="/fashion">Fashion</router-link>
          <router-link class="nav-link" to="/beauty">Beauty</router-link>
        </b-navbar-nav>
      </b-collapse>
    </b-navbar>
    <b-container>
      <router-view/>
    </b-container>
  </div>
</template>

This will add a nav bar to the top of our site with links to our different categories.

Save the file and update the following files accordingly:


src/views/Home.vue:

<template>
  <div class="home">
    <PostList />
  </div>
</template>

<script>
import PostList from "@/components/PostList.vue";

export default {
  name: "home",
  components: {
    PostList
  }
};
</script>


src/components/PostList.vue:

<template>
  <section class="post-list">
    <h1>{{ category }} Articles</h1>
    <hr/>
    <p>Put list of posts here!</p>
  </section>
</template>

<script>
export default {
  name: "PostList",
  data() {
    return {
      category: ""
    };
  },
  created() {
    this.category = this.$route.name;
  },
  watch: {
    $route() {
      this.category = this.$route.name;
    }
  }
};
</script>

Notice that, in the PostList component, we’re using a custom watcher to update our category data property, based on our current URL.

Now we’re ready to perform a quick test to confirm the routes are working. Spin up the Vue.js server using the command npm run serve. Open a browser at localhost:8080 and test each navigation link. The category property should output the same value we defined in the route’s name attribute.

A page view of our app

The post How to Build a Vue Front End for a Headless CMS appeared first on SitePoint.

4 Signs It’s a Bad Idea to Quit Your Job

Jul 18, 2019


And what to do when you have to stay.

Edwarden, a developer and StackExchange user, wanted advice.

His boss yelled at him for asking about a promotion in the future. Or, more specifically, for asking for advice on the best process for receiving a promotion. Edwarden exceeded his manager's expectations, and other managers were very happy with his performance as well.

His manager also knew what Edwarden wanted to discuss ahead of time.

It didn't matter.

His manager exploded. He began yelling, shouting and stomping his feet. "You've only been here... how many months?! Seven?! And now you're asking for a promotion!" His manager threw a tantrum and continued to interrupt him until he agreed that he had no "additional concerns."

Here's the Advice He Received from Other Users

You should quit your job.

You should look for a new job; I don't think your boss is going to promote you.
How many red flags do you need to see before you look for a new job? Do yourself a favor and get out.
This shows you why the turnover rate is high and why you should try to get out as soon as possible.

Is it a bad idea for him to quit his job?

How do you know?

It's not something we're taught in school. We graduate, get a job and then we're just supposed to figure this out on our own. While some of us do, many of us don't.

That's the question.

What are the signs that indicate it's a bad idea to quit your job? First, let's take a look at some of the more common/obvious answers.

The post 4 Signs It’s a Bad Idea to Quit Your Job appeared first on SitePoint.

JavaScript Remains the Queen of Programming Languages

Jul 18, 2019


This article was originally published on Developer Economics. Thank you for supporting the partners who make SitePoint possible.

Take the Developer Economics survey and have your say on what the next programming language Queen should be. You could win amazing prizes and gear. Discover more.

Welcome to another update on programming language communities. The choice of programming language matters deeply to developers because they want to keep their skills up to date and marketable. Languages are a beloved subject of debate and the kernels of some of the strongest developer communities. They matter to toolmakers too, as they want to make sure they provide the most useful SDKs.

Language growth

It can be hard to assess how widely used a programming language is. The indices available from players like Tiobe, Redmonk, Stack Overflow’s yearly survey, or Github’s Octoverse are great, but mostly offer only relative comparisons between languages, providing no sense of the absolute size of each community. They may also be biased geographically, or skewed towards certain fields of software development, or open source developers.

The estimates we present here look at active software developers using each programming language, across the globe and across all kinds of programmers. They are based on two pieces of data. First, our independent estimate of the global number of software developers, which we published for the first time in 2017. Second, our large-scale, low-bias surveys which reach more than 20,000 developers every six months. In the survey, we consistently ask developers about their use of programming languages across nine areas of development, giving us rich and reliable information about who uses each language and in which context.

JavaScript is and remains the queen of programming languages. Its community of 11.7M developers is the largest of all languages. In 2018, 2.5M developers joined the community: the highest growth in absolute numbers and more than the entire population of Swift, Ruby, or Kotlin developers, amongst others. New developers see it as an attractive entry-level language, and existing developers are adding it to their skillset as well. Even in the software sectors where JavaScript is least popular, like machine learning or on-device code in IoT, over a quarter of developers use it for their projects.

Python has reached 8.2M active developers and has now surpassed Java in terms of popularity. It is the second-fastest growing language community in absolute terms with 2.2M net new Python developers in 2018. The rise of machine learning is a clear factor in its popularity. A whopping 69% of machine learning developers and data scientists now use Python (compared to 24% of them using R).

Java (7.6M active developers), C# (6.7M), and C/C++ (6.3M) are fairly close together in terms of community size and are certainly well-established languages. However, all three are now growing at a slower rate than the general developer population. While they are not exactly stagnating, they are no longer the first languages that (new) developers look to.

Java is very popular in the mobile ecosystem and its offshoots (Android), but not for IoT devices. C# is a core part of the Microsoft ecosystem. Throughout our research, we see a consistent correlation between the use of C# and the use of Microsoft developer products. It’s no surprise to see desktop and AR/VR (Hololens) as areas where C# is popular. C/C++ is a core language family for game engines and in IoT, where performance and low-level access matter (AR/VR exists on the boundary between games and IoT).

PHP is now the second most popular language for web development and the fifth most popular language overall, with 5.9M developers. Like Python, it’s growing significantly faster than the overall developer population, having added 32% more developers to its ranks in 2018. Despite having (arguably) a somewhat bad reputation, the fact that PHP is easy to learn and widely deployed still propels it forward as a major language for the modern Internet.

The fastest growing language community in percentage terms is Kotlin. It grew by 58% in 2018 from 1.1M to 1.7M developers. Since Google has made Kotlin a first-class language for Android development, we can expect this growth to continue, in a similar way to how Swift overtook Objective-C for iOS development.

Other niche languages don’t seem to be adding many developers, if any. Swift and Objective-C are important languages to the Apple community, but are stable in terms of the number of developers that use them. Ruby and Lua are not growing their communities quickly either.

Older and more popular programming languages have vocal critics, while new, exciting languages often have enthusiastic supporters. This data would suggest that it’s not easy for new languages to grow beyond their niche and become the next big thing. What does this mean for the future of these languages and others like Go or Scala? We will certainly keep tracking this evolution and plan to keep you informed.

The Developer Economics survey is now live.

Have your say on which should be the next programming language Queen and you may win amazing prizes and gear. Discover more.

The post JavaScript Remains the Queen of Programming Languages appeared first on SitePoint.

5 Top WordPress Tools and Services for You to Use in 2019

Jul 17, 2019


This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

If you're looking for the best WordPress tools and services to take your business to the next level this year, you have plenty to choose from – 54,991 WordPress plugins and services, to be exact.

That sounds like really good news until it's time to find the right match for your website-building projects. You need a game plan that defines the plugins you want that will enable your site to attract visitors.

You'll also need a game plan for conducting what could turn out to be a very lengthy and tedious search.

You can save yourself a lot of time and trouble by checking out the must-have WordPress tools and services listed below — one or more of which might be all you'll need.

1. Elementor

Elementor, with its quick and powerful drag-and-drop editor, enables you to create WordPress landing pages and sites without any need for coding. This and several of its other features combine to make Elementor the most advanced WordPress page builder on the market today.

Elementor works with any theme, and in doing so makes it possible to avoid many of the constraints and limitations some themes can impose on web designers. Elementor also works with any WordPress plugin without slowing down your site, and this tool's squeaky clean code also contributes to your site's overall performance.

You can build your site from scratch on Elementor using its large selection of widgets, or you can select among hundreds of pre-designed templates that can be inserted into any page to get off to a quick start or speed up your workflow.

If you were limited to one choice among this list, you might get the most bang for your buck by choosing Elementor. Fortunately, you're not limited, so feel free to select any or all of the remaining four tools and services as well.

2. Brizy WordPress Website Builder

The Brizy WordPress website builder is yet another tool that could be the solution to most, and perhaps all, of your design problems. This drag-and-drop website builder is delightfully easy to use, no coding or coding skill is needed, and you can start using it without paying a dime.

Need to have a website up-and-running quickly? Brizy's 240 design blocks, 150 layouts, and 16 popups will take you a long way, particularly if you find starting a website from scratch is a little intimidating.

As for the more detailed aspects of your website design, the Brizy package includes 4,000 free icons, a pop-up builder if you want to create a unique pop-up design, and easily changed fonts and color schemes.

Yet another cool feature is the ease with which you can control how your website will appear on tablets and mobile devices.

3. WordPress Site Care

Creating a website on WordPress can be accomplished easily and efficiently if you have the right tools on hand. Once a website is up and running you can usually maintain it without too much difficulty. That's not always the case, however, especially if you're responsible for keeping multiple websites up to date and in running order.

Let Newt Labs take care of managing and maintaining your WordPress websites, so you can spend your time doing what you enjoy doing best – building one high-performing, client-satisfying website after another.

The Newt Labs team can help you with small fixes, managing WordPress updates, providing backups, WordPress optimized cloud hosting, and more.

The WordPress site care they can provide will not only make your work life a little easier, but it will protect your reputation as well by addressing potential problems before they become real.

4. Goodie

The Goodie platform joins end-clients directly with a developer, allowing them to avoid go-betweens that can sometimes be costly, time-consuming, or a communications bottleneck when attempting to accurately transform a design into code. Goodie can completely code your website at a special price of $999.

The only thing Goodie needs from you is your design – a great solution for small businesses in need of a carefully coded website.

5. Wordlift

WordLift is the first WordPress plugin to use artificial intelligence for SEO. It grows your website's organic traffic by creating machine-friendly content that chatbots, search crawlers, and personal digital assistants like Google Assistant, Siri and Alexa use to help consumers take action.

WordLift adds a layer of metadata and builds a knowledge graph Google uses to match the searcher's intent. Moreover, it measures content performance in terms of traffic per topic and in this way it helps editors improve their editorial plan and increase their reach.
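Structured metadata of the kind described above is commonly expressed as schema.org JSON-LD embedded in the page. The snippet below is a hypothetical illustration of that idea, not WordLift's actual output; the headline, author, and topic values are placeholders:

```javascript
// Hypothetical schema.org JSON-LD describing an article. Machine-readable
// metadata like this is what search crawlers and digital assistants
// consume to understand a page's topic and relationships.
const articleMetadata = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "5 Top WordPress Tools and Services", // placeholder headline
  author: { "@type": "Person", name: "Jane Doe" }, // placeholder author
  about: [{ "@type": "Thing", name: "WordPress" }] // placeholder topic
};

// A plugin would typically emit this in the page head as:
// <script type="application/ld+json">{...serialized metadata...}</script>
const serialized = JSON.stringify(articleMetadata);
```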

Tips to Make Your WordPress Site Secure

Choose a good hosting company

Don't be tempted to go with a cheap hosting provider. Going with one that provides multiple layers of security could save you from data-loss or redirection nightmares down the road.

Install a WordPress security plugin

Installing a WordPress security plugin that monitors your site 24/7 and checks for site security issues is far easier than doing a periodic site security check on your own, especially if you lack development skills.

Update WordPress regularly

Whenever WordPress is updated, improvements are made, bugs are eliminated, and security is improved. If you don't update regularly, some of those bugs could bite you. To update WordPress, go to the dashboard and check whether a new version has been released. If so, click the Update Now button.

Back up your site regularly

It's all about creating a copy of all your site's data and storing it in a safe place. By doing so, you'll be able to restore the site from your backup copy should anything bad happen.


9 items of value – 5 top tools and services and 4 operation and maintenance tips. Pick one or more of the tools and services and follow through with the tips as you complete your site, and you should be in great shape.

High-performance websites you can build yourself (or with the help of a developer), and a solid security and maintenance program should combine to make your life a lot easier.

The post 5 Top WordPress Tools and Services for You to Use in 2019 appeared first on SitePoint.

The Tao of Digital Agency Profitability

Jul 16, 2019


a Taoish feature image

It’s 2019, and despite years of people espousing the death of the “digital agency”, the industry is thriving and confident. Amazing work continues to be produced by top digital agencies around the world, and clients continue to hire agencies to complement their internal teams, in particular when creative or technical innovation is required.

a Taoish feature image

And not only that, well managed digital agencies are producing world class work whilst also achieving strong profit margins. Yes, it is possible to balance the creative Yin with the operational Yang. But it requires focus and discipline. And maybe a little luck.

Our objective for this article is to distill and understand the main drivers of agency profitability, and provide tips, tactics and techniques that agency leaders can put into practice in their own businesses.

To do this, we asked the leaders of ten digital agencies from the SoDA membership for their secrets of success in the pursuit of healthy profit margins — what has worked for them, what hasn’t, and for any surprising insights and ideas they can share.

SoDA is a member-based network of 100 of the best digital agencies in the world, many of whom are globally renowned for their creative and technical innovation. Lesser known is the amazing talent in the management, finance, operations and sales teams of these agencies, who work in support of their creative endeavors.

The agency leaders we interviewed have run or are still running some of the best digital agencies on the planet. Half of those interviewed have had their digital agency acquired, whilst the other half continue to run and grow independent agencies. Some run global agencies with offices around the world; others are based in one location.

A common thread with the agencies we interviewed is not just the financial discipline, but the quality of the work — with awards including the Emmys, Cannes Lions, the Webby Awards, One Show and pretty much anything else you can imagine. Financial performance and quality work are not mutually exclusive!

After we received the interview responses, we compared these against the SoDA 2018 KPI Benchmark Study, a detailed annual survey that saw 61 digital agencies provide data against hundreds of key metrics. We isolated the data for global averages and top performers and used this to provide benchmark figures, which are highlighted in grey throughout the article. The global average for EBITDA performance was around 10% while top performers delivered margins of 20% or more.

The result of the interviews and qualitative research is a set of eight Key Drivers that we believe drive superior agency profitability:

nurture repeat customers
reduce project cost overruns
maximize billable utilization
have keen financial discipline
build a strong sales pipeline
develop a capability for low cost delivery
manage your ratio of billable vs non-billable staff
value your people and culture

Each section below includes a description of how that Key Driver impacts profitability, how to measure the impact, and ideas for improving performance in this area.

Acknowledging that there are many flavors of digital agency, we hope that the ideas below will create some awareness of how profit can be maximized, and spark inspiration and action.

Let’s get started!

Nurture Repeat Customers

If you want to improve profitability in your agency and have time to do only one thing, do this. Almost all of the agencies who collaborated on this article recommend increasing the amount of repeat business you get vs pitching for work with new clients.

Long-term client relationships result in higher profit for the agency than individual projects, partly due to the trust that builds with the client (allowing your agency to charge appropriately for your services), and partly due to the reduced cost of pitching for new business. Win rates when bidding for work with an existing client are typically twice as high as with new potential clients, so the cost of pitching and losing is drastically reduced.

As Bill Fritsch, former Chairman and CEO of Digital Kitchen says:

Serious profitability comes when clients value your work and feel valued by your team. When they prefer working with your organization versus other firms, clients are more willing to work in partnership and to pay more for your service.

A good benchmark is to have over 70% of your annual revenue from existing clients (i.e. clients that were already a client last year) and the remainder from new clients you have won throughout the year.

There are both “hunting” and “farming” components to growing repeat business:

Hunting: when your agency is invited to pitch for a new piece of work, in addition to your other qualification questions ask yourself whether this is a project or an account that you are bidding for. Ideally if your pipeline is strong enough you can turn down opportunities that are project-based, and concentrate your energy on those that could have long-term potential.

Russ Whitman (founder of Ratio, acquired in 2017, and now Managing Director at Globant) advises that when bidding for new work, you should change the conversation from talking about “projects” to “partnership”. Once you have won the project, Russ says “your #1 goal is to sell the long-term rolling program”.

Image of green mountains

Farming: keeping and growing a client account requires excellence in project delivery, and so the Project Manager becomes a key role in the business. Some agencies have a compensation plan to incentivize Project Managers based on profitable project delivery as well as growth of revenue with their clients.

Naturally, the PM needs to be surrounded by the right team to guarantee quality delivery. UK digital agency Red Badger take this very seriously, and as Cain Ullah (founder and CEO) says, they —

will only take on a new project if we can resource it with a high quality team with the appropriate blend of experience and resources that have worked on previous Red Badger projects.

Swift resolution of issues is also important. At one of Bill Fritsch’s agencies —

we created a special FIRE ALERT line. If clients had hot issues, this line would be answered 24/7. And whoever was on the Fire line was rewarded for getting answers so clients didn’t have to sweat.

If you are delivering projects well, and managing issues swiftly, you should hopefully have earned permission to ask for more work from that client. As Bill suggests:

encouraging clients to do more with your firm can be as simple as training people to “always have an idea” that leads to the next thing. We developed a reward system that gave significant bonuses to client teams that turned the first project into a second opportunity before the first project was delivered. And an even bigger bonus for winning a third assignment in the same period of time.

A critical aspect of increasing client loyalty, longevity and revenue is to keep in touch with how happy they are with your agency. A client survey on a regular basis can help to benchmark client satisfaction and keep people in the agency focused on continuous improvement. The most common method is using an NPS score to track client satisfaction, along with a number of qualitative questions you could include in a (short) client survey.
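As a back-of-the-envelope sketch of how the NPS metric works (the netPromoterScore helper below is hypothetical, not from the article): NPS is the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6), with passives (7–8) ignored.

```javascript
// Hypothetical helper: compute Net Promoter Score from 0-10 survey answers.
function netPromoterScore(scores) {
  const promoters = scores.filter((s) => s >= 9).length;  // 9-10
  const detractors = scores.filter((s) => s <= 6).length; // 0-6
  // Passives (7-8) count toward the total but neither bucket.
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// Six responses: three promoters, one detractor, two passives.
console.log(netPromoterScore([10, 9, 8, 7, 6, 10])); // 33
```

Scores range from −100 (all detractors) to +100 (all promoters), which is why even a modest positive NPS is often considered good.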

This doesn’t replace regular face-to-face discussions with the client of course, whether in a formal or social setting — and this is also something you can set a goal against and measure — all of your most important client accounts should see you personally at least once every few months.

The post The Tao of Digital Agency Profitability appeared first on SitePoint.

Getting Started with October CMS Static Pages

Jul 15, 2019


These days it can be tough for website developers, especially now with WordPress going through the biggest and the most dramatic update in its history. Over the past few months, we’ve observed a growing interest in the October CMS community.

October CMS is the natural choice for developers that look for a modern and reliable content management system based on the Laravel PHP framework. Launched in 2014, today October CMS is a mature platform with a large ecosystem. October CMS is known for its reliability and non-breaking updates, which is greatly appreciated by developers and their clients. The philosophy “Getting back to basics” really matters to freelancers and digital studios, whose businesses depend on the quality of the software they use. The quickly growing Marketplace and the supporting businesses built around October are great proof of the community’s trust. In 2018 October CMS was also voted the Best Flat File CMS in the CMS Critic Award contest.

October CMS has a simple, extensible core that provides the basic CMS functions; much more functionality can be added with plugins from the Marketplace, which spans many product categories, including blogging, e-commerce, contact forms, and others.

In this tutorial, we will demonstrate how to create a website with pages editable in WYSIWYG (What You See Is What You Get) mode, and blogging features. We will use the Static Pages plugin that allows you to create a dynamic website structure with pages and menus manageable by non-technical end users. At the same time, the website will include dynamic blog pages, with content managed with the Blog plugin. All plugins used in this tutorial are free for anyone to install. The ideas from this tutorial can be extended to creating more complex websites. Themes implementing similar features can be found on the Themes Marketplace, but the goal of this tutorial is to show how easy it is to make an October website from scratch.

This is what we will have at the end:

The website theme is completely based on Twitter Bootstrap 4.3. The demo website also includes a simple CSS file that manages some light styling features, such as padding and colors, but since it is not relevant for the topic of this tutorial, it won’t be described in this guide.

Installing October CMS

To install October CMS you must have a web server meeting the minimum requirements. You can use MAMP as a solution to try October on your computer.

The post Getting Started with October CMS Static Pages appeared first on SitePoint.

8 of the Best Design Handoff Tools

Jul 11, 2019


A Roundup of the Best Design Handoff Tools

Design handoff (before it was even called that) was a complicated, frustrating, and often disastrous task. Way back when, Photoshop was the only tool available for screen design, and converting a design to code was called “slicing a PSD.”

Oh, the days.

Slicing a PSD was the developer’s responsibility, which was quite frustrating because developers understandably didn’t want to work with design tools. Meanwhile, designers had to manually write out design specs for every layer in a Photoshop document, which often resulted in inconsistencies and heated discussions with developers. This set designers and developers on a path of war that even today we’re not ready to joke about.

But eventually we were introduced to Sketch. Thanks to its extensible API, developers were able to build apps that could analyze and interpret design documents completely. Today, design handoff tools have become a must-have in every design workflow, with almost every screen design tool integrating with (or providing its own) design handoff solution.

What Do Design Handoff Tools Do?

Design handoff tools have three main objectives:

to help designers export their designs from [insert tool here]
to help developers inspect and implement said design
to facilitate feedback and collaboration between stakeholders

The design handoff workflow often looks like this:

The designer mocks up the design in a screen design tool.
The designer sends the mockups to a design handoff tool.
Stakeholders look at the design, and make comments if needed.
The designer fixes any issues, then sends an updated version.
The developer then inspects the finished design, layer by layer.

Design handoff tools translate each layer into code, and the developer can then use this code as the basis for developing the app or website.

Without design handoff, developers are left with only one alternative: guessing. Guessing can result in inaccuracies — for example the wrong colors being used, or an interaction behaving incorrectly — which in turn impacts user experience.

All handoff tools work the same way, but they don’t all support the same platforms or your screen design tool of choice. If you don’t use Sketch, for example, Marvel isn’t going to be all that useful to you as a design handoff tool.

Let’s take a look at the best design handoff tools that are currently available.


Zeplin has been leading the charge on design handoff since the concept was first realized, integrating with Sketch and Photoshop, and more recently Adobe XD and Figma. Designs synced from any of these tools can be translated into CSS, Android, Swift, Objective-C or React Native code, which includes the styles of each layer and any assets that have been marked as exportable.

The Zeplin dashboard

This functionality is standard for design handoff tools, although with Zeplin being the first (or at least one of the first), the user experience of their app is almost unrivaled.

And as with all other design handoff tools, commenting features are there to aid feedback and collaboration.

Platforms: Web, macOS, Windows
Pricing: Free Plan, $17, $26, or $122.40 (/month)

The post 8 of the Best Design Handoff Tools appeared first on SitePoint.

How to Plot Charts in Python with Matplotlib

Jul 10, 2019


You generate a huge amount of data on a daily basis. A critical part of data analysis is visualization. A variety of graphing tools have been developed over the past few years. Given the popularity of Python as a language for data analysis, this tutorial focuses on creating graphs using a popular Python library — Matplotlib.

Matplotlib is a huge library, which can be a bit overwhelming for a beginner — even if one is fairly comfortable with Python. While it is easy to generate a plot using a few lines of code, it may be difficult to comprehend what actually goes on in the back-end of this library. This tutorial explains the core concepts of Matplotlib so that one can explore its full potential.

Let's get started!


The library that we will use in this tutorial to create graphs is Python's matplotlib. This post assumes you are using version 3.0.3. To install it, run the following pip command in the terminal.

pip install matplotlib==3.0.3

To verify the version of the library that you have installed, run the following commands in the Python interpreter.

>>> import matplotlib
>>> print(matplotlib.__version__)
3.0.3

If you are using Jupyter notebooks, you can display Matplotlib graphs inline using the following magic command.

%matplotlib inline

Pyplot and Pylab: A Note

During the initial phases of its development, Mathworks' MATLAB influenced John Hunter, the creator of Matplotlib. There is one key difference between the use of commands in MATLAB and Python. In MATLAB, all functions are available at the top level. Essentially, if you imported everything from matplotlib.pylab, functions such as plot() would be available to use at the top level.

This feature was convenient for those who were accustomed to MATLAB. In Python, though, this could potentially create a conflict with other functions.

Therefore, it is good practice to import the pyplot module explicitly:

from matplotlib import pyplot as plt

All functions such as plot() are available within pyplot. After the import above, you can call the same plot() function as plt.plot().

Dissecting a Matplotlib Plot

The Matplotlib documentation describes the anatomy of a plot, which is essential in building an understanding of various features of the library.


The major parts of a Matplotlib plot are as follows:

Figure: The container of the full plot and its parts
Title: The title of the plot
Axes: The X and Y axis (some plots may have a third axis too!)
Legend: Contains the labels of each plot

Each of these elements can be manipulated in Matplotlib, as we will see later.

Without further delay, let's create our first plot!

Create a Plot

Creating a plot is not a difficult task. First, import the pyplot module. While not required, it is conventionally imported under the shorter alias plt. Use the .plot() method and provide a list of numbers to create a plot. Then, use the .show() method to display it.

from matplotlib import pyplot as plt

plt.plot([0, 1, 2, 3, 4])
plt.show()

Your first plot with matplotlib

Notice that Matplotlib creates a line plot by default. The numbers provided to the .plot() method are interpreted as the y-values to create the plot. Here is the documentation of the .plot() method for you to further explore.
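To see that default in action, here is a small sketch (it selects the non-interactive Agg backend so it also runs on headless machines) that prints the x-values Matplotlib fills in when you pass a single list:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; safe without a display
from matplotlib import pyplot as plt

# A single list is interpreted as the y-values...
line, = plt.plot([0, 1, 4, 9, 16])

# ...and Matplotlib supplies x = 0, 1, ..., N-1 automatically.
print([int(x) for x in line.get_xdata()])  # [0, 1, 2, 3, 4]
```

To control the x-values yourself, pass two lists instead: plt.plot(x_values, y_values).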

Now that you have successfully created your first plot, let us explore various ways to customize your plots in Matplotlib.

Customize Plot

Let us discuss the most popular customizations for your Matplotlib plot. Each of the options discussed here is a method of pyplot that you can invoke to set the parameters.
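As a preview, here is a hedged sketch combining a few common pyplot customizations (the labels and filename are illustrative); it uses savefig() in place of show() so the script also runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so the script works headless
from matplotlib import pyplot as plt

plt.plot([1, 2, 4, 8], label="growth")  # label feeds the legend
plt.title("A customized plot")          # Title
plt.xlabel("step")                      # X axis label
plt.ylabel("value")                     # Y axis label
plt.legend()                            # Legend: shows the "growth" label
plt.savefig("customized-plot.png")      # write the figure to a file
```

Each of these calls can also be made through the object-oriented API (ax.set_title() and friends), which becomes useful for multi-plot figures.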

The post How to Plot Charts in Python with Matplotlib appeared first on SitePoint.

Learn to Design and Animate in 3D with Zdog

Jul 9, 2019


There's a cool JavaScript library that names like Chris Gannon, Val Head, and CodePen are all raving about. You can also find it on ProductHunt, where it's been doing rather well. The library is none other than Dave DeSandro's Zdog.

In this article, I'm going to introduce you to Zdog and show you some cute demos made by amazing devs that you can reverse-engineer and learn from.

Let's dive in!

What Is Zdog

DeSandro explains what Zdog is about on the library's dedicated website:

The post Learn to Design and Animate in 3D with Zdog appeared first on SitePoint.

How to Use Git Branches & Buddy to Organize Project Code

Jul 9, 2019


This article was created in partnership with Buddy. Thank you for supporting the partners who make SitePoint possible.

In this article, you will learn how to set up continuous integration/deployment pipelines for your branching workflow. We will be using Buddy CI/CD services to set up the pipelines. We'll use a basic JavaScript project where we'll set up a couple of development branches. I'll show you how to automate testing on each type of branch. I'll also be introducing the concept of branching workflows, and show a few examples you can adopt in your projects.


To follow along with this tutorial, you only need basic Node.js skills. You also need to be conversant with Git. Here are a couple of articles to help you out:

Git for Beginners
Git for Teams
Our book, Jump Start Git

In order to set up our pipelines, we will need to write a few tests using Jest. You don't need to learn Jest if you are new to it — the focus of this article is learning how to set up pipelines that automatically pick up new branches and build them for you. Before we get to that, we should look into the various branch strategies we can use.
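For reference, a Jest test has the shape sketched below. The tiny test/expect stand-ins (and the sum helper) are hypothetical and only there so the snippet runs with plain Node; in a real project Jest provides test and expect for you.

```javascript
// Minimal stand-ins for Jest's API so this sketch runs with plain Node.
const test = (name, fn) => { fn(); console.log(`ok - ${name}`); };
const expect = (actual) => ({
  toBe: (expected) => {
    if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
  },
});

// Hypothetical module under test.
const sum = (a, b) => a + b;

// This is exactly what a Jest test file looks like.
test('adds 1 + 2 to equal 3', () => {
  expect(sum(1, 2)).toBe(3);
});
```

In a real project this would live in sum.test.js and run via `npx jest`, which is the command the CI pipeline will execute on each branch.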

Zero Branch Strategy


The Zero Branch Strategy is simply a fancy way of saying "you are not using any branch strategy." It's also known as a basic workflow. You only have a master branch where you directly commit and build your releases. This strategy is convenient and good if the project is:

Small and simple
Hardly requires updates
Managed by a solo developer

Such projects include tutorials, demos, prototypes, starter project templates and personal projects. However, there are several cons to this approach:

Multiple merge conflicts will likely occur if more than one person is working on the project
You won't be able to develop multiple features and fix issues concurrently
Removing and restoring features will be a difficult task
Your team will spend too much time dealing with version control issues instead of working on new features

All these issues can be resolved by adopting a branch strategy. This should give you:

Ability to work independently and push changes to the shared repository without affecting your team members
Ability to merge your teammates' code with your changes and quickly resolve any conflicts that may come up
Assurance that code standards are maintained and collaboration efforts run smoothly regardless of the size of your team

Do note that there are many types of branch workflows you are free to pick. You can also create your own custom branch workflow that works best for you. Let's start with the simplest branch strategy.

Develop Branch Strategy


In this strategy, you set up a long-living branch called develop that runs alongside the master branch. All work is committed first to the develop branch. This is a safe place where you can introduce new code that might break your project. You'll need a testing strategy in place in order to ensure that you don't introduce bugs to the master branch when you merge the changes.

The pros of this workflow are:

The post How to Use Git Branches & Buddy to Organize Project Code appeared first on SitePoint.

Is the Rise of Millennial Women in Tech Just an Illusion?

Jul 8, 2019


woman with computer

This article was created in partnership with the Developer Economics Survey. Thank you for supporting the partners who make SitePoint possible.

The latest Developer Economics survey is upon us again, and as always, we highly recommend that everyone participates. It’s an excellent opportunity to express your views about what’s happening in the world of web development, and it helps paint a cohesive picture about the landscape.

In the last survey, published April 2019, we garnered a lot of interesting insights into the modern dev at work. Of the participants in the last survey, 9% were women, suggesting a global population of 1.7 million women developers versus 17 million men. However, the report also found that 36% of women developers were under the age of 35, versus 33% of men. Compare this with the survey’s other finding that 37% of male developers are over 35 years of age, as compared to 29% of women in the same age bracket. This indicates that younger generations of women are increasingly moving towards a career in development. Hopefully in the next few years we’ll start seeing parity between male and female developers in more senior roles. Currently men are almost three times more likely to hold senior or C-suite positions than women.

However, as the report also notes, a less optimistic reading of the data may be that women “have always been involved, but tend to leave software development as they get older, either by choice or necessity.”

The post Is the Rise of Millennial Women in Tech Just an Illusion? appeared first on SitePoint.

SitePoint Premium New Releases: Modern JavaScript, Kanban + DevTools

Jul 5, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

The Versioning Guide to Modern JavaScript

A guided tour of the breadth of modern JavaScript, including frameworks, state management, GraphQL, Node, Electron, design patterns, tools, testing and a lot more.

➤ Read The Versioning Guide to Modern JavaScript.

Browser Devtool Secrets

Browser development tools have evolved from basic consoles to fully integrated development environments. It’s become possible to alter and inspect any aspect of your web application, but few of us venture beyond the basics. In this guide, we’ll explore the features you may not have considered.

➤ Read Browser Devtool Secrets.

Practical Kanban

This book will give you practical answers to these questions: Are we using Kanban properly? How can we improve our Kanban? How can we scale our Kanban? How can our work become more predictable? How can we prioritize?

➤ Read Practical Kanban.

And More to Come…

We're releasing new content on SitePoint Premium almost every day, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Modern JavaScript, Kanban + DevTools appeared first on SitePoint.

A Beginner’s Guide to Working With Components in Vue

Jul 5, 2019


A Beginner’s Guide to Working With Components in Vue

One of the great things about working with Vue is its component-based approach to building user interfaces. This allows you to break your application into smaller, reusable pieces (components) which you can then use to build out a more complicated structure.

In this guide, I’ll offer you a high-level introduction to working with components in Vue. I’ll look at how to create components, how to pass data between components (via both props and an event bus) and how to use Vue’s <slot> element to render additional content within a component.

Each example will be accompanied by a runnable CodePen demo.

How to Create Components in Vue

Components are essentially reusable Vue instances with a name. There are various ways to create components within a Vue application. For example, in a small- to medium-sized project you can use the Vue.component method to register a global component, like so:

Vue.component('my-counter', {
  data() {
    return {
      count: 0
    }
  },
  template: `<div>{{ count }}</div>`
})

new Vue({
  el: '#app'
})

The name of the component is my-counter. It can be used like so:

<div id="app">
  <my-counter></my-counter>
</div>

When naming your component, you can choose kebab case (my-custom-component) or Pascal case (MyCustomComponent). You can use either variation when referencing your component from within a template, but when referencing it directly in the DOM (as in the example above), only the kebab case tag name is valid.

You might also notice that, in the example above, data is a function which returns an object literal (as opposed to being an object literal itself). This is so that each instance of the component receives its own data object and doesn’t have to share one global instance with all other instances.
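The difference can be sketched in plain JavaScript (the names here are hypothetical; Vue does the equivalent internally each time it creates a component instance):

```javascript
// An object literal would be ONE state shared by every instance...
const sharedState = { count: 0 };
const compA = sharedState;
const compB = sharedState;
compA.count++;
console.log(compB.count); // 1 -- compB was mutated too!

// ...whereas a data function hands each instance a fresh object.
const freshState = () => ({ count: 0 });
const compC = freshState();
const compD = freshState();
compC.count++;
console.log(compD.count); // 0 -- compD keeps its own state
```

This is why Vue warns you if a component's data option is a plain object rather than a function.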

There are several ways to define a component template. Above, we are using a template literal, but we could also use a <script> tag marked with text/x-template, or an in-DOM template. You can read more about the different ways of defining templates here.

Single-file Components

In more complex projects, global components can quickly become unwieldy. In such cases, it makes sense to craft your application to use single-file components. As the name suggests, these are single files with a .vue extension, which contain a <template>, <script> and <style> section.

For our example above, an App component might look like this:

<template>
  <div id="app">
    <my-counter></my-counter>
  </div>
</template>

<script>
import myCounter from './components/myCounter.vue'

export default {
  name: 'app',
  components: {
    myCounter
  }
}
</script>

<style></style>

And a MyCounter component might look like this:

<template>
  <div>{{ count }}</div>
</template>

<script>
export default {
  name: 'my-counter',
  data() {
    return {
      count: 0
    }
  }
}
</script>

<style></style>

As you can see, when using single-file components, it’s possible to import and use these directly within the components where they’re needed.

In this guide, I’ll present all of the examples using the Vue.component() method of registering a component.

Using single-file components generally involves a build step (for example, with Vue CLI). If this is something you’d like to find out more about, please check out “A Beginner’s Guide to Vue CLI” in this Vue series.

Passing Data to Components Via Props

Props enable us to pass data from a parent component to a child component. This allows us to split our components into smaller chunks, each handling a specific piece of functionality. For example, if we have a blog component, we might want to display information such as the author’s details, post details (title, body and images) and comments.

We can break these into child components, so that each component handles specific data, making the component tree look like this:

<BlogPost>
  <AuthorDetails></AuthorDetails>
  <PostDetails></PostDetails>
  <Comments></Comments>
</BlogPost>

If you’re still not convinced about the benefits of using components, take a moment to realize how useful this kind of composition can be. If you were to revisit this code in the future, it would be immediately obvious how the page is structured and where (that is, in which component) you should look for which functionality. This declarative way of composing an interface also makes it much easier for someone who isn’t familiar with a codebase to dive in and become productive quickly.

Since all the data will be passed down from the parent component, the parent can look like this:

new Vue({
  el: '#app',
  data() {
    return {
      author: {
        name: 'John Doe',
        email: ''
      }
    }
  }
})

In the above component, we have the author details defined. Next, we have to create the child component. Let’s call the child component author-detail. So our HTML template will look like this:

<div id="app">
  <author-detail :owner="author"></author-detail>
</div>

We’re passing the child component the author object as props with the name owner. It’s important to note the difference here. In the child component, owner is the name of the prop with which we receive the data from the parent component. The data we want to receive is called author, which we’ve defined in our parent component.

To have access to this data, we need to declare the props in the author-detail component:

Vue.component('author-detail', {
  template: `
    <div>
      <h2>{{ owner.name }}</h2>
      <p>{{ owner.email }}</p>
    </div>
  `,
  props: ['owner']
})

We can also enable validation when passing props, to make sure the right data is being passed. This is similar to PropTypes in React. To enable validation in the above example, change our component to look like this:

Vue.component('author-detail', {
  template: `
    <div>
      <h2>{{ owner.name }}</h2>
      <p>{{ owner.email }}</p>
    </div>
  `,
  props: {
    owner: {
      type: Object,
      required: true
    }
  }
})

If you pass the wrong prop type, you’ll see an error in your console like the one below:

"[Vue warn]: Invalid prop: type check failed for prop 'text'. Expected Boolean, got String. (found in component <>)"

There’s an official guide in the Vue docs that you can use to learn about prop validation.

See the Pen Vue Componets - Props by SitePoint (@SitePoint) on CodePen.

The post A Beginner’s Guide to Working With Components in Vue appeared first on SitePoint.

An Introduction to Cloudflare Workers

Jul 3, 2019


An Introduction to Cloudflare Workers

Cloud computing in its various incarnations — SaaS, PaaS, IaaS — has had big successes. Some of us still recall the $212 million purchase of PaaS provider Heroku in 2010, which at that time was — architecturally speaking — little more than a high-level deployment layer. It had a very posh gem for smooth and easy deployment of RoR, Python or Node apps running on Amazon's infrastructure. The concept of Serverless Computing was born.

There have been a host of different models for cloud products ever since. Various experiments have come and gone as providers look for the sweet spot, with proliferation continuing and new terms being born, like BaaS and MBaaS.

Protocol Labs, a crypto startup looking to redefine the cloud model, collected $257 million in its 2017 ICO, breaking all records. Airtable, with its high-level, spreadsheet-meets-database products and API, reached a $1.1 billion valuation in its 2018 financing round.

Serverless Computing

Serverless computing is a subset of cloud computing that does away with the classical server product, providing developers with a high-level environment to run their code, charged on an as-used basis, and freeing those developers from worrying about the underlying software stack.

Serverless computing has allowed further flexibility in paying for used processing power, rather than paying for pre-allocated packages as with classical cloud.

The term “serverless” is semantically wrong, because the code is still executed on a server, but users conceptually don't have to deal with servers anymore. Provided certain conventions are adhered to, the underlying stack, and all the infrastructure and deployment issues, are handled by vendors.

The main type of product that sprang out from this is FaaS — a cloud execution environment, or a runtime that allows deployment of code without any boilerplate. Amazon's Lambda, Oracle Fn and Alibaba's Function Compute are some examples.
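To make the FaaS model concrete, here's a minimal sketch in the shape Cloudflare Workers use (the Service Worker-style fetch API); the greeting text is purely illustrative:

```javascript
// One function receives a request and produces a response body — no
// servers, ports or process management to configure.
function greet(url) {
  // Plain logic, easy to test outside the Workers runtime.
  return `Hello from the edge! You asked for ${new URL(url).pathname}`
}

// Wiring for the Workers runtime, where `addEventListener` and
// `Response` are provided as globals; the guard lets this file load
// in other environments too.
if (typeof addEventListener === 'function') {
  addEventListener('fetch', event => {
    event.respondWith(
      new Response(greet(event.request.url), {
        headers: { 'content-type': 'text/plain' }
      })
    )
  })
}
```

Deploying this single file is the entire "server": the vendor handles routing, scaling and the underlying stack.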


Cloudflare is a San Francisco company that was started nine years ago. It's a content delivery network that serves static assets for websites from its global network of edge nodes. It also provides firewall and DDoS protection, and has a reputation for the fastest DNS service on the internet.

When talking about Cloudflare, and what it brings to the arena of serverless computing, it’s necessary to add one more term to the list of cloud buzzwords we used here — edge computing.

As explained on Wikipedia:

The post An Introduction to Cloudflare Workers appeared first on SitePoint.

Top 8 Portfolio WordPress Themes for Creatives in 2019

Jul 2, 2019


This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Creating a decent portfolio requires covering a lot of bases. Showcasing your work may actually be the easiest part, while ensuring that the text accompanying each piece gives the right level of context can be a challenge. It has to catch people's attention without shifting focus away from the work itself.

There's no shortage of themes you could use to create a decent portfolio website, but if you're not certain as to the functionality you might require, finding a satisfactory one can be difficult. Your search can be even more difficult if your goal is to create a portfolio website that's much more than "decent".

You need a theme that provides a simple way to organize your work, one that's bundled with a good combination of plugins, and one that will enable you to create something awesome you can share with the world.

There are lots of WordPress themes available, including the best-in-class themes in the list we've prepared.

1. Be Theme

Be Theme has all the features, functionalities, and design options web designers will ever need to build high-quality portfolio websites, and any other type of website for that matter. Thanks to its selection of 450+ customizable pre-built websites and its library of 200+ shortcodes plus a shortcode generator, you can create an awesome portfolio website in 4 hours or less with absolutely no need for coding.

The post Top 8 Portfolio WordPress Themes for Creatives in 2019 appeared first on SitePoint.

Talk Tech with Us in Our New Discord Community

Jul 2, 2019


Want a place to chat about coding, design, the web, and technology at large with likeminded people?

Or perhaps you work remotely and need a community that can travel with you?

We're opening our Discord to the public today. We wanted to create a casual place where we could chat about cool tech and getting stuff done, without the exhibitionism and divisive atmosphere of social media, but with that real-time sense of community.

Our server is a baby server, and there's a lot of work to be done, but for now we have rooms for all sorts of conversation. Whether you want to puzzle out web development issues or talk games, we've got you covered.

Sign up to our Discord and start chatting with SitePoint staff, members, and the wider developer community!

It's 100% free. Come on in and have a chat with us!

⚡️ Join us for a chat now.

The post Talk Tech with Us in Our New Discord Community appeared first on SitePoint.

The Precarious Nature of Running a Digital Publication in 2019

Jul 2, 2019


Clickbait is not your friend

This article was created in partnership with Proper Media. Thank you for supporting the partners who make SitePoint possible.

Recently, there have been countless articles and think pieces written about the decline of publishing. These articles often illustrate how difficult it is for publishers to monetize what they do to eke out a sustainable business model.

In reality, advertising revenue is down for independent publishers across the board. More people are using advanced ad blocking software, which is impacting the number of ads publishers can show. Plus there are two giant elephants in the room — Google and Facebook. These two entities combined account for almost 60% of the total advertising spend online. This duopoly uses third-party publisher content to bring in advertising revenue, but leaves very little revenue for the publishers themselves. For better or worse, they also have the scale to very effectively monetize their user base, which a smaller publisher just can’t compete with. Facebook, for example, has an average CPM of around $8 compared to an average CPM of $1 for third-party publishers. As the online advertising world evolves, publishers will face new obstacles to monetize their websites.

Top 5 companies, ranked by US net digital ad revenue share, 2018 & 2019. (Source: eMarketer)

To overcome these difficulties, publishers have all tried different ways to generate revenue in this new landscape; the New York Times and many others have tried to sell premium subscriptions to their users by hiding their content behind paywalls. Another example of subscription-based revenue is Apple News+, which is an attempt to distribute some subscription revenue to publishers, albeit after Apple takes its healthy cut. However, consumers are still very unlikely to pay for content online. A study by Reuters Institute and Oxford University recently found that only 13% of people in the US pay for an ongoing news solution.

Paywalls have worked effectively for some publishers like the New York Times, where they have seen year-on-year revenue growth for their premium subscription product. But this is just not feasible for a lot of others, who are seeing a backlash from users to this system. In addition to being ineffective for many publishers, paywalls are inherently at odds with a publisher’s goals. Publishers want to produce content that educates and informs the largest number of people, which is the opposite of how a paywall operates. With these constraints around subscription-based website monetization, traditional display advertising has remained a large part of overall publisher revenue, including ours.

The simple fact is advertising has always been integral to an online publisher like us—it is how we here at SitePoint keep everything running and pay for our writers to produce interesting content that you come to read. We, like all other businesses, have to generate revenue. We’ve faced the same challenges over the years and the same struggle to strike a balance between generating revenue with advertising (in order to continue to produce a plethora of free, educational content), while creating a user experience that allows our readers to digest and enjoy that content without annoyance. We admittedly do not always get this balance right.

Currently, we’re teamed up with Proper Media as our programmatic revenue partner, in hopes of finding that balance. The way it works is that we leverage Proper’s header bidding solution to monetize the ad slots on our article pages. Working with Proper allows us to get the highest CPMs by creating competition for our inventory across all the top advertising demand available (e.g. Amazon’s A9, Google Adx, and AOL’s OATH). Additionally, they take care of the direct demand for ad sales. Gone are the days when advertisers bought large advertising blocks from individual publishers — nowadays, it is all run programmatically.

In short, Proper handles the monetization so we can focus on producing great content. They ensure that our ad revenue is as high as possible. They have continuous data-driven optimization and granular real-time reporting which ensure that we are always getting the best yield for our ad units. They also handle all of the receivables and deliver a consolidated payment that is on faster terms than all major exchanges, which really helps the cash flow of a small publisher like us.

The post The Precarious Nature of Running a Digital Publication in 2019 appeared first on SitePoint.

Build a Real-time Chat App with Pusher and Vue.js

Jul 1, 2019


Build a Real-time Chat App with Pusher and Vue.js

Apps that communicate in real time are becoming more and more popular nowadays, as they make for a smoother, more natural user experience.

In this tutorial, we’re going to build a real-time chat application using Vue.js powered by ChatKit, a service provided by Pusher. The ChatKit service will provide us with a complete back end necessary for building a chat application on any device, leaving us to focus on building a front-end user interface that connects to the ChatKit service via the ChatKit client package.


This is an intermediate- to advanced-level tutorial. You’ll need to be familiar with the following concepts to follow along:

- Vue.js basics
- Vuex fundamentals
- employing a CSS framework

You’ll also need Node installed on your machine. You can do this by downloading the binaries from the official website, or by using a version manager. This is probably the easiest way, as it allows you to manage multiple versions of Node on the same machine.

Finally, you’ll need to install Vue CLI globally with the following command:

npm install -g @vue/cli

At the time of writing, Node 10.14.1 and Vue CLI 3.2.1 are the latest versions.

About the Project

We’re going to build a rudimentary chat application similar to Slack or Discord. The app will do the following:

- have multiple channels and rooms
- list room members and detect presence status
- detect when other users start typing

As mentioned earlier, we’re just building the front end. The ChatKit service has a back-end interface that allows us to manage users, permissions and rooms.

You can find the complete code for this project on GitHub.

Setting up a ChatKit Instance

Let’s create our ChatKit instance, which is similar to a server instance if you’re familiar with Discord.

Go to the ChatKit page on Pusher’s website and click the Sign Up button. You’ll be prompted for an email address and password, as well as the option to sign in with GitHub or Google.

Select which option suits you best, then on the next screen fill out some details such as Name, Account type, User role etc.

Click Complete Onboarding and you’ll be taken to the main Pusher dashboard. Here, you should click the ChatKit Product.

The ChatKit dashboard

Click the Create button to create a new ChatKit Instance. I’m going to call mine VueChatTut.

Creating a new ChatKit instance

We’ll be using the free plan for this tutorial. It supports up to 1,000 unique users, which is more than sufficient for our needs. Head over to the Console tab. You’ll need to create a new user to get started. Go ahead and click the Create User button.

Creating a ChatKit user

I’m going to call mine “john” (User Identifier) and “John Wick” (Display Name), but you can name yours however you want. The next part is easy: create two or more users. For example:

- salt, Evelyn Salt
- hunt, Ethan Hunt

Create three or more rooms and assign users. For example:

- General (john, salt, hunt)
- Weapons (john, salt)
- Combat (john, hunt)

Here’s a snapshot of what your Console interface should look like.

A snapshot of the console

Next, you can go to the Rooms tab and create a message using a selected user for each room. This is for testing purposes. Then go to the Credentials tab and take note of the Instance Locator. We’ll need to activate the Test Token Provider, which is used for generating our HTTP endpoint, and take a note of that, too.

Test token

Our ChatKit back end is now ready. Let’s start building our Vue.js front end.

Scaffolding the Vue.js Project

Open your terminal and create the project as follows:

vue create vue-chatkit

Select Manually select features and answer the questions as shown below.

Questions to be answered

Make doubly sure you’ve selected Babel, Vuex and Vue Router as additional features. Next, create the following folders and files as follows:

Files and folders to create

Make sure to create all the folders and files as demonstrated. Delete any unnecessary files that don’t appear in the above illustration.

For those of you that are at home in the console, here are the commands to do all that:

mkdir src/assets/css
mkdir src/store
touch src/assets/css/{loading.css,loading-btn.css}
touch src/components/{ChatNavBar.vue,LoginForm.vue,MessageForm.vue,MessageList.vue,RoomList.vue,UserList.vue}
touch src/store/{actions.js,index.js,mutations.js}
touch src/views/{ChatDashboard.vue,Login.vue}
touch src/chatkit.js
rm src/components/HelloWorld.vue
rm src/views/{About.vue,Home.vue}
rm src/store.js

When you’re finished, the contents of the src folder should look like so:

.
├── App.vue
├── assets
│   ├── css
│   │   ├── loading-btn.css
│   │   └── loading.css
│   └── logo.png
├── chatkit.js
├── components
│   ├── ChatNavBar.vue
│   ├── LoginForm.vue
│   ├── MessageForm.vue
│   ├── MessageList.vue
│   ├── RoomList.vue
│   └── UserList.vue
├── main.js
├── router.js
├── store
│   ├── actions.js
│   ├── index.js
│   └── mutations.js
└── views
    ├── ChatDashboard.vue
    └── Login.vue

You can find the loading-btn.css and loading.css files on their website; they’re not available in the npm repository, so you’ll need to download them manually and place them in your project. Do make sure to read the documentation to get an idea of what they are and how to use the customizable loaders.

Next, we’re going to install the following dependencies:

- @pusher/chatkit-client, a real-time client interface for the ChatKit service
- bootstrap-vue, a CSS framework
- moment, a date and time formatting utility
- vue-chat-scroll, which scrolls to the bottom automatically when new content is added
- vuex-persist, which saves Vuex state in the browser’s local storage

npm i @pusher/chatkit-client bootstrap-vue moment vue-chat-scroll vuex-persist

Do check out the links to learn more about what each package does, and how it can be configured.

Now, let’s configure our Vue.js project. Open src/main.js and update the code as follows:

import Vue from 'vue'
import BootstrapVue from 'bootstrap-vue'
import VueChatScroll from 'vue-chat-scroll'
import App from './App.vue'
import router from './router'
import store from './store/index'

import 'bootstrap/dist/css/bootstrap.css'
import 'bootstrap-vue/dist/bootstrap-vue.css'
import './assets/css/loading.css'
import './assets/css/loading-btn.css'

Vue.config.productionTip = false

Vue.use(BootstrapVue)
Vue.use(VueChatScroll)

new Vue({
  router,
  store,
  render: h => h(App)
}).$mount('#app')

Update src/router.js as follows:

import Vue from 'vue'
import Router from 'vue-router'
import Login from './views/Login.vue'
import ChatDashboard from './views/ChatDashboard.vue'

Vue.use(Router)

export default new Router({
  mode: 'history',
  base: process.env.BASE_URL,
  routes: [
    {
      path: '/',
      name: 'login',
      component: Login
    },
    {
      path: '/chat',
      name: 'chat',
      component: ChatDashboard
    }
  ]
})

Update src/store/index.js:

import Vue from 'vue'
import Vuex from 'vuex'
import VuexPersistence from 'vuex-persist'
import mutations from './mutations'
import actions from './actions'

Vue.use(Vuex)

const debug = process.env.NODE_ENV !== 'production'

const vuexLocal = new VuexPersistence({
  storage: window.localStorage
})

export default new Vuex.Store({
  state: {},
  mutations,
  actions,
  getters: {},
  plugins: [vuexLocal.plugin],
  strict: debug
})

The vuex-persist package ensures that our Vuex state is saved between page reloads or refreshes.
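By default vuex-persist saves the entire store. If you only want part of the state to survive reloads, it also accepts a reducer option. Here's a sketch; the user/rooms state shape is hypothetical, not from this tutorial:

```javascript
// The reducer receives the full Vuex state and returns only the slice
// that should be written to localStorage.
const persistUserOnly = state => ({ user: state.user })

// It would be wired in like so:
// const vuexLocal = new VuexPersistence({
//   storage: window.localStorage,
//   reducer: persistUserOnly
// })

const fullState = { user: { name: 'john' }, rooms: ['General', 'Weapons'] }
console.log(persistUserOnly(fullState)) // { user: { name: 'john' } }
```

Keeping transient data (like typing indicators) out of persistence avoids stale UI state after a refresh.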

Our project should be able to compile now without errors. However, don’t run it just yet, as we need to build the user interface.

The post Build a Real-time Chat App with Pusher and Vue.js appeared first on SitePoint.

SitePoint Premium New Releases: Dev Tools, C# & Kubernetes

Jun 28, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Developer Essentials: Tools

In this short collection, we round up some of the best developer tools available, and provide some tips on how to improve your workflow with Gulp, and how to write better JavaScript.

➤ Read Developer Essentials: Tools.

Beginning C# 7 Programming with Visual Studio 2017

This book gets you started with C# 7 and Visual Studio 2017, covering the fundamentals of the language, from variables, flow control and object-oriented programming through to debugging and building complete desktop and web applications.

➤ Read Beginning C# 7 Programming with Visual Studio 2017.

Kubernetes, Microservices and DevOps

A guided tour of container orchestration with Kubernetes. This Versioning Guide provides a guided reading list, curated by Versioning maestro Adam Roberts. It will cover installation, objects, cluster interaction, deployment and much more.

➤ Read Kubernetes, Microservices and DevOps.

And More to Come…

We're releasing new content on SitePoint Premium almost every day, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Dev Tools, C# & Kubernetes appeared first on SitePoint.

The 9 Best Mind Mapping Tools for Designers

Jun 27, 2019


A Roundup of the Best Mind Mapping Tools

A mind map is a diagram drawn to help brainstorm ideas without being forced to organize or structure them. Instead, ideas are visually depicted in a hierarchical structure showing the flow and relationship between various ideas as they arise, which allows us to analyze them and recall them with ease.

Let’s take a look at the types of mind maps that are used in UX, and the mind mapping tools that are used to create them.

The Benefits of Mind Mapping

So, why mind map?

Ideation is exciting. Ideas here, ideas there. However, the enthusiasm to conceptualize them can be so invigorating that we don’t take the time to develop upon them or consider other ideas.

Rushing into a concept can take us down a road that’s seemingly harmless but actually quite dangerous. It might get us to the finish line, which feels great, but with an end result that’s a mashup of random thoughts and ideas that don’t work together.

By mapping out ideas, we can better understand their:

- value (what’s to be gained by exploring this idea?)
- role (how does the idea fit into the bigger picture?)
- relationships (how does the idea relate to other ideas?)

Eventually we can start to organize these ideas by similarity using a methodology called affinity mapping, which in turn helps us design user-centric mockups and wireframes as opposed to being driven by personal opinion.

How Mind Maps Are Used in UX

Mind maps can evolve into other types of maps with more specific uses. For example, maps that explore the navigational hierarchy and user flows of software systems such as websites, apps, and so on, are referred to as sitemaps. Maps that explore the numerous ways in which customers might interact with a product are called customer journey maps.

Both of these are useful when planning UX design projects — sitemaps for planning wireframes, and customer journey maps for optimizing the online and offline customer experience.

Coggle

Pricing: $0, $5, or $8 (/month)
Platform: web
Pros: simple features, real-time collaboration
Cons: looks a tad outdated, only accomplishes mind mapping

While many old-school mind mapping tools have become tragically outdated over the years, Coggle is one that’s managed to survive by keeping its focus solely on mind mapping and sporting a fairly simple user interface.

a Coggle mind map

It still looks somewhat dated, but nonetheless is much better looking than veteran tools like FreeMind (which hasn’t been updated in at least four years) and Mind Manager (which looks like it came bundled with Windows 95).

Check out the Coggle Mind Map Gallery, especially the mind map that describes the various types of emotion.

Notice how each emotion is divided by color, then further divided into more specific emotions depicted with capital letters, and then divided once more in a smaller font. It’s totally up to you how you visually organize your thoughts and relationships. Coggle lets us explore ideas using images, branches, loops, shapes, and whatever else we need to explore our story.

Stakeholders can weigh in by commenting on mind maps, but also collaborate in real time as if using a whiteboard.

Apart from real-time collaboration, these features are standard and are included in every other tool in this list.

TL;DR: Coggle is everything you need, and nothing you don’t.

XMind

Pricing: $1.24, $4.58, or $4.99 (/month)
Platform: web, iOS, Android, macOS, Linux, Windows
Pros: beautiful maps, very modern, excellent UX
Cons: only mind mapping, no real-time collaboration

Other than Coggle, XMind is the only mind mapping tool to withstand the test of time, these days taking inspiration from critically acclaimed screen design tools like Sketch to offer a mighty mind mapping experience, but still with a minimalist and intuitive user interface. XMind mind maps look stunning, and while there’s no real-time collaboration, the maps can be shared with stakeholders and exported in a variety of formats.

An XMind map

If you’re looking for a modern-looking mind mapping tool without the bells and whistles, XMind is more than suitable.

Tip: try “Zen Mode” to remove all UI distractions!

The post The 9 Best Mind Mapping Tools for Designers appeared first on SitePoint.

How to Get Started with Vuetify

Jun 26, 2019


In this article, you’ll learn how you can quickly build an attractive and interactive frontend using Vuetify. Building a friendly application interface with a great user experience is a skill that requires practice and knowledge. While Vuetify won’t make you a skilled UX practitioner overnight, it will help provide a solid start to those who are new in this area.

As a Vue.js developer, there are many fully-featured CSS frameworks specifically developed for Vue that you can take advantage of. One great example is Bootstrap-Vue. I have used it, and it really does make building components easier than just using traditional CSS frameworks. However, you may want to give your apps a Material Design look and feel to make them familiar to new users.

According to the makers of Material Design:

"Material Design isn't a single style. It's an adaptable design system inspired by paper and ink. And engineered so you can build beautiful, usable products faster."

I hope I now have your attention with that powerful statement. Currently, Vuetify is the most complete user interface component library for Vue applications that follows the Google Material Design specs. Let's quickly dive in and look at how you can get started.


This guide is written for developers who have intermediate or advanced knowledge of Vue.js. If you have never used Vue.js to build applications, please check out these articles:

- Jump Start Vue, our complete introduction to Vue.js
- Getting Started with Vue.js — a quick primer
- Getting up and Running with the Vue.js 2.0 Framework
- More Vue.js Articles

What is Vuetify?

Vuetify is an open source MIT project for building user interfaces for web and mobile applications. It is a project that is backed by sponsors and volunteers from the Vue community. The project is supported by a vibrant Discord community forum where you can ask JavaScript questions — even if they're not about Vuetify. The development team is committed to fixing bugs and providing enhancements through consistent update cycles. There are also weekly patches to fix issues that the community raises.

Most open-source frontend libraries don't get this level of attention. So you can be confident that when you start using Vuetify in your projects, you won't be left hanging without support in the future. Vuetify supports all major browsers out of the box. Older browsers such as IE11 and Safari 9 can work too but will require babel-polyfill. Anything older than that is not supported. Vuetify is built to be semantic. This means that every component and prop name you learn will be easy to remember and re-use without frequently checking the documentation.

Vuetify also comes with free/premium themes and pre-made layouts you can use to quickly theme your application. At the time of writing, Vuetify v1.5.13 is the current version, which utilizes Material Design Spec v1. Version 2.x of Vuetify will utilize Material Design Spec v2 which will soon be made available. Let's go over to the next section to see a couple of ways we can install Vuetify into our projects.

Installing Vuetify

If you already have an existing Vue project that was created with an older version of Vue CLI tool or some other way, you can simply install Vuetify as follows:
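As a rough sketch of what that installation typically looks like for Vuetify 1.x (an assumption here, since steps vary by version — consult the official Vuetify docs for yours): install the package with npm install vuetify --save, then register the plugin in your entry file:

```javascript
// src/main.js — registering Vuetify as a Vue plugin (Vuetify 1.x style)
import Vue from 'vue'
import Vuetify from 'vuetify'
import 'vuetify/dist/vuetify.min.css' // Vuetify's compiled styles

Vue.use(Vuetify)
```

After this, Vuetify's components (v-btn, v-card, and so on) are available globally in your templates.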

The post How to Get Started with Vuetify appeared first on SitePoint.

A Beginner’s Guide to Vue CLI

Jun 25, 2019


A Beginner’s Guide to Vue CLI

When building a new Vue app, the best way to get up and running quickly is to use Vue CLI. This is a command-line utility that allows you to choose from a range of build tools, which it will then install and configure for you. It will also scaffold out your project, providing you with a pre-configured starting point that you can build on, rather than starting everything from scratch.

The most recent version of Vue CLI is version 3. It provides a new experience for Vue developers and helps them start developing Vue apps without dealing with the complex configuration of tools like webpack. At the same time, it can be configured and extended with plugins for advanced use cases.

Vue CLI v3 is a complete system for rapid Vue.js development and prototyping. It’s composed of different components, such as the CLI service, CLI plugins and recently a web UI that allows developers to perform tasks via an easy-to-use interface.

Throughout this article, I’ll introduce the latest version of Vue CLI and its new features. I’ll demonstrate how to install the latest version of Vue CLI and how to create, serve and build an example project.

Vue CLI v3 Installation and Requirements

In this section, we’ll look at the requirements needed for Vue CLI v3 and how to install it.


Let’s start with the requirements. Vue CLI v3 requires Node.js 8.9+, but v8.11.0+ is recommended.

You can install the latest version of Node.js in various ways:

- By downloading the binaries for your system from the official website.
- By using the official package manager for your system.
- By using a version manager. This is probably the easiest way, as it allows you to manage multiple versions of Node on the same machine. If you’d like to find out more about this approach, please see our quick tip, Installing Multiple Versions of Node.js Using nvm.

Vue creator, Evan You, described version 3 of the CLI as a “completely different beast” from its predecessor. As such, it’s important to uninstall any previous version of the CLI (that is, 2.x.x) before proceeding with this tutorial.

If the vue-cli package is installed globally on your system, you can remove it by running the following command:

npm uninstall vue-cli -g

Installing Vue CLI v3

You can now install Vue CLI v3 by simply running the following command from your terminal:

npm install -g @vue/cli

Note: if you find yourself needing to add sudo before your command in macOS or Debian-based systems, or to use an administrator CMD prompt in Windows in order to install packages globally, then you should fix your permissions. The npm site has a guide on how to do this, or just use a version manager and you avoid the problem completely.

After successfully installing the CLI, you’ll be able to access the vue executable in your terminal.

For example, you can list all the available commands by executing the vue command:


You can check the version you have installed by running:

vue --version
3.2.1

Creating a Vue Project

After installing Vue CLI, let’s now look at how we can use it to quickly scaffold complete Vue projects with a modern front-end toolset.

Using Vue CLI, you can create or generate a new Vue app by running the following command in your terminal:

vue create example-vue-project

Tip: example-vue-project is the name of the project. You can obviously choose any valid name for your project.

The CLI will prompt you for the preset you want to use for your project. One option is to select the default preset which installs two plugins: Babel for transpiling modern JavaScript, and ESLint for ensuring code quality. Or you can manually select the features needed for your project from a set of official plugins. These include:

- Babel
- TypeScript
- Progressive Web App support
- Vue Router
- Vuex (Vue’s official state management library)
- CSS Pre-processors (PostCSS, CSS modules, Sass, Less & Stylus)
- Linter/Formatter using ESLint and Prettier
- Unit Testing using Mocha or Jest
- E2E Testing using Cypress or Nightwatch

Whatever you choose, the CLI will download the appropriate libraries and configure the project to use them. And if you choose to manually select features, at the end of the prompts you’ll also have the option to save your selections as a preset so that you can reuse it in future projects.

Now let’s look at the other scripts for serving the project (using a webpack development server and hot module reloading) and building the project for production.

Navigate inside your project’s folder:

cd example-vue-project

Next, run the following command to serve your project locally:

npm run serve

The command will allow you to run a local development server from the http://localhost:8080 address. If you use your web browser to navigate to this address, you should see the following page:

Welcome to Your Vue.js App

The development server supports features like hot code reloading, which means you don’t need to stop and start your server every time you make any changes to your project’s source code. It will even preserve the state of your app!

And when you’ve finished developing your project, you can use the following command to build a production bundle:

npm run build

This will output everything to a dist folder within your project. You can read more about deployment here.

What is the Vue CLI Service?

The Vue CLI Service is a run-time dependency (@vue/cli-service) that abstracts webpack and provides default configurations. It can be upgraded, configured and extended with plugins.

It provides multiple scripts for working with Vue projects, such as the serve, build and inspect scripts.

We’ve seen the serve and build scripts in action already. The inspect script allows you to inspect the webpack config in a project with vue-cli-service. Try it out:

vue inspect

As you can see, that produces a lot of output. Later on we’ll see how to tweak the webpack config in a Vue CLI project.

The Project Anatomy

A Vue project generated with the CLI has a predefined structure that adheres to best practices. If you choose to install any extra plugins (such as the Vue router), the CLI will also create the files necessary to use and configure these libraries.

Let’s take a look at the important files and folders in a Vue project when using the default preset.

public. This folder contains public files like index.html and favicon.ico. Any static assets placed here will simply be copied and not go through webpack.
src. This folder contains the source files for your project. Most work will be done here.
src/assets. This folder contains the project’s assets such as logo.png.
src/components. This folder contains the Vue components.
src/App.vue. This is the main Vue component of the project.
src/main.js. This is the main project file which bootstraps the Vue application.
babel.config.js. This is a configuration file for Babel.
package.json. This file contains a list of the project’s dependencies, as well as the configuration options for ESLint, PostCSS and supported browsers.
node_modules. This folder contains the installed npm packages.

This is a screenshot of the project’s anatomy:

Project anatomy

Vue CLI Plugins

Vue CLI v3 is designed with a plugin architecture in mind. In this section, we’ll look at what plugins are and how to install them in your projects. We’ll also look at some popular plugins that can help add advanced features by automatically installing the required libraries and making various settings—all of which would otherwise have to be done manually.

What a Vue Plugin Is

CLI Plugins are just npm packages that provide additional features to your Vue project. The vue-cli-service binary automatically resolves and loads all plugins listed in the package.json file.

The base configuration for a Vue CLI 3 project is webpack and Babel. All the other features can be added via plugins.

There are official plugins provided by the Vue team and community plugins developed by the community. Official plugin names start with @vue/cli-plugin-, and community plugin names start with vue-cli-plugin-.

Official Vue CLI 3 plugins include:

Typescript
PWA
Vuex
Vue Router
ESLint
Unit testing
etc.

How to Add a Vue Plugin

Plugins are either automatically installed when creating the project or explicitly installed later by the developer.

You can install many built-in plugins when initializing your project, and add any additional plugins later using the vue add my-plugin command at any point in your project.

You can also install plugins with presets, and group your favorite plugins as reusable presets that you can use later as the base for other projects.

Some Useful Vue Plugins

There are many Vue CLI plugins that you might find useful for your next projects. For example, the Vuetify UI library is available as a plugin, as is Storybook. You can also use the Electron Builder plugin to quickly scaffold out a Vue project based on Electron.

I’ve also written a couple of plugins which you can make use of:

vue-cli-plugin-nuxt: a Vue CLI plugin for quickly creating a universal Vue application with Nuxt.js
vue-cli-plugin-bootstrap: a Vue CLI plugin for adding Bootstrap 4 to your project

If you’d like to find out more about plugins, check out this great article on Vue Mastery: 5 Vue CLI 3 plugins for your Vue project.

The post A Beginner’s Guide to Vue CLI appeared first on SitePoint.

30+ Web Tools and Services to Help You Launch Your Next Big Thing

Jun 25, 2019


Do something great neon sign

This article was created in partnership with Mekanism. Thank you for supporting the partners who make SitePoint possible.

2019 is the best year yet to become successful: to launch your own online or offline business, to invent a product or service, or to grow your business into a huge corporation. Sketching, testing, building and launching the business that could become the next Uber, Instagram, or Waze is now easier than ever before.

The difference between now and previous years is that there is now a plethora of web tools and services to help you launch your next big thing - some of them even free! Today anybody can build a website or logo without any specialist knowledge or previous experience. With only a few hours' investment, you can get amazing results. It's a quick and affordable way to get your site or product to market.

In this article we are going to review 36 different web tools and services that are recommended by successful people. Each of them will save you time and money, or help improve your business and workflows, so you can get on with launching and scaling.

1. Creative-TIM - Premium Bootstrap Themes and Templates

Creative Tim

Creative Tim is the perfect place for web designers and web developers to find fully coded UI tools for building web and mobile apps. With over 750,000 users, Creative Tim offers UI Kits, Dashboards and Design Systems.

All the products are built on top of Bootstrap 4, with versions for Vue.js, Angular, React and React Native. Using these tools will save developers and designers hours of work, since the products already contain a large number of components and are packed with all the plugins that you might need on a project. Everything used to create the products can be downloaded for free under the MIT License.

For people with many upcoming projects, Creative Tim offers 6 Bundles at special prices, to encourage developers to save precious time and to trust the quality of their projects.
Last but not least, Creative Tim’s products are used not only by thousands of freelancers and developers but by top companies like NASA, Cisco, IBM, and Amazon.

Check out their website and find the product that matches your needs.
Pricing: Free to $249

2. Brizy - Innovative Site Builder


Brizy is the most user-friendly visual page builder in town! No designer or developer skills required. The only tools you'll need to master are clicks and drags.

Brizy can be used in two ways. One is to download the WordPress plugin and use it as such; the other is the Cloud platform, where you can create landing pages in minutes. From hosting to domain setup, Brizy handles everything. Brizy Cloud is included with any Brizy PRO plan.

Creating a powerful, fully functional website is extremely easy with Brizy, and anybody can do it without any design skills or writing a single line of code. This website builder includes powerful features in both the free and paid plans. The free account brings you premium features that you would have to pay for on other website builders. At Brizy, these features are free.

Build a free website with Brizy today, the process is very fast and intuitive.

3. Tailor Brands

Tailor Brands

Tailor Brands is a revolutionary online logo and branding platform that will help you design your logo in seconds. It has over 10 million users and counting, and it has been used to create over 400 million designs. A new design is made with it every second.

This AI-powered online logo maker platform does not use pre-made logo templates. Every design is uniquely crafted to match your business and brand personality perfectly. You don’t need to have any design skills or special knowledge, it is super simple to use and extremely fast.

Write down the logo name you want, make a few selections from the options provided by Tailor Brands and you will get a number of designs to choose from.

4. 48HoursLogo – Affordable Logos Done Fast


48hourslogo is a fast, easy and very affordable logo crowdsourcing website that has created over 3 million logos. With contest prizes starting at just $99, more than 40,000 small businesses and entrepreneurs have used this amazing logo design service to get gorgeous and creative designs.

After launching your logo design contest at 48hourslogo, your project will go through three stages before arriving at your final design. The qualifying stage: the contest is open to all registered designers, who will submit multiple logo concepts for you to choose from. The design revision stage: at the end of the qualifying stage, you will be prompted to select up to three finalist designers. The finalizing stage: after selecting your contest winner, you will work with your winning designer on finalizing your design (you can still request small changes and tweaks to your winning logo).

Start a logo design contest using 48hourslogo.

5. Codester

Codester is a huge marketplace where web designers and web developers will find tons of premium PHP scripts, app templates, themes, plugins and much more.

Always check the Flash Sales section where hugely discounted items are being sold.

Browse Codester and pick the items you need.

6. NameQL


NameQL helps you find a great name. It considers thousands of potential names in milliseconds and shows you the best ones that are still available for purchase as [name].com. It's a huge time saver whenever you're looking for a new website domain name.

7. SeekVisa


Australia is a great destination to live and work, with developers, software engineers and user experience/user interface designers in high demand. If you're considering immigrating to Australia, you can talk to SeekVisa, who are migration experts.

Australia's Employer Nomination Scheme (ENS) enables Australian employers to sponsor highly skilled workers to live and work permanently in Australia. This is the quickest way for IT developers to immigrate to Australia. Contact Seekvisa to determine your eligibility.

8. MobiLoud


Publishers are seeing up to 90% of their traffic coming from mobile. Mobile apps give readers the experience they want and let publishers increase engagement, traffic, and revenue.

With fast loading times, your app encourages loyalty and repeat visits. With push notifications, it brings people back again and again. Your icon is a constant reminder of your brand and content.

MobiLoud is the best solution for news mobile apps built on WordPress. They will publish and maintain your custom app, with push notifications, advertising and subscriptions, all at a fraction of the time and cost of traditional app development.

The post 30+ Web Tools and Services to Help You Launch Your Next Big Thing appeared first on SitePoint.

Code Challenge #2: 4 Tips for Higher Scores in CSSBattle

Jun 24, 2019


Our CSSBattle Code Challenge requires some 'outside the square' CSS thinking. Here are four pro tips to get you started on the right track.

The post Code Challenge #2: 4 Tips for Higher Scores in CSSBattle appeared first on SitePoint.

How to Set Up a Mobile Development Environment

Jun 21, 2019


The use of mobile devices has increased considerably in the past decade. It has been over two years since mobile browsing took over desktop. The usability of mobile devices has exploded, too. Mobile devices now come with huge processing power.

We often dismiss mobile platforms as serious workhorses for developers, but today, it's possible to take advantage of mobile portability with a level of flexibility that gets closer to the desktop every year.

This post explains the process of running a Linux development environment from your mobile device using Samsung Dex.

A Brief History of Samsung Dex

Samsung Dex is a platform that allows you to use the computing power of your mobile device to run a desktop-like environment. It was introduced in 2017 and has been actively developed since. The number of devices that can run Dex has increased steadily. In this post, we explore how to set up a Linux development environment through Samsung Dex.

The post How to Set Up a Mobile Development Environment appeared first on SitePoint.

SitePoint Premium New Releases: Cybersecurity & DevOps Adoption

Jun 21, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

The DevOps Adoption Playbook

This award-winning book provides actionable, real-world guidance on implementing DevOps in large-scale enterprise IT environments, explaining how to achieve high-value innovation and optimization with low cost and risk, and exceed traditional business goals with higher product release efficiency.

➤ Read The DevOps Adoption Playbook.

Hacking the Hacker

In this book, top ethical hackers discuss advanced persistent threats, public key encryption, firewalls, hacking cars, tools and techniques, social engineering, cryptography, penetration testing, network attacks, advice for parents of young hackers, the Code of Ethical Hacking, and much more.

➤ Read Hacking the Hacker.

And More to Come…

We're releasing new content on SitePoint Premium almost every day, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Cybersecurity & DevOps Adoption appeared first on SitePoint.

7 Worst UX Mistakes Limiting Your Growth

Jun 19, 2019


Growth. Growth! GROWTH!

Growth is often the top focus for businesses that are “onto something.” They’ve found what makes customers tick, their special recipe, and now they’re ready for the world to see.

However, scaling doesn’t only scale success.

If there’s friction in the UX, bugs, technical limitations, or any other types of UX flaws, those flaws are magnified as a product scales. That’s why the most successful businesses are the ones that take their time and try not to grow too rapidly.

It’s why software teams build for one platform at a time, and it’s why MVPs and betas are only available to a subset of users.

Let’s take a look at some of the worst UX mistakes we’ll really, really want to avoid while trying to “scale up” our businesses.

1. Time Wasting

The majority of design decisions will have only a small impact. Sure, collectively, these decisions may amount to improved UX, but only one in a few will have a detrimental effect on growth.

Also, UX design is not a task. UX design is a continuous effort, and attempting to solve everything all at once can result in stress, anxiety, OCD, and eventually severe burnout.

Perfectionism is a serious growth-stopper.

The fact is, some design changes will skyrocket conversions whereas others will be much less effective, but it is really easy to obsess over these tiny details. A fantastic way to approach this is to tackle design in short, focused bursts using well-known design methodologies such as the design sprint. Design sprints help to identify problems, reframe them as opportunities, and then decide which of the problems might yield the best results, if solved.

In short, don’t waste too much time on the small things by focusing on the high-growth opportunities first. This ensures that we’re tackling the bigger problems while not creating too many bugs and flaws, as too many can be a serious hindrance.

2. Focusing on Pixels

Performance, meaning how fast the app or website feels and loads, is a vital aspect of the user experience. While this is a task typically assigned to developers, designers should remember that it’s they who design what’s to be implemented, which is why we’d recommend working design handoff tools into the workflow.

The post 7 Worst UX Mistakes Limiting Your Growth appeared first on SitePoint.

Getting Started with Vuex: a Beginner’s Guide

Jun 18, 2019



In single-page applications, the concept of state relates to any piece of data that can change. An example of state could be the details of a logged-in user, or data fetched from an API.

Handling state in single-page apps can be a tricky process. As an application gets larger and more complex, you start to encounter situations where a given piece of state needs to be used in multiple components, or you find yourself passing state through components that don’t need it, just to get it to where it needs to be. This is also known as “prop drilling”, and can lead to some unwieldy code.

Vuex is the official state management solution for Vue. It works by having a central store for shared state, and providing methods to allow any component in your application to access that state. In essence, Vuex ensures your views remain consistent with your application data, regardless of which function triggers a change to your application data.

In this article, I’ll offer you a high-level overview of Vuex and demonstrate how to implement it into a simple app.

A Shopping Cart Example

Let’s consider a real-world example to demonstrate the problem that Vuex solves.

When you go to a shopping site, you’ll usually have a list of products. Each product has an Add to Cart button and sometimes an Items Remaining label indicating the current stock or the maximum number of items you can order for the specified product. Each time a product is purchased, the current stock of that product is reduced. When this happens, the Items Remaining label should update with the correct figure. When the product’s stock level reaches 0, the label should read Out of Stock. In addition, the Add to Cart button should be disabled or hidden to ensure customers can’t order products that are currently not in inventory.

Now ask yourself how you’d implement this logic. It may be trickier than you think. And let me throw in a curve ball. You’ll need another function for updating stock records when new stock comes in. When the depleted product’s stock is updated, both the Items Remaining label and the Add to Cart button should be updated instantly to reflect the new state of the stock.

Depending on your programming prowess, your solution may start to look a bit like spaghetti. Now, let’s imagine your boss tells you to develop an API that allows third-party sites to sell the products directly from the warehouse. The API needs to ensure that the main shopping website remains in sync with the products’ stock levels. At this point you feel like pulling your hair out and demanding to know why you weren’t told to implement this earlier. You feel like all your hard work has gone to waste, as you’ll need to completely rework your code to cope with this new requirement.
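A naive sketch of just the label and button logic described above shows that both derive from one stock number (the function names here are mine, purely for illustration, not from any real shopping API):

```javascript
// Hypothetical helpers deriving the UI state from a single stock number.
function stockLabel(stock) {
  return stock > 0 ? `${stock} item(s) remaining` : 'Out of Stock';
}

function canAddToCart(stock) {
  return stock > 0;
}

// Purchases, restocks, and a third-party API would all funnel through the
// same number, so every piece of derived UI stays consistent:
let stock = 1;
console.log(stockLabel(stock));   // "1 item(s) remaining"
stock -= 1;                       // a purchase
console.log(stockLabel(stock));   // "Out of Stock"
console.log(canAddToCart(stock)); // false
stock += 5;                       // new stock arrives
console.log(canAddToCart(stock)); // true
```

The hard part isn’t these functions; it’s keeping every copy of stock in sync across components and consumers, which is exactly the problem a central store solves.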

This is where a state management pattern library can save you from such headaches. It will help you organize the code that handles your front-end data in a way that makes adding new requirements a breeze.


Before we start, I’ll assume that you:

have a basic knowledge of Vue.js
are familiar with ES6 and ES7 language features

You’ll also need to have a recent version of Node.js that’s not older than version 6.0. At the time of writing, Node.js v10.13.0 (LTS) and npm version 6.4.1 are the most recent. If you don’t have a suitable version of Node installed on your system already, I recommend using a version manager.

Finally, you should have the most recent version of the Vue CLI installed:

npm install -g @vue/cli

Build a Counter Using Local State

In this section, we’re going to build a simple counter that keeps track of its state locally. Once we’re done, I’ll go over the fundamental concepts of Vuex, before looking at how to rewrite the counter app to use Vue’s official state management solution.

Getting Set Up

Let’s generate a new project using the CLI:

vue create vuex-counter

A wizard will open up to guide you through the project creation. Select Manually select features and ensure that you choose to install Vuex.

Next, change into the new directory and in the src/components folder, rename HelloWorld.vue to Counter.vue:

cd vuex-counter
mv src/components/HelloWorld.vue src/components/Counter.vue

Finally, open up src/App.vue and replace the existing code with the following:

<template>
  <div id="app">
    <h1>Vuex Counter</h1>
    <Counter/>
  </div>
</template>

<script>
import Counter from './components/Counter.vue'

export default {
  name: 'app',
  components: {
    Counter
  }
}
</script>

You can leave the styles as they are.

Creating the Counter

Let’s start off by initializing a count and outputting it to the page. We’ll also inform the user whether the count is currently even or odd. Open up src/components/Counter.vue and replace the code with the following:

<template>
  <div>
    <p>Clicked {{ count }} times! Count is {{ parity }}.</p>
  </div>
</template>

<script>
export default {
  name: 'Counter',
  data: function() {
    return {
      count: 0
    };
  },
  computed: {
    parity: function() {
      return this.count % 2 === 0 ? 'even' : 'odd';
    }
  }
}
</script>

As you can see, we have one state variable called count and a computed property called parity, which returns the string even or odd depending on whether count is an odd or even number.

To see what we’ve got so far, start the app from within the root folder by running npm run serve and navigate to http://localhost:8080.

Feel free to change the value of the counter to show that the correct output for both count and parity is displayed. When you’re satisfied, make sure to reset it back to 0 before we proceed to the next step.

Incrementing and Decrementing

Right after the computed property in the <script> section of Counter.vue, add this code:

methods: {
  increment: function () {
    this.count++;
  },
  decrement: function () {
    this.count--;
  },
  incrementIfOdd: function () {
    if (this.parity === 'odd') {
      this.increment();
    }
  },
  incrementAsync: function () {
    setTimeout(() => {
      this.increment()
    }, 1000)
  }
}

The first two functions, increment and decrement, are hopefully self-explanatory. The incrementIfOdd function only increments the count if its current value is an odd number, whereas incrementAsync is an asynchronous function that performs an increment after one second.
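Stripped of Vue, the component logic above is just an object whose methods mutate count; a minimal plain-JavaScript sketch:

```javascript
// Plain-JavaScript model of the Counter component's local state and methods.
const counter = {
  count: 0,
  get parity() {
    return this.count % 2 === 0 ? 'even' : 'odd';
  },
  increment() { this.count++; },
  decrement() { this.count--; },
  incrementIfOdd() {
    if (this.parity === 'odd') this.increment();
  },
  incrementAsync() {
    setTimeout(() => this.increment(), 1000);
  }
};

counter.increment();      // count: 1, parity: 'odd'
counter.incrementIfOdd(); // count: 2 (runs, because parity was 'odd')
counter.incrementIfOdd(); // still 2 (no-op, parity is now 'even')
```

The Vue version works the same way; Vue simply makes count reactive so the template re-renders when it changes.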

In order to access these new methods from the template, we’ll need to define some buttons. Insert the following after the template code which outputs the count and parity:

<button @click="increment" variant="success">Increment</button>
<button @click="decrement" variant="danger">Decrement</button>
<button @click="incrementIfOdd" variant="info">Increment if Odd</button>
<button @click="incrementAsync" variant="warning">Increment Async</button>

After you’ve saved, the browser should refresh automatically. Click all of the buttons to ensure everything is working as expected. This is what you should have ended up with:

See the Pen Vue Counter Using Local State by SitePoint (@SitePoint) on CodePen.

The counter example is now complete. Let’s move on and examine the fundamentals of Vuex, before looking at how we would rewrite the counter to implement them.

How Vuex Works

Before we go over the practical implementation, it’s best that we acquire a basic grasp of how Vuex code is organized. If you’re familiar with similar frameworks such as Redux, you shouldn’t find anything too surprising here. If you haven’t dealt with any Flux-based state management frameworks before, please pay close attention.

The Vuex Store

The store provides a centralized repository for shared state in Vue apps. This is what it looks like in its most basic form:

// src/store/index.js
import Vue from 'vue'
import Vuex from 'vuex'

Vue.use(Vuex)

export default new Vuex.Store({
  state: {
    // put variables and collections here
  },
  mutations: {
    // put synchronous functions for changing state e.g. add, edit, delete
  },
  actions: {
    // put asynchronous functions that can call one or more mutation functions
  }
})

After defining your store, you need to inject it into your Vue.js application like this:

// src/main.js
import Vue from 'vue'
import App from './App.vue'
import store from './store'

new Vue({
  store,
  render: h => h(App)
}).$mount('#app')

This will make the injected store instance available to every component in our application as this.$store.
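To see why this architecture helps, here’s a toy model of the store contract. This is a deliberate simplification for illustration only, not the real Vuex implementation (which also adds reactivity, getters, modules, and more):

```javascript
// A deliberately simplified store: state is only changed via commit(),
// and dispatch() runs logic (possibly async) that itself commits mutations.
function createStore({ state, mutations, actions }) {
  const store = {
    state,
    commit(type, payload) {
      mutations[type](state, payload); // synchronous state change
    },
    dispatch(type, payload) {
      return actions[type]({ commit: store.commit }, payload);
    }
  };
  return store;
}

const store = createStore({
  state: { count: 0 },
  mutations: {
    increment(state) { state.count++; }
  },
  actions: {
    incrementAsync({ commit }) {
      return Promise.resolve().then(() => commit('increment'));
    }
  }
});

store.commit('increment'); // store.state.count is now 1
```

Components never assign to state directly; they commit mutations (synchronous) or dispatch actions (possibly asynchronous), which is what keeps every change trackable.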

Working with State

Also referred to as the single state tree, this is simply an object that contains all front-end application data. Vuex, just like Redux, operates using a single store. Application data is organized in a tree-like structure. Its construction is quite simple. Here’s an example:

state: {
  products: [],
  count: 5,
  loggedInUser: {
    name: 'John',
    role: 'Admin'
  }
}

Here we have products that we’ve initialized with an empty array, and count, which is initialized with the value 5. We also have loggedInUser, which is a JavaScript object literal containing multiple fields. State properties can contain any valid datatype from Booleans, to arrays, to other objects.

There are multiple ways to display state in our views. We can reference the store directly in our templates using $store:

<template>
  <p>{{ $store.state.count }}</p>
</template>

Or we can return some store state from within a computed property:

<template>
  <p>{{ count }}</p>
</template>

<script>
export default {
  computed: {
    count() {
      return this.$store.state.count;
    }
  }
}
</script>

Since Vuex stores are reactive, whenever the value of $store.state.count changes, the view will change as well. All this happens behind the scenes, making your code simpler and cleaner.

The mapState Helper

Now, suppose you have multiple states you want to display in your views. Declaring a long list of computed properties can get verbose, so Vuex provides a mapState helper. This can be used to generate multiple computed properties easily. Here’s an example:

<template>
  <div>
    <p>Welcome, {{ loggedInUser.name }}.</p>
    <p>Count is {{ count }}.</p>
  </div>
</template>

<script>
import { mapState } from 'vuex';

export default {
  computed: mapState({
    count: state => state.count,
    loggedInUser: state => state.loggedInUser
  })
}
</script>

Here’s an even simpler alternative where we can pass an array of strings to the mapState helper function:

export default {
  computed: mapState([
    'count',
    'loggedInUser'
  ])
}

This version of the code and the one above it do exactly the same thing. You should note that mapState returns an object. If you want to use it with other computed properties, you can use the spread operator. Here’s how:

computed: {
  ...mapState([
    'count',
    'loggedInUser'
  ]),
  parity: function() {
    return this.count % 2 === 0 ? 'even' : 'odd'
  }
}
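Conceptually, mapState turns each string into a function that reads from this.$store.state, which is why the result can be spread into computed. A simplified sketch of what it generates (not Vuex’s actual implementation):

```javascript
// Simplified model of mapState(['count', 'loggedInUser']): an object
// mapping each key to a function that reads this.$store.state[key].
function mapStateSketch(keys) {
  const computed = {};
  for (const key of keys) {
    computed[key] = function () {
      return this.$store.state[key];
    };
  }
  return computed;
}

// A fake component context standing in for a Vue instance:
const vm = { $store: { state: { count: 5, loggedInUser: { name: 'John' } } } };
const computed = mapStateSketch(['count', 'loggedInUser']);

console.log(computed.count.call(vm)); // 5
```

Because each generated entry is a plain function keyed by name, spreading the returned object alongside your own computed properties just works.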

The post Getting Started with Vuex: a Beginner’s Guide appeared first on SitePoint.

SitePoint Premium New Releases: SSGs, Interaction Design, Node & Vue

Jun 14, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

An Introduction to Hexo

In this guide we'll present Hexo, an open-source static site generator suitable for building blogs and documentation websites. We'll cover installation, working with layouts, generating posts and providing content, customizing and installing third-party themes, and deploying to Heroku.

➤ Read An Introduction to Hexo.

About Face

This essential interaction design guide examines mobile apps, touch interfaces and screen size considerations, examining goal-directed design methodology, product design methods, design for mobile platforms and consumer electronics, contemporary interfaces, interface recommendations, and much more.

➤ Read About Face.

Drupal 8 Development Cookbook Second Edition

Discover the enhanced content authoring experience that comes with Drupal 8 and how to customize it. Take advantage of multilingual tools for providing an internationalized website. Learn how to deploy from development, staging, and production with Drupal's config management system.

➤ Read Drupal 8 Development Cookbook Second Edition.

A Beginner’s Guide to Creating a Static Website with Hugo

This tutorial describes how to use Hugo, a static site generator (SSG) written in Go. Hugo boasts rich features, is very quick thanks to Go, and has lots of third-party themes, an active community, and detailed documentation.

➤ Read A Beginner’s Guide to Creating a Static Website with Hugo.

RESTful Web API Design with Node.js 10 - Third Edition

Design and implement scalable and maintainable RESTful solutions with Node.js 10 from scratch. Explore the new features of Node.js 10, Express 4.0, and MongoDB. Integrate MongoDB in your Node.js application to store and secure your data.

➤ Read RESTful Web API Design with Node.js 10 - Third Edition.

Learn Vue.js: The Collection

For those of you looking for a comprehensive guide on Vue.js, we've made our collection available as a Kindle book on Amazon. It's a great companion to Jump Start Vue.js!

Since its release in 2014, Vue.js has seen a meteoric rise to popularity and is now considered one of the primary front-end frameworks, and not without good reason. Its component-based architecture was designed to be flexible and easy to adopt, making it just as easy to integrate into projects and use alongside non-Vue code as it is to build complex client-side applications.

➤ Buy Learn Vue.js: The Collection.

And More to Come…

We're releasing new content on SitePoint Premium almost every day, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: SSGs, Interaction Design, Node & Vue appeared first on SitePoint.

A Deep Dive into Redux

Jun 13, 2019



Building stateful modern applications is complex. As state mutates, the app becomes unpredictable and hard to maintain. That's where Redux comes in. Redux is a lightweight library that tackles state. Think of it as a state machine.

In this article, I’ll delve into Redux’s state container by building a payroll processing engine. The app will store pay stubs, along with all the extras — such as bonuses and stock options. I’ll keep the solution in plain JavaScript with TypeScript for type checking. Since Redux is super testable, I’ll also use Jest to verify the app.

For the purposes of this tutorial, I’ll assume a moderate level of familiarity with JavaScript, Node, and npm.

To begin, you can initialize this app with npm:

npm init

When asked about the test command, go ahead and put jest. This means npm t will fire up Jest and run all unit tests. The main file will be index.js to keep it nice and simple. Feel free to answer the rest of the npm init questions to your heart’s content.

I’ll use TypeScript for type checking and nailing down the data model. This aids in conceptualizing what we’re trying to build.

To get going with TypeScript:

npm i typescript --save-dev

I’ll keep dependencies that are part of the dev workflow in devDependencies. This makes it clear which dependencies are for developers and which go to prod. With TypeScript ready, add a start script in the package.json:

"start": "tsc && node .bin/index.js"

Create an index.ts file under the src folder. This separates source files from the rest of the project. If you do an npm start, the solution will fail to execute. This is because you’ll need to configure TypeScript.

Create a tsconfig.json file with the following configuration:

{ "compilerOptions": { "strict": true, "lib": ["esnext", "dom"], "outDir": ".bin", "sourceMap": true }, "files": [ "src/index" ] }

I could have put this configuration in a tsc command-line argument. For example, tsc src/index.ts --strict .... But it’s much cleaner to go ahead and put all this in a separate file. Note the start script in package.json only needs a single tsc command.

Here are sensible compiler options that will give us a good starting point, and what each option means:

strict: enable all strict type checking options, i.e., --noImplicitAny, --strictNullChecks, etc.
lib: list of library files included in the compilation
outDir: redirect output to this directory
sourceMap: generate source map file useful for debugging
files: input files fed to the compiler

Because I’ll be using Jest for unit testing, I'll go ahead and add it:

npm i jest ts-jest @types/jest @types/node --save-dev

The ts-jest dependency adds type checking to the testing framework. One gotcha is to add a jest configuration in package.json:

"jest": { "preset": "ts-jest" }

This makes it so the testing framework picks up TypeScript files and knows how to transpile them. One nice feature with this is you get type checking while running unit tests. To make sure this project is ready, create a __tests__ folder with an index.test.ts file in it. Then, do a sanity check. For example:

it('is true', () => {
  expect(true).toBe(true);
});

Doing npm start and npm t now runs without any errors. This tells us we’re now ready to start building the solution. But before we do, let’s add Redux to the project:

npm i redux --save

This dependency goes to prod. So, no need to include it with --save-dev. If you inspect your package.json, it goes in dependencies.

Payroll Engine in Action

The payroll engine will have the following: pay, reimbursement, bonus, and stock options. In Redux, you can’t directly update state. Instead, actions are dispatched to notify the store of any new changes.

So, this leaves us with the following action types:

export const BASE_PAY = 'BASE_PAY';
export const REIMBURSEMENT = 'REIMBURSEMENT';
export const BONUS = 'BONUS';
export const STOCK_OPTIONS = 'STOCK_OPTIONS';
export const PAY_DAY = 'PAY_DAY';

The PAY_DAY action type is useful for doling out a check on pay day and keeping track of pay history. These action types guide the rest of the design as we flesh out the payroll engine. They capture events in the state lifecycle — for example, setting a base pay amount. These action events can attach to anything, whether that be a click event or a data update. Redux action types are abstract to the point where it doesn’t matter where the dispatch comes from. The state container can run both on the client and/or server.


Using type theory, I’ll nail down the data model in terms of state data. Each payroll action has an action type and an optional amount. The amount is optional because PAY_DAY doesn’t need money to process a paycheck. (It could charge customers, but we’ll leave that out for now and maybe introduce it in version two.)

So, for example, put this in src/index.ts:

interface PayrollAction { type: string; amount?: number; }

For pay stub state, we need a property for base pay, bonus, and whatnot. We’ll use this state to maintain a pay history as well.

This TypeScript interface ought to do it:

interface PayStubState {
  basePay: number;
  reimbursement: number;
  bonus: number;
  stockOptions: number;
  totalPay: number;
  payHistory: Array<PayHistoryState>;
}

The PayStubState is a complex type, meaning it depends on another type contract. So, define the payHistory array:

interface PayHistoryState { totalPay: number; totalCompensation: number; }

With each property, note TypeScript specifies the type using a colon. For example, : number. This settles the type contract and adds predictability to the type checker. Having a type system with explicit type declarations enhances Redux. This is because the Redux state container is built for predictable behavior.

This idea isn’t crazy or radical. Here’s a good explanation of it in Learning Redux, Chapter 1 (SitePoint Premium members only).

As the app mutates, type checking adds an extra layer of predictability. Type theory also aids as the app scales because it’s easier to refactor large sections of code.

Conceptualizing the engine with types now helps to create the following action functions:

export const processBasePay = (amount: number): PayrollAction =>
  ({type: BASE_PAY, amount});
export const processReimbursement = (amount: number): PayrollAction =>
  ({type: REIMBURSEMENT, amount});
export const processBonus = (amount: number): PayrollAction =>
  ({type: BONUS, amount});
export const processStockOptions = (amount: number): PayrollAction =>
  ({type: STOCK_OPTIONS, amount});
export const processPayDay = (): PayrollAction =>
  ({type: PAY_DAY});

What’s nice is that, if you attempt to do processBasePay('abc'), the type checker barks at you. Breaking a type contract adds unpredictability to the state container. I’m using a single action contract like PayrollAction to make the payroll processor more predictable. Note amount is set in the action object via an ES6 property shorthand. The more traditional approach is amount: amount, which is long-winded. An arrow function, like () => ({}), is one succinct way to write functions that return an object literal.
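As a quick sanity check, calling an action creator yields a plain object. This sketch inlines the constant and interface so it runs on its own:

```typescript
// Inlined from the listings above so the snippet is self-contained.
const BASE_PAY = 'BASE_PAY';

interface PayrollAction {
  type: string;
  amount?: number;
}

// ES6 property shorthand: {type, amount} instead of {type: type, amount: amount}.
const processBasePay = (amount: number): PayrollAction => ({type: BASE_PAY, amount});

const action = processBasePay(1000);
console.log(action); // { type: 'BASE_PAY', amount: 1000 }
```

Passing processBasePay('abc') instead would fail to compile, which is exactly the predictability the type contract buys us.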

Reducer as a Pure Function

The reducer functions need a state and an action parameter. The state should have an initial state with a default value. So, can you imagine what our initial state might look like? I’m thinking it needs to start at zero with an empty pay history list.

For example:

const initialState: PayStubState = {
  basePay: 0,
  reimbursement: 0,
  bonus: 0,
  stockOptions: 0,
  totalPay: 0,
  payHistory: []
};

The type checker makes sure these are proper values that belong in this object. With the initial state in place, begin creating the reducer function:

export const payrollEngineReducer = (
  state: PayStubState = initialState,
  action: PayrollAction): PayStubState => {

The Redux reducer has a pattern where all action types get handled by a switch statement. But before going through all switch cases, I’ll create a reusable local variable:

let totalPay: number = 0;

Note that it’s okay to mutate local variables as long as you don’t mutate global state. I use let to communicate that this variable will change in the future. Mutating global state, like the state or action parameter, makes the reducer impure. This functional paradigm is critical because reducer functions must remain pure. If you’re struggling with this paradigm, check out this explanation from JavaScript Novice to Ninja, Chapter 11 (SitePoint Premium members only).

Start the reducer’s switch statement to handle the first use case:

switch (action.type) {
  case BASE_PAY:
    const {amount: basePay = 0} = action;
    totalPay = computeTotalPay({...state, basePay});
    return {...state, basePay, totalPay};

I’m using ES6 object spread syntax to copy the existing state properties into a new object. For example, ...state. Any properties listed after the spread override the copied ones. The basePay comes from destructuring, which is a lot like pattern matching in other languages. The computeTotalPay function is set as follows:

const computeTotalPay = (payStub: PayStubState) =>
  payStub.basePay + payStub.reimbursement + payStub.bonus - payStub.stockOptions;
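To make the arithmetic concrete, here’s the formula with made-up figures:

```typescript
// Trimmed-down pay stub shape, just enough for the calculation.
interface PayStub {
  basePay: number;
  reimbursement: number;
  bonus: number;
  stockOptions: number;
}

const computeTotalPay = (payStub: PayStub) =>
  payStub.basePay + payStub.reimbursement + payStub.bonus - payStub.stockOptions;

// 2000 + 100 + 500 - 300 = 2300
const total = computeTotalPay({basePay: 2000, reimbursement: 100, bonus: 500, stockOptions: 300});
console.log(total); // 2300
```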

Note you deduct stockOptions because the money will go towards buying company stock. Say you want to process a reimbursement:

  case REIMBURSEMENT:
    const {amount: reimbursement = 0} = action;
    totalPay = computeTotalPay({...state, reimbursement});
    return {...state, reimbursement, totalPay};

Since amount is optional, make sure it has a default value to reduce mishaps. This is where TypeScript shines, because the type checker picks up on this pitfall and barks at you. The type system knows certain facts so it can make sound assumptions. Say you want to process bonuses:

  case BONUS:
    const {amount: bonus = 0} = action;
    totalPay = computeTotalPay({...state, bonus});
    return {...state, bonus, totalPay};

This pattern makes the reducer readable because all it does is maintain state. You grab the action’s amount, compute total pay, and create a new object literal. Processing stock options is not much different:

  case STOCK_OPTIONS:
    const {amount: stockOptions = 0} = action;
    totalPay = computeTotalPay({...state, stockOptions});
    return {...state, stockOptions, totalPay};

For processing a paycheck on pay day, it’ll need to blot out bonus and reimbursement. These two properties don’t remain in state per paycheck. And, add an entry to pay history. Base pay and stock options can stay in state because they don’t change as often per paycheck. With this in mind, this is how PAY_DAY goes:

  case PAY_DAY:
    const {payHistory} = state;
    totalPay = state.totalPay;
    const lastPayHistory = payHistory.slice(-1).pop();
    const lastTotalCompensation =
      (lastPayHistory && lastPayHistory.totalCompensation) || 0;
    const totalCompensation = totalPay + lastTotalCompensation;
    const newTotalPay = computeTotalPay({...state, reimbursement: 0, bonus: 0});
    const newPayHistory = [...payHistory, {totalPay, totalCompensation}];
    return {...state, reimbursement: 0, bonus: 0,
      totalPay: newTotalPay, payHistory: newPayHistory};

In an array like newPayHistory, use the spread operator, for example [...payHistory], which spreads the array’s items out into a new array. Rest does the opposite: it collects remaining items or properties into a single variable. Even though both use the same ... syntax, they aren’t the same. Look closely, because this might come up in an interview question.
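A minimal demonstration of the difference, with sample values:

```typescript
const payHistory = [1, 2, 3];

// Spread: expands the items of payHistory into a new array, then appends one.
const newPayHistory = [...payHistory, 4];

// Rest: collects the remaining items into `rest` during destructuring.
const [first, ...rest] = newPayHistory;

console.log(newPayHistory); // [1, 2, 3, 4]
console.log(first);         // 1
console.log(rest);          // [2, 3, 4]
```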

Using pop() on payHistory doesn’t mutate state. Why? Because slice() returns a brand-new array, so pop() only removes the element from that copy. Arrays in JavaScript are held by reference: assigning an array to a new variable doesn’t copy the underlying object, so one must be careful when mutating these types of objects.
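You can verify that the slice-then-pop combination leaves the original array alone:

```typescript
const payHistory = [{totalPay: 100}, {totalPay: 200}];

// slice(-1) copies the last element into a new one-element array;
// pop() then mutates only that throwaway copy.
const lastPayHistory = payHistory.slice(-1).pop();

console.log(lastPayHistory);    // { totalPay: 200 }
console.log(payHistory.length); // still 2
```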

Because there’s a chance lastPayHistory is undefined, I use poor man’s null coalescing to initialize it to zero. Note the (lastPayHistory && lastPayHistory.totalCompensation) || 0 pattern. Maybe a future version of JavaScript or even TypeScript will have a more elegant way of doing this.
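Here’s the pattern in isolation; the comment notes the newer ?. and ?? syntax that TypeScript 3.7 later added:

```typescript
interface PayHistoryState {
  totalPay: number;
  totalCompensation: number;
}

const payHistory: PayHistoryState[] = [];

// pop() on an empty slice yields undefined.
const lastPayHistory = payHistory.slice(-1).pop();

// Poor man's coalescing: falls back to 0 when lastPayHistory is undefined.
const lastTotalCompensation =
  (lastPayHistory && lastPayHistory.totalCompensation) || 0;

// TypeScript 3.7+ equivalent:
// const lastTotalCompensation = lastPayHistory?.totalCompensation ?? 0;

console.log(lastTotalCompensation); // 0
```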

Every Redux reducer must define a default branch, to make sure state doesn’t become undefined:

default: return state;
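Putting it all together, here’s a condensed, self-contained sketch of the reducer covering two of the five cases, run with made-up figures:

```typescript
// Condensed sketch of the payroll engine: BASE_PAY and PAY_DAY only.
const BASE_PAY = 'BASE_PAY';
const PAY_DAY = 'PAY_DAY';

interface PayrollAction { type: string; amount?: number; }
interface PayHistoryState { totalPay: number; totalCompensation: number; }
interface PayStubState {
  basePay: number;
  reimbursement: number;
  bonus: number;
  stockOptions: number;
  totalPay: number;
  payHistory: Array<PayHistoryState>;
}

const initialState: PayStubState = {
  basePay: 0, reimbursement: 0, bonus: 0, stockOptions: 0, totalPay: 0, payHistory: []
};

const computeTotalPay = (payStub: PayStubState) =>
  payStub.basePay + payStub.reimbursement + payStub.bonus - payStub.stockOptions;

const payrollEngineReducer = (
    state: PayStubState = initialState,
    action: PayrollAction): PayStubState => {
  let totalPay = 0;
  switch (action.type) {
    case BASE_PAY:
      const {amount: basePay = 0} = action;
      totalPay = computeTotalPay({...state, basePay});
      return {...state, basePay, totalPay};
    case PAY_DAY:
      const {payHistory} = state;
      totalPay = state.totalPay;
      const lastPayHistory = payHistory.slice(-1).pop();
      const lastTotalCompensation =
        (lastPayHistory && lastPayHistory.totalCompensation) || 0;
      const totalCompensation = totalPay + lastTotalCompensation;
      // Blot out bonus and reimbursement for the next pay period.
      const newTotalPay = computeTotalPay({...state, reimbursement: 0, bonus: 0});
      const newPayHistory = [...payHistory, {totalPay, totalCompensation}];
      return {...state, reimbursement: 0, bonus: 0,
        totalPay: newTotalPay, payHistory: newPayHistory};
    default:
      return state;
  }
};

// Set base pay, then run a pay day.
const afterBasePay = payrollEngineReducer(initialState, {type: BASE_PAY, amount: 2000});
const afterPayDay = payrollEngineReducer(afterBasePay, {type: PAY_DAY});

console.log(afterBasePay.totalPay);  // 2000
console.log(afterPayDay.payHistory); // [ { totalPay: 2000, totalCompensation: 2000 } ]
```

Each call returns a brand-new state object, leaving the previous one untouched, which is what keeps the reducer pure.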

The post A Deep Dive into Redux appeared first on SitePoint.

Code Challenge #2: The Test of Characters

Jun 13, 2019


I’ve been a big fan of Kushagra Gour since the early days of Webmaker – his CodePen-like code playground running as a Chrome Extension. I use it most days. More recently he teamed up with Kushagra Agarwal to work on a new project ‘‘ – a cool and original blend of CSS coding and golf. Be warned […]

The post Code Challenge #2: The Test of Characters appeared first on SitePoint.

So, Do We Have a Winner for Code Challenge #1?

Jun 13, 2019


It’s been a week since we launched our quick Code Challenge #1, which means it’s time to announce a winner! It was tricky. While the quantity of entries wasn’t high, there’s no questioning the quality of our winning entries. But first, let’s run through a few different approaches to the challenge we supplied. My turn […]

The post So, Do We Have a Winner for Code Challenge #1? appeared first on SitePoint.

10 Top Chrome Extensions for Your Web Development Workflow

Jun 12, 2019


As web developers, we work in a very fast-paced industry, and staying on top of things can sometimes be a challenge. That's why I believe we should take full advantage of whatever tools we have at our disposal to help keep our heads above water. Today I'm going to present ten Chrome extensions that are geared towards optimizing your web development workflow, hopefully making you that little bit more productive.

What are Chrome Extensions?

As can be read on Chrome's developer portal, extensions are small software programs that can customize your browsing experience. This can be anything from a spelling and grammar checker that checks your writing as you type, to a password manager that saves your login details for your favorite sites.

There are literally thousands of extensions available for Chrome, all of which can be downloaded for free from the Chrome Web Store. You can check which extensions you currently have installed by visiting the following link in your browser: chrome://extensions/.

Why Chrome?

This article focuses on the Google Chrome browser due to its huge market share (currently 65% and rising). There are also many Chrome-based browsers which support extensions. These include Brave, Vivaldi and, coming soon, Microsoft Edge. However, we should remember that Chrome isn't the only show in town and that many of the extensions mentioned here have a Firefox and/or Opera equivalent.

Finally, before we dive into the extensions, take a minute to remember that Chrome is proprietary software published by Google. As we all know, there are privacy concerns associated with using Google products, so maybe head over to GitHub and check out the ungoogled-chromium project instead. As the name suggests, this is Google Chromium, sans integration with Google.

1. Web Developer

We'll start off with the Swiss Army knife of extensions. With over 1 million users and a 4.5 star rating on the Chrome Web Store, Web Developer is something of a must-have. It adds a toolbar button to Chrome which, when clicked, displays a plethora of tools that can be used on any web page. These are grouped by category (CSS, forms, images, etc.) and allow you to do such things as disable JavaScript, outline images with missing alt attributes, resize the browser window, validate a page's HTML, view a page's meta tag information and much more.

Web Developer Chrome extension

You can download it here.

2. Your Framework's Developer Tools

If you're developing an app with a JavaScript framework and you're not using that framework's developer tools, then you're probably doing it wrong. Let me explain using Vue as an example.

If you have a Vue app which you need to debug, or you just want to see what's going on under the hood, then what do you do? Inspecting the page's source will show you the HTML that Vue is rendering, but there is much more to a Vue app than that. What about a component's props, data or computed properties? Or your app's state or routing? How do you inspect any of those?

The good news is that the Vue.js dev tools have you covered. Simply install the extension and open it up on a page running a development build of Vue to see exactly what is happening in your app.

Vue.js Dev Tools

Here are links to download the dev tools for the big three frameworks.

Vue
React
Ember

3. Daily 2.0 - Source for Busy Developers

As we work in a fast-paced industry, keeping up with news and goings-on can sometimes be a challenge. Enter Daily 2.0, an extension that gathers the latest web development and tech posts from around the internet and presents them in an attractive masonry-style layout on your new tab page.

The extension is easy to use. When you install it you are asked to pick from a bunch of categories that interest you and Daily 2.0 does the rest. Hovering over the sidebar on the new tab page allows you to filter your feed based on tags and sources.

Daily 2.0 - Source for Busy Developers

You can get it here.

4. Toggl Button: Productivity & Time Tracker

If you're a busy freelancer, if you work remotely, or if you just need to track the time you're spending on a project, then Toggl is for you.

This extension requires you to create an account before you can use it. Once you're logged in it enables quick and easy real time productivity tracking with all the data stored in your Toggl account. It comes with a built-in Pomodoro timer, as well as integrations for a whole host of internet services (such as GitHub, Trello and Slack). One of my favorite features is that it will pop up a notification when you've been idle and the timer was running, allowing you to discard the time.

Toggl Button: Productivity & Time Tracker

Toggl can be downloaded here.

5. Lighthouse

Lighthouse is an open-source, automated tool for improving the performance and quality of your web pages. You can either install it via the Chrome Web Store or, as of Chrome version 60, you can run it directly from the Audits tab of the browser's DevTools (press F12 and select Audits).

Once you have opened Lighthouse, click Generate report and optionally select which audit categories to include. Lighthouse will run the selected audits against the page, and generate a report on how well the page did. From there, you can use the failing audits as indicators of how to improve the page. Each audit also includes links to further reading and potential fixes.

Lighthouse is produced by Google, and presumably uses the same ranking factors as their search engine. This means it can offer you some of the best advice out there on how to optimize your site.


You can grab it here.

6. OneTab

The post 10 Top Chrome Extensions for Your Web Development Workflow appeared first on SitePoint.

How Analytics Helped Solve a UX Issue

Jun 11, 2019


How Analytics Helped Solve a UX Issue

UX and analytics make a great team. Your website analytics can give you insights enabling you to learn about your users, track their journeys, and find potential problem areas. You can use the quantitative data to inform your qualitative UX approach. Remember, your analytics tell you what’s happening on your website, while UX techniques such as usability testing will help uncover why things are happening.

There are various ways that Google Analytics can be used to uncover how your users are navigating your website. Within the Pages report you can drill down to see how users are navigating to, and from, a selected page in your website. But the User Flow and Behavior Flow reports give more information on multi-step journeys from your most popular landing pages onwards.

user/behavior flow in google analytics

These reports can be hard to analyze, particularly for large websites, because there are unlikely to be a few clear pathways through your website. You’ll find there are huge numbers of paths that different users can take, which makes finding insights in these reports quite challenging. However, they can be useful for getting a good top-level overview and showing the most dominant pathways through a site. While they suffer from grouping multiple pages, you can often get a good idea of the most common journeys taken by users.

One example of how I’ve used these reports in the past to inform my UX work has been looking out for pogo sticking.

Pogo Sticking

Pogo sticking describes where users bounce between two pages on a website instead of progressing their journey through the site. It can be a sign of confusion on the users’ part and is unlikely to help you convert those users.

The Nielsen Norman Group wrote this guide to pogo sticking, which explains it in more detail. It covers some possible reasons behind pogo sticking behavior, and also gives some potential solutions to these problems.

The post How Analytics Helped Solve a UX Issue appeared first on SitePoint.

SitePoint Premium New Releases: Design, Git, Android, Swift + More

Jun 7, 2019


We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Exploring Git Workflows

Most of us use version control systems on a daily basis. But even though we may use identical systems, we use them in different ways. In this tutorial, Claudio describes GitFlow, the current workflow used by his team.

Read Exploring Git Workflows.

Beginning Android Programming with Android Studio

This hands-on introduction to creating Android apps shows how to install and get started with Android Studio 2, display notifications, create rich user interfaces, use activities and intents, master views and menus, manage data, work with SMS, and package and publish apps to the Android market.

Read Beginning Android Programming with Android Studio.

A Beginner’s Guide to Deployment with Continuous Integration

This guide tackles an important technique in the process of automating deployment: continuous integration (CI). CI achieves efficiency by removing unnecessary bottlenecks in the deployment process, thereby making the transition from a commit to production smooth and consistent.

Read A Beginner’s Guide to Deployment with Continuous Integration.

Design for Hackers

This book explores principles of beautiful design, covering color theory, medium and form, classical principles and techniques, culture and context, the importance, purpose and constraints of design, fonts, scale and proportion, and even ancient graffiti, Monet, the iPhone, and much more.

Read Design for Hackers.

Swift 4 Protocol-Oriented Programming - Third Edition

Build fast and powerful applications by harnessing the power of protocol-oriented programming in Swift 4. Learn from real-world cases, creating a flexible codebase with protocols and protocol extensions, leveraging the power of generics to create very flexible frameworks.

Read Swift 4 Protocol-Oriented Programming - Third Edition.

And More to Come…

We're releasing new content on SitePoint Premium almost every day, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Design, Git, Android, Swift + More appeared first on SitePoint.

A Deep Dive into User Research Methods

Jun 6, 2019


A Deep Dive into User Research Methods

User research plays a crucial role in shaping any successful product or service. It keeps the user at the heart of the experience by tailoring it to their needs, and in turn provides real advantage over competitors. But with a growing arsenal of different research methods out there, it can be a challenge to know which is best to use, and when.

This guide offers an overview of the fundamentals for each of the most commonly used methods, providing direction on when to use them — and more importantly, why.

We’ll cover:

the origins of user research
discovery and exploratory research
quant and qual, and the difference between them
core methodologies:
user interviews
ethnography and field studies
surveys and questionnaires
analytics and heatmaps
card sorts and tree tests
usability studies
further reading and resources
key takeaways

The Origins of User Research

Product designers and engineers have incorporated user feedback into their process for centuries. However, it wasn’t until 1993 that the term “user experience” (UX) was coined by Don Norman during his time at Apple.

As the discipline of UX evolved and matured, practitioners began to use investigative research techniques from other fields, such as science and market research. This enabled decisions to be informed by the end user, rather than the design teams’ assumptions, laying the groundwork for UX research as we know it today.

That’s a quick rundown of the origins. Now let’s dive into some research frameworks.

Discovery and Evaluative Research

User-centered design means working with your users all throughout the project — Don Norman

Broadly speaking, user research is used to either discover what people want and need or evaluate if ideas are effective. The methods to achieve these two distinct outcomes can be loosely divided into two groups.

Strategize: Discovery Research

Methods that help to answer unknowns at the beginning of a project can be referred to as Discovery Research. These methods range from reviewing existing reports, data and analytics to conducting interviews, surveys and ethnographic studies. These methods ensure that you have a solid understanding of who your user is, what they need and the problems they face in order to begin developing a solution.

Execute and Assess: Evaluative Research

Once a clearer picture of the end user and their environment has been established, it’s time to explore possible solutions and test their validity. Usability studies are the most common method employed here. Evaluative research provides you with the knowledge you need to stay focused on the user and their specific requirements.


Discovery Research Methods and Evaluative Research Methods include:

field study
diary study
one-to-one interview
focus group
behavioral analytics review
open card sort
email survey
contextual inquiry
remote usability testing
closed card sort
tree test
benchmarking analytics review
heatmaps
popup poll
usability benchmark testing
impression testing

Quant and Qual, and the Difference Between Them

Although every design problem is different, it’s generally agreed that a combination of both qualitative and quantitative research insights will provide a balanced foundation with which to form a more successful design solution. But what do these pronunciation-averse words mean?

Quantitative (statistical) research techniques involve gathering large quantities of user data to understand what is currently happening. This answers important questions such as “where do people drop off during a payment process”, or “which products were most popular with certain user groups” and “what content is most/least engaging”.

Quantitative research methods are often used to strategize the right direction at the start of a project and assess the performance at the end using numbers or metrics. Common goals include:

comparing two or more products or designs
getting benchmarks to compare the future design against
calculating expected cost savings from some design changes

Pie chart of application completion rates — 52.66% completed, 47.34% not completed.

Quantitative data analysis can offer useful insights such as abandonment points on a form. This can lead to further qualitative studies to understand why.

Qualitative (observational) research techniq