SitePoint

Description

Learn CSS | HTML5 | JavaScript | WordPress | Tutorials-Web Development | Reference | Books and More

Link: www.sitepoint.com

Episodes

The Devastating Price Developers Pay for Working Hard

Apr 1, 2020

Description:

The Devastating Price Developers Pay for Working Hard

You're a wonderful developer.

You come early, and you stay late. Your code is clear and well documented; you're eager to help others, and you can handle 3x the work your co-workers can.

You're an amazing developer, and that's your problem.

Your boss and your co-workers all want your best work. It's an unspoken expectation in the workplace. No one prepares you for the horrible consequences that come with doing your job well.

The devastating price you pay for working hard

There are several unpleasant downsides that come with exceptional performance and hard work. One reward in particular acts as a demotivator, destroying job satisfaction.

You're probably already familiar with it.

The reward for working hard and performing above expectations at your job is more work.

This is devastating to developers in the long term, and here are a few reasons why …

1. Price's Law becomes a dysfunctional cycle
Information scientist and physicist Derek de Solla Price discovered that the square root of the number of people in any domain does 50 percent of the work. If there are ten developers on your team, three of them do half the work. Who are these employees? If you're an A-player, you're already doing far more than your co-workers.

This is devastating because it creates a vicious cycle. In many organizations, you're rewarded with more and more work, but your salary, titles or earning power remains unchanged. When this happens, your employer steals from your future, minimizing your earning power and your ability to get a new job at an appropriate salary level with an appropriate title.

2. Mercenaries corrupt patriots
According to Gallup and Steve Rasmussen, former CEO of Nationwide, your co-workers are either Patriots or Mercenaries.

If you're a Patriot, you're engaged. You believe in your managers and co-workers, and they believe in you. You're focused on taking care of your organization because you trust your co-workers to look out for you. If you're a Mercenary, you're focused primarily on yourself. You're a job hopper or social climber. You're focused on getting as much value as you can for yourself; forget the company!

The employees who are willing to let others work for them? They're usually mercenaries, people who are willing to do the bare minimum to collect a paycheck. Left unchecked, these mercenaries kill morale in the company, causing A-players to leave or become B- and C-players.

3. Crab mentality sends A-players to the bottom of the social hierarchy
Mediocre employees don't like high achievers, and high achievers don't like mediocre employees. If you're an A-player who's surrounded by mediocre B- and C-player employees, you'll be punished for excellence.

What does this mean specifically?

Your co-workers will attempt to destroy the self-confidence of any employee (you) who achieves success or outperforms the rest of the group due to envy, spite, resentment, conspiracy or competitiveness. This isn't mere speculation: the tall poppy syndrome, crab bucket mentality and tragedy of the commons are all examples of this kind of behavior in action. If you're a great developer and you're surrounded by mediocrity, you'll be punished for it.

"Yeah, well, I don't care what anyone thinks anyway!"

Here's why you should care. No man is an island. At some point, you're going to need help from others to do your job or complete a task. Want to find another job? You'll need references from your manager and co-workers.

4. Mercenaries sabotage patriots
Their methods are simple. They get A-player patriots to do the work for them. Then they immediately take the credit for the A-player's hard work. Mercenaries use a variety of strategies to accomplish this.

Machiavellianism, or interpersonal manipulation to shape alliances, is used to gain and maintain social status regardless of actual performance, to gain leverage against opponents, or to poison the well by turning managers against A-players they perceive as a threat.

Indirect aggression is characterized by bullying, slander, gossip, shaming or ostracizing others. It's common in office settings and typically involves some reputation destruction. The thing with indirect aggression is that it's incredibly difficult to prove and harder still to counteract unless you have a clear understanding of what it is and how it works.

Leverage. Malicious mercenaries will use anything as leverage: past mistakes, secrets shared in confidence, insecurities — anything that will get others (you) to do what they want when they want. For whatever reason, it's important that they win and you lose.

Successful patriots use their abilities and accomplishments as leverage to counter mercenary bad behavior. But they'll also rely on strong relationships with others as a balm for scheming behavior. Unfortunately this is the exception, not the rule.

See what I mean?

Working hard comes with a devastating price. So what's the alternative? Doing the bare minimum? Keeping your head down and collecting a steady paycheck?

Many employees do that already.

Doing that is worse, because it comes with its own set of miserable problems. It's difficult to find and keep a job. The mediocre aren't paid all that well, and they're the first to go when a company initiates layoffs or mass firings.

The post The Devastating Price Developers Pay for Working Hard appeared first on SitePoint.

30 Web Development Checklists Every Team Should Keep Handy

Mar 31, 2020

Description:

30 Web Development Checklists Every Team Should Keep Handy

Building a website or app and making it available for the world is a complex business. A number of factors must come together to ensure the final product is successful. That means attracting and keeping visitors, meeting business goals, and minimizing problems. You can deliver a better product with the help of web development checklists.

As in everyday life, checklists can be a great organizational tool. They keep web development teams on track. They also ensure important tasks aren't overlooked in the rush to complete a project.

We searched the web for the most useful web development checklists. They cover everything from front-end and performance to SEO and marketing.

Launch (Pre-launch/Post-launch)

#1 The Essential Pre-launch Checklist for Your Website

A practical checklist that includes:

- design elements to look out for before launch
- functionality
- cross-browser testing
- SEO and content editing

#2 Website Launch Checklist

This is a nice checklist tool built by Glasgow developer Fraser Boag.

This tool lets you:

- check items as you complete each task
- grey out an item if it's not applicable
- reset the checklist to get it ready for the next project.

Changes will be saved using cookies, so you can easily use the checklist throughout your project.

The checklist covers content, benchmarks and performance, compatibility, accessibility, analytics, and more. Items in the list include, among other things:

- content editing
- navigation usability
- links testing
- GDPR compliance
- HTML and CSS validity
- styles and scripts minification

#3 The Essential Launch Checklist for Web Apps and Mobile Apps

Ben Cheng presents "… a simple launch checklist for web and mobile apps … for product managers to quickly test performance of their apps."

Not only does the author include important tasks to complete before launching, he also explains the why behind his choices.

The list presents items belonging to the following areas:

- performance
- security
- broken links
- compatibility
- SEO/social
- nice to haves, such as a 404 page, print stylesheets, and more.

Back-end (Database and Server) and Security

#4 Database Testing Checklist

For data persistence, you most likely need a database. The smooth running and integrity of the database are crucial to a fast and secure website or app. In this checklist, you'll find items relating to:

- database integrity
- stored procedures
- field validation
- constraints
- security
- transactions, and more.

#5 Back-end Best Practices

These are stack-agnostic guidelines for best practices that encompass various back-end architectures. The document is comprehensive, including best practices on:

- data storage solutions
- security
- server environments
- application monitoring, and much more.

Towards the end of the document, you’ll find a responsibility checklist to organize your team’s work. You'll also find a release checklist for the launch of your website or app.

Front-end

#6 A Front-end Deployment Checklist

If you code landing pages, Fred Rocha's deployment checklist is what you need. It's succinct and to the point. It includes technical front-end items such as:

- checking performance
- validating the markup
- checking the console for JavaScript errors, and more.

#7 The Front-end Checklist by David Dias

It describes itself as "perfect for modern websites and meticulous developers". This is an online interactive tool that allows you to enter the project's URL and get a complete report on the following areas:

- head
- HTML
- webfonts
- CSS
- JavaScript
- images
- accessibility
- performance
- SEO

The check is thorough and reveals which items in the various areas deserve high, medium or low priority.

React App Deployment and Performance

#8 Live Readiness Checklist of a React App

This is a list of tasks you need to complete before your React app is ready for production.

#9 Death by a Thousand Cuts: A Checklist for Eliminating Common React Performance Issues

This is a six-item checklist with fun and clear explanations of how to go about implementing each of the tasks on the list for a blazing fast React app.

Cross-browser Testing

#10 Cross-browser Testing Checklist Before Going Live

Deeksha Agarwal offers a top-notch checklist to ensure your website or app works and looks as intended in all browsers and platforms on your local dev environment before the launch. Among the items you'll find in this list are:

- element alignment, and other HTML and CSS cross-browser issues
- font rendering
- API connections, and much more.

#11 Cross-browser Testing Checklist

Rajkumar offers this handy checklist where he mentions all the items you need to test on multiple operating systems and browsers.

Accessibility

#12 Checklist of Checkpoints for Web Content Accessibility Guidelines 1.0

This W3C checklist includes all the items you need to consider so that more people can access and use your site. The items are grouped according to a priority number from one to three.

It covers:

- providing text for non-text elements
- organizing documents so they can be read without stylesheets
- color contrast
- appropriate structure and elements for HTML documents
- expanding acronyms and abbreviations the first time they appear
- logical tab navigation, and more.

#13 WebAIM's WCAG 2 Checklist

This checklist presents WebAIM’s (Web Accessibility in Mind) accessibility recommendations for those seeking WCAG conformance.

#14 The A11Y Project Checklist

This A11Y Project checklist organizes items under the following headings:

- content
- global code
- keyboard
- images
- headings
- lists
- controls
- tables
- forms
- media
- appearance
- animation
- color contrast
- mobile/touch.

#15 The Definitive Website Accessibility Checklist

This checklist is presented in a great, user-friendly table where items are grouped on the basis of their accessibility priority level in accordance with WCAG 2.0 guidelines:

- Level A makes your website or app accessible to some users
- Level AA makes it available to almost all users
- Level AAA makes it available to all users.

The post 30 Web Development Checklists Every Team Should Keep Handy appeared first on SitePoint.

MEAN Stack: Build an App with Angular and the Angular CLI

Mar 30, 2020

Description:

MEAN Stack: Build an App with Angular and the Angular CLI

In this tutorial, we’re going to look at managing user authentication in the MEAN stack. We’ll use the most common MEAN architecture of having an Angular single-page app using a REST API built with Node, Express and MongoDB.

When thinking about user authentication, we need to tackle the following things:

- let a user register
- save user data, but never directly store passwords
- let a returning user log in
- keep a logged-in user’s session alive between page visits
- have some pages that can only be seen by logged-in users
- change output to the screen depending on logged-in status (for example, a “login” button or a “my profile” button).

Before we dive into the code, let’s take a few minutes for a high-level look at how authentication is going to work in the MEAN stack.

The MEAN Stack Authentication Flow

So what does authentication look like in the MEAN stack?

Still keeping this at a high level, these are the components of the flow:

- user data is stored in MongoDB, with the passwords hashed
- CRUD functions are built in an Express API — Create (register), Read (login, get profile), Update, Delete
- an Angular application calls the API and deals with the responses
- the Express API generates a JSON Web Token (JWT, pronounced “Jot”) upon registration or login, and passes this to the Angular application
- the Angular application stores the JWT in order to maintain the user’s session
- the Angular application checks the validity of the JWT when displaying protected views
- the Angular application passes the JWT back to Express when calling protected API routes.

JWTs are preferred over cookies for maintaining the session state in the browser. Cookies are better for maintaining state when using a server-side application.
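To make the token steps concrete, here's a minimal sketch of issuing and checking a JWT with the widely used jsonwebtoken package. This isn't the tutorial's actual code: the secret, payload fields and expiry below are illustrative assumptions.

const jwt = require('jsonwebtoken');

// illustrative only: in a real app, load the secret from an environment variable
const SECRET = process.env.JWT_SECRET;

// issue a token after a successful registration or login
function generateJwt(user) {
  return jwt.sign(
    { _id: user._id, email: user.email }, // payload stored in the token
    SECRET,
    { expiresIn: '7d' } // token expires after seven days
  );
}

// check a token passed back by the Angular app; throws if invalid or expired
function verifyJwt(token) {
  return jwt.verify(token, SECRET);
}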

The Example Application

The code for this tutorial is available on GitHub. To run the application, you’ll need to have Node.js installed, along with MongoDB. (For instructions on how to install, please refer to Mongo’s official documentation — Windows, Linux, macOS).

The Angular App

To keep the example in this tutorial simple, we’ll start with an Angular app with four pages:

- home page
- register page
- login page
- profile page

The pages are pretty basic and look like this to start with:

Screenshots of the app

The profile page will only be accessible to authenticated users. All the files for the Angular app are in a folder inside the Angular CLI app called /client.

We’ll use the Angular CLI for building and running the local server. If you’re unfamiliar with the Angular CLI, refer to the Building a Todo App with Angular CLI tutorial to get started.

The REST API

We’ll also start off with the skeleton of a REST API built with Node, Express and MongoDB, using Mongoose to manage the schemas. This API should initially have three routes:

- /api/register (POST), to handle new users registering
- /api/login (POST), to handle returning users logging in
- /api/profile/USERID (GET), to return profile details when given a USERID

Let's set that up now. We can use the express-generator tool to create a lot of the boilerplate for us. If this is new for you, we have a tutorial on using it here.

Install it with npm i -g express-generator. Then, create a new Express app, choosing Pug as the view engine:

express -v pug mean-authentication

When the generator has run, change into the project directory and install the dependencies:

cd mean-authentication
npm i

At the time of writing, this pulls in an outdated version of Pug. Let's fix that:

npm i pug@latest

We can also install Mongoose while we’re at it:

npm i mongoose

Next, we need to create our folder structure.

1. Remove the public folder: rm -rf public.
2. Create an api directory: mkdir api.
3. Create a controllers, a models, and a routes directory in the api directory: mkdir -p api/{controllers,models,routes}.
4. Create an authentication.js file and a profile.js file in the controllers directory: touch api/controllers/{authentication.js,profile.js}.
5. Create a db.js file and a users.js file in the models directory: touch api/models/{db.js,users.js}.
6. Create an index.js file in the routes directory: touch api/routes/index.js.

When you're done, things should look like this:

.
└── api
    ├── controllers
    │   ├── authentication.js
    │   └── profile.js
    ├── models
    │   ├── db.js
    │   └── users.js
    └── routes
        └── index.js

Now let's add the API functionality. Replace the code in app.js with the following:

require('./api/models/db');
const cookieParser = require('cookie-parser');
const createError = require('http-errors');
const express = require('express');
const logger = require('morgan');
const path = require('path');

const routesApi = require('./api/routes/index');

const app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'pug');

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/api', routesApi);

// catch 404 and forward to error handler
app.use((req, res, next) => {
  next(createError(404));
});

// error handler
app.use((err, req, res, next) => {
  // set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

  // render the error page
  res.status(err.status || 500);
  res.render('error');
});

module.exports = app;

Add the following to api/models/db.js:

require('./users');
const mongoose = require('mongoose');

const dbURI = 'mongodb://localhost:27017/meanAuth';

mongoose.set('useCreateIndex', true);
mongoose.connect(dbURI, { useNewUrlParser: true, useUnifiedTopology: true });

mongoose.connection.on('connected', () => {
  console.log(`Mongoose connected to ${dbURI}`);
});
mongoose.connection.on('error', (err) => {
  console.log(`Mongoose connection error: ${err}`);
});
mongoose.connection.on('disconnected', () => {
  console.log('Mongoose disconnected');
});

Add the following to api/routes/index.js:

const ctrlAuth = require('../controllers/authentication');
const ctrlProfile = require('../controllers/profile');
const express = require('express');
const router = express.Router();

// profile
router.get('/profile/:userid', ctrlProfile.profileRead);

// authentication
router.post('/register', ctrlAuth.register);
router.post('/login', ctrlAuth.login);

module.exports = router;

Add the following to api/controllers/profile.js:

module.exports.profileRead = (req, res) => {
  console.log(`Reading profile ID: ${req.params.userid}`);
  res.status(200);
  res.json({
    message: `Profile read: ${req.params.userid}`
  });
};

Add the following to api/controllers/authentication.js:

module.exports.register = (req, res) => {
  console.log(`Registering user: ${req.body.email}`);
  res.status(200);
  res.json({
    message: `User registered: ${req.body.email}`
  });
};

module.exports.login = (req, res) => {
  console.log(`Logging in user: ${req.body.email}`);
  res.status(200);
  res.json({
    message: `User logged in: ${req.body.email}`
  });
};

Ensure that Mongo is running and then, finally, start the server with npm run start. If everything is configured properly, you should see a message in your terminal that Mongoose is connected to mongodb://localhost:27017/meanAuth, and you should now be able to make requests to, and get responses from, the API. You can test this with a tool such as Postman.
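If you'd rather test from the terminal, a quick smoke test with curl looks like this (the email address is just an example):

curl -X POST http://localhost:3030/api/register \
  -H "Content-Type: application/json" \
  -d '{"email": "jane@example.com"}'

# expected response:
# {"message":"User registered: jane@example.com"}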

The post MEAN Stack: Build an App with Angular and the Angular CLI appeared first on SitePoint.

How Aaron Osteraas Made the Content to Code Career Transition

Mar 27, 2020

Description:

From Tinkering to Developing: A Programmer’s Journey

As Aaron Osteraas can tell you, the path between discovering what you want to do for a living and actually doing it is rarely linear.

Now a Software Engineer at Tigerspike, a digital services company headquartered in Sydney, Aaron began his journey toward becoming a developer in high school, yet it wasn’t until his early 30s that he obtained his first full-time development job. The years in between were filled with starts and stops, challenges and successes, and a whole lot of tinkering.

“I was always tinkering with the family computer, which was mostly, ‘oh god I've broken it how do I fix it before I get in trouble,’” Aaron said of his technical beginnings. He had an appetite for building and modifying hardware, which he attributes to the joy that comes from doing something with your hands. He’d collect spare hardware, buy and sell parts, and at times resort to scrounging and trading. “There were computer parts strewn everywhere,” he said.

But by the time he graduated high school, Aaron had checked out academically. He wasn’t confident his grades were good enough for university, so he enrolled in TAFE, Australia’s largest vocational training provider, and spent six months learning XML before realizing that “making mobile websites for 2004’s best mobile phones in XML was pretty damn far from my idea of a good time.”

So he dropped out of TAFE and eventually found himself working in the world of content, where he stayed for seven years. Though he worked his way up to a managerial and editorial role for a handful of companies within the technical realm, Aaron found himself consistently unsatisfied.

I had this itch to solve technical problems, and working in content, I wasn't able to scratch it. That's what a lot of programming is, problem-solving. And it's not that this is unique to programming, it's just the type of problems – and solutions to them – are more enjoyable to me.

Back to School

During his long stretch in content, Aaron maintained enough of an interest in tinkering and programming to eventually enroll in a Software Engineering degree program.

I took one subject to start off, as I felt I needed to validate two things: one, that I could learn to study again, and two, that I would enjoy it.

Aaron found the validation he was after, but it wasn’t until a few years later, when he learned his company had been acquired and his job was on the line, that he decided to leave content behind and commit fully to becoming a developer. Knowing he could be let go in as little as a week, Aaron enrolled at RMIT University full-time to pursue a degree in Software Engineering.

Aaron was finally where he belonged, but it wasn’t easy.

There was a lot of frustration. I found certain languages, concepts, and themes difficult to grasp, and others came with remarkable ease. So when you're going from, ‘How easy and fun is this!’ to swearing at the computer asking why something isn't working, it can be emotionally turbulent.

In conjunction with the difficult subject matter was the overwhelming amount of career paths to choose from.

The world of programming is outrageously broad, there are innumerable paths you can take, and there's temptation to try and take them all because everyone loves the new shiny thing.

The more career paths he discovered, the less sure of himself he grew.

The post How Aaron Osteraas Made the Content to Code Career Transition appeared first on SitePoint.

The Ultimate ADA Compliance Guide

Mar 26, 2020

Description:

The Ultimate ADA Compliance Guide

This article was created in partnership with Inbound Junction. Thank you for supporting the partners who make SitePoint possible.

Based on web accessibility data, ADA compliance-related lawsuits reached over 2,000 cases in 2019, and ADA compliance is mandatory for digital agencies. The challenge, however, is knowing how to build and design ADA-compliant websites. That’s the problem we’ll help you address in this post.

In this ultimate ADA compliance guide, we’ve laid out the things your web design agency needs to know to make accessible websites.

The number of lawsuits will only keep growing if website owners, developers, and agencies continue to be non-compliant with the ADA.

If you don’t want to be a part of that statistic, you’ll need to ensure that your agency and client websites offer accessibility to persons with disabilities and adhere to ADA standards.

What is the ADA, anyway?

The Americans with Disabilities Act is a civil rights law that was enacted in 1990 to prohibit discrimination against persons with disabilities in every area of public life.

This includes non-discrimination in schools, jobs, transportation, and all private and public places for general access — and those considered “public accommodation.”

For businesses, the required accommodations under the ADA include interface devices for the visually impaired, qualified interpreters or interpretive tools for the hearing impaired, ramp access for mobility devices like wheelchairs, and more.

In the digital space, specifically websites, ADA compliance means providing web accessibility features and functionalities that allow persons with disabilities to use sites effectively.

Originally, the ADA defined persons with disabilities as those with conditions that substantially limit major life activities — which narrowed what was considered a disability.

In the 2008 amendment, however, the ADA’s scope became broader, changing the meaning of “major life activity” to include daily life functions like performing manual operations.

With this new definition, businesses will need to provide accessibility to a wide range of disabilities to adhere to ADA standards.

Why should web design agencies know about it?

If your web design agency creates websites that aren’t compliant with the ADA standards, you put your clients at risk of getting slapped with ADA-related lawsuits — and that’s not where you want to be.

By providing non-compliant sites to your clients, not only will you get some flak from them due to the legal problems they’re bound to get in, but you’ll also wreck your agency’s credibility.

It’s also worth pointing out that the ADA doesn’t specify the technical requirements for achieving compliance. Instead, it gives you the flexibility on how you can make your websites accessible.

Unfortunately, this doesn’t give you a lot to work with to comply with legislation fully.

The Department of Justice (DOJ) and the US courts previously used the Web Content Accessibility Guidelines (WCAG) 2.1 Level AA success criteria as a standard for assessing the accessibility of websites.

The WCAG 2.1 provides layers of standards to help you achieve web accessibility, including principles, basic guidelines, success criteria, and sufficient and advisory techniques.

Although the WCAG 2.1 isn’t formally codified into US law, it’s currently the best and safest standard you can follow to comply with web accessibility requirements and the ADA.

The post The Ultimate ADA Compliance Guide appeared first on SitePoint.

Build a Node.js CRUD App Using React and FeathersJS

Mar 26, 2020

Description:

Build a Node.js CRUD App Using React and FeathersJS

Building a modern project requires splitting the logic into front-end and back-end code. The reason behind this move is to promote code reusability. For example, we may need to build a native mobile application that accesses the back-end API. Or we may be developing a module that will be part of a large modular platform.

An operator sat at an old-fashioned telephone switchboard

The popular way of building a server-side API is to use Node.js with a library like Express or Restify. These libraries make creating RESTful routes easy. The problem with these libraries is that we'll find ourselves writing a ton of repetitive code. We'll also need to write code for authorization and other middleware logic.

To escape this dilemma, we can use a framework like Feathers to help us generate an API in just a few commands.

What makes Feathers amazing is its simplicity. The entire framework is modular and we only need to install the features we need. Feathers itself is a thin wrapper built on top of Express, where they've added new features — services and hooks. Feathers also allows us to effortlessly send and receive data over WebSockets.

Prerequisites

To follow along with this tutorial, you'll need the following things installed on your machine:

- Node.js v12+ and an up-to-date version of npm. Check this tutorial if you need help getting set up.
- MongoDB v4.2+. Check this tutorial if you need help getting set up.
- Yarn package manager — installed using npm i -g yarn.

It will also help if you’re familiar with the following topics:

- how to write modern JavaScript
- flow control in modern JavaScript (e.g. async ... await)
- the basics of React
- the basics of REST APIs

Also, please note that you can find the completed project code on GitHub.

Scaffold the App

We're going to build a CRUD contact manager application using Node.js, React, Feathers and MongoDB.

In this tutorial, I'll show you how to build the application from the bottom up. We'll kick-start our project using the popular create-react-app tool.

You can install it like so:

npm install -g create-react-app

Then create a new project:

# scaffold a new react project
create-react-app react-contact-manager
cd react-contact-manager

# delete unnecessary files
rm src/logo.svg src/App.css src/serviceWorker.js

Use your favorite code editor and remove all the content in src/index.css. Then open src/App.js and rewrite the code like this:

import React from 'react';

const App = () => {
  return (
    <div>
      <h1>Contact Manager</h1>
    </div>
  );
};

export default App;

Run yarn start from the react-contact-manager directory to start the project. Your browser should automatically open http://localhost:3000 and you should see the heading “Contact Manager”. Quickly check the console tab to ensure that the project is running cleanly with no warnings or errors, and if everything is running smoothly, use Ctrl + C to stop the server.

Build the API Server with Feathers

Let's proceed with generating the back-end API for our CRUD project using the feathers-cli tool:

# Install Feathers command-line tool
npm install @feathersjs/cli -g

# Create directory for the back-end code
# Run this command in the `react-contact-manager` directory
mkdir backend
cd backend

# Generate a feathers back-end API server
feathers generate app

? Do you want to use JavaScript or TypeScript? JavaScript
? Project name backend
? Description contacts API server
? What folder should the source files live in? src
? Which package manager are you using (has to be installed globally)? Yarn
? What type of API are you making? REST, Realtime via Socket.io
? Which testing framework do you prefer? Mocha + assert
? This app uses authentication No

# Ensure Mongodb is running
sudo service mongod start
sudo service mongod status

● mongod.service - MongoDB Database Server
   Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-01-22 11:22:51 EAT; 6s ago
     Docs: https://docs.mongodb.org/manual
 Main PID: 13571 (mongod)
   CGroup: /system.slice/mongod.service
           └─13571 /usr/bin/mongod --config /etc/mongod.conf

# Generate RESTful routes for Contact Model
feathers generate service

? What kind of service is it? Mongoose
? What is the name of the service? contacts
? Which path should the service be registered on? /contacts
? What is the database connection string? mongodb://localhost:27017/contactsdb

# Install email and unique field validation
yarn add mongoose-type-email

Let's open backend/config/default.json. This is where we can configure our MongoDB connection parameters and other settings. Change the default paginate value to 50, since front-end pagination won't be covered in this tutorial:

{ "host": "localhost", "port": 3030, "public": "../public/", "paginate": { "default": 50, "max": 50 }, "mongodb": "mongodb://localhost:27017/contactsdb" }

Open backend/src/models/contact.model.js and update the code as follows:

require('mongoose-type-email');

module.exports = function (app) {
  const modelName = 'contacts';
  const mongooseClient = app.get('mongooseClient');
  const { Schema } = mongooseClient;
  const schema = new Schema({
    name: {
      first: {
        type: String,
        required: [true, 'First Name is required']
      },
      last: {
        type: String,
        required: false
      }
    },
    email: {
      type: mongooseClient.SchemaTypes.Email,
      required: [true, 'Email is required']
    },
    phone: {
      type: String,
      required: [true, 'Phone is required'],
      validate: {
        validator: function(v) {
          return /^\+(?:[0-9] ?){6,14}[0-9]$/.test(v);
        },
        message: '{VALUE} is not a valid international phone number!'
      }
    }
  }, {
    timestamps: true
  });

  ...

  return mongooseClient.model(modelName, schema);
};

Mongoose introduces a new feature called timestamps, which inserts two new fields for you — createdAt and updatedAt. These two fields will be populated automatically whenever we create or update a record. We've also installed the mongoose-type-email plugin to perform email validation on the server.

Now, open backend/src/mongoose.js and change this line:

{ useCreateIndex: true, useNewUrlParser: true }

to:

{ useCreateIndex: true, useNewUrlParser: true, useUnifiedTopology: true }

This will squash an annoying deprecation warning.

Open a new terminal and execute yarn test inside the backend directory. You should have all the tests running successfully. Then, go ahead and execute yarn start to start the back-end server. Once the server has initialized, it should print 'Feathers application started on localhost:3030' to the console.

Launch your browser and access the URL http://localhost:3030/contacts. You should expect to receive the following JSON response:

{"total":0,"limit":50,"skip":0,"data":[]} Test the API with Postwoman

Now let's use Postwoman to confirm all of our endpoints are working properly.

First, let's create a contact. This link will open Postwoman with everything set up to send a POST request to the /contacts endpoint. Make sure Raw input enabled is set to on, then press the green Send button to create a new contact. The response should be something like this:

{ "_id": "5e36f3eb8828f64ac1b2166c", "name": { "first": "Tony", "last": "Stark" }, "phone": "+18138683770", "email": "tony@starkenterprises.com", "createdAt": "2020-02-02T16:08:11.742Z", "updatedAt": "2020-02-02T16:08:11.742Z", "__v": 0 }

Now let's retrieve our newly created contact. This link will open Postwoman ready to send a GET request to the /contacts endpoint. When you press the Send button, you should get a response like this:

{ "total": 1, "limit": 50, "skip": 0, "data": [ { "_id": "5e36f3eb8828f64ac1b2166c", "name": { "first": "Tony", "last": "Stark" }, "phone": "+18138683770", "email": "tony@starkenterprises.com", "createdAt": "2020-02-02T16:08:11.742Z", "updatedAt": "2020-02-02T16:08:11.742Z", "__v": 0 } ] }

We can show an individual contact in Postwoman by sending a GET request to http://localhost:3030/contacts/<_id>. The _id field will always be unique, so you'll need to copy it out of the response you received in the previous step. This is the link for the above example. Pressing Send will show the contact.

We can update a contact by sending a PUT request to http://localhost:3030/contacts/<_id> and passing it the updated data as JSON. This is the link for the above example. Pressing Send will update the contact.

Finally we can remove our contact by sending a DELETE request to the same address — that is, http://localhost:3030/contacts/<_id>. This is the link for the above example. Pressing Send will delete the contact.
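If you prefer the command line, the same requests can be reproduced with curl. Replace <_id> with the ID from your own create response; the field values here are just examples:

# create a contact
curl -X POST http://localhost:3030/contacts \
  -H "Content-Type: application/json" \
  -d '{"name": {"first": "Tony", "last": "Stark"}, "phone": "+18138683770", "email": "tony@starkenterprises.com"}'

# list all contacts
curl http://localhost:3030/contacts

# update a contact
curl -X PUT http://localhost:3030/contacts/<_id> \
  -H "Content-Type: application/json" \
  -d '{"name": {"first": "Tony", "last": "Stark"}, "phone": "+18138683770", "email": "tony@example.com"}'

# delete a contact
curl -X DELETE http://localhost:3030/contacts/<_id>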

Postwoman is a very versatile tool and I encourage you to use it to satisfy yourself that your API is working as expected, before moving on to the next step.

The post Build a Node.js CRUD App Using React and FeathersJS appeared first on SitePoint.

How to Debug a Node.js Application: Tips, Tricks and Tools

Mar 25, 2020

Description:

Two figures in protective suits with weapons, battling a giant spider-like bug

Software development is complex and, at some point, your Node.js application will fail. If you’re lucky, your code will crash with an obvious error message. If you’re unlucky, your application will carry on regardless but not generate the results you expect. If you’re really unlucky, everything will work fine until the first user discovers a catastrophic disk-wiping bug.

What is Debugging?

Debugging is the black art of fixing software defects. Fixing a bug is often easy — a corrected character or additional line of code solves the problem. Finding that bug is another matter, and developers can spend many unhappy hours trying to locate the source of an issue. Fortunately, Node.js has some great tools to help trace errors.

Terminology

Debugging has its own selection of obscure jargon, including the following:

breakpoint: the point at which a debugger stops a program so its state can be inspected
debugger: a tool which offers debugging facilities such as running code line by line to inspect internal variable states
feature: as in the claim “it’s not a bug, it’s a feature”. All developers say it at some point during their career
frequency: how often or under what conditions a bug will occur
it doesn’t work: the most-often made but least useful bug report
log point: an instruction to a debugger to show the value of a variable at a point during execution
logging: output of runtime information to the console or a file
logic error: the program works but doesn’t act as intended
priority: where a bug is allocated on a list of planned updates
race condition: hard-to-trace bugs dependent on the sequence or timing of uncontrollable events
refactoring: rewriting code to help readability and maintenance
regression: re-emergence of a previously fixed bug, perhaps owing to other updates
related: a bug which is similar or related to another
reproduce: the steps required to cause the error
RTFM error: user incompetence disguised as a bug report, typically followed by a response to “Read The Flipping Manual”
step into: when running code line by line in a debugger, step into the function being called
step out: when running line by line, complete execution of the current function and return to the calling code
step over: when running line by line, complete execution of a command without stepping into a function it calls
severity: the impact of a bug on the system. For example, data loss would normally be considered more problematic than a UI issue unless the frequency of occurrence is very low
stack trace: the historical list of all functions called before the error occurred
syntax error: typographical errors, such as console.lug()
user error: an error caused by a user rather than the application, but may still incur an update depending on that person’s seniority
watch: a variable to examine during debugger execution
watchpoint: similar to a breakpoint, except the program is stopped when a variable is set to a specific value

How to Avoid Bugs

Bugs can often be prevented before you test your application …

Use a Good Code Editor

A good code editor will offer numerous features including line numbering, auto-completion, color-coding, bracket matching, formatting, auto-indentation, variable renaming, snippet reuse, object inspection, function navigation, parameter prompts, refactoring, unreachable code detection, suggestions, type checking, and more.

Node.js devs are spoiled for choice with free editors such as VS Code, Atom, and Brackets, as well as plenty of commercial alternatives.

Use a Code Linter

A linter can report code faults such as syntax errors, poor indentation, undeclared variables, and mismatching brackets before you save and test your code. The popular options for JavaScript and Node.js include ESLint, JSLint, and JSHint.

These are often installed as global Node.js modules so you can run checks from the command line:

eslint myfile.js

However, most linters have code editor plugins, such as ESLint for VS Code and linter-eslint for Atom which check your code as you type:

ESLint for VS Code

Use Source Control

A source control system such as Git can help safeguard your code and manage revisions. It becomes easier to discover where and when a bug was introduced and who should receive the blame! Online repositories such as GitHub and Bitbucket offer free space and management tools.

Adopt an Issue-tracking System

Does a bug exist if no one knows about it? An issue-tracking system is used to report bugs, find duplicates, document reproduction steps, determine severity, calculate priorities, assign developers, record discussions, and track progress of any fixes.

Online source repositories often offer basic issue tracking, but dedicated solutions may be appropriate for larger teams and projects.

Use Test-driven Development

Test-driven Development (TDD) is a development process which encourages developers to write code which tests the operation of a function before it’s written — for example, is X returned when function Y is passed input Z.

Tests can be run as the code is developed to prove a function works and spot any issues as further changes are made. That said, your tests could have bugs too …
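As an illustration, a minimal test with Mocha and Node's built-in assert module (the add function is a made-up example) looks something like this:

const assert = require('assert');

// the function under test
function add(a, b) {
  return a + b;
}

// describe() and it() are globals provided by the mocha test runner
describe('add()', () => {
  it('returns the sum of two numbers', () => {
    assert.strictEqual(add(2, 3), 5);
  });
});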

Step Away

It’s tempting to stay up all night in a futile attempt to locate the source of a nasty bug. Don’t. Step away and do something else. Your brain will subconsciously work on the problem and wake you at 4am with a solution. Even if that doesn’t happen, fresh eyes will spot that obvious missing semicolon.

Node.js Debugging: Environment Variables

Environment variables that are set within the host operating system can be used to control Node.js application settings. The most common is NODE_ENV, which is typically set to development when debugging.

Environment variables can be set on Linux/macOS:

NODE_ENV=development

Windows cmd:

set NODE_ENV=development

Or Windows PowerShell:

$env:NODE_ENV="development"

Internally, an application will enable further debugging features and messages. For example:

// is NODE_ENV set to "development"?
const DEVMODE = (process.env.NODE_ENV === 'development');

if (DEVMODE) {
  console.log(`application started in development mode on port ${PORT}`);
}

NODE_DEBUG enables debugging messages using the Node.js util.debuglog (see below), but also consult the documentation of your primary modules and frameworks to discover further options.
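For example, util.debuglog writes messages only when the NODE_DEBUG variable names the matching section (the 'myapp' label below is arbitrary):

const util = require('util');
const debuglog = util.debuglog('myapp');

// printed only when the process is started with NODE_DEBUG=myapp
debuglog('server starting on port %d', 3000);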

Note that environment variables can also be saved to a .env file. For example:

NODE_ENV=development
NODE_LOG=./log/debug.log
SERVER_PORT=3000
DB_HOST=localhost
DB_NAME=mydatabase

Then loaded using the dotenv module:

require('dotenv').config();

Node.js Debugging: Command Line Options

Various command-line options can be passed to the node runtime when launching an application. One of the most useful is --trace-warnings, which outputs stack traces for process warnings (including deprecations).

Any number of options can be set, including:

- --enable-source-maps: enable source maps (experimental)
- --throw-deprecation: throw errors when deprecated features are used
- --inspect: activate the V8 inspector (see below)

By way of an example, let’s try to log the crypto module’s DEFAULT_ENCODING property, which was deprecated in Node v10:

const crypto = require('crypto');

function bar() {
  console.log(crypto.DEFAULT_ENCODING);
}

function foo(){
  bar();
}

foo();

Now run this with the following:

node index.js

We’ll then see this:

buffer
(node:7405) [DEP0091] DeprecationWarning: crypto.DEFAULT_ENCODING is deprecated.

However, we can also do this:

node --trace-warnings index.js

That produces the following:

buffer
(node:7502) [DEP0091] DeprecationWarning: crypto.DEFAULT_ENCODING is deprecated.
    at bar (/home/Desktop/index.js:4:22)
    at foo (/home/Desktop/index.js:8:3)
    at Object.<anonymous> (/home/Desktop/index.js:11:1)
    at Module._compile (internal/modules/cjs/loader.js:1151:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1171:10)
    at Module.load (internal/modules/cjs/loader.js:1000:32)
    at Function.Module._load (internal/modules/cjs/loader.js:899:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
    at internal/main/run_main_module.js:17:47

This tells us that the deprecation warning comes from the code in line 4 (the console.log statement), which was executed when the bar function ran. The bar function was called by the foo function on line 8 and the foo function was called on line 11 of our script.

Note that the same options can also be passed to nodemon.

Console Debugging

One of the easiest ways to debug an application is to output values to the console during execution:

console.log( myVariable );

Few developers delve beyond this humble debugging command, but they’re missing out on many more possibilities, including these:

.log(msg): output a message to the console
.dir(obj, opt): uses util.inspect to pretty-print objects and properties
.table(obj): outputs arrays of objects in tabular format
.error(msg): output an error message
.count(label): a named counter reporting the number of times the line has been executed
.countReset(label): resets a named counter
.group(label): indents a group of log messages
.groupEnd(label): ends the indented group
.time(label): starts a timer to calculate the duration of an operation
.timeLog(label): reports the elapsed time since the timer started
.timeEnd(label): stops the timer and reports the total duration
.trace(): outputs a stack trace (a list of all calling functions)
.clear(): clear the console
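Here are a few of the less-used methods in action (the data is invented for the example):

const users = [
  { name: 'Alice', role: 'admin' },
  { name: 'Bob', role: 'editor' }
];

console.table(users); // renders the array as a neatly aligned table

console.count('parse'); // parse: 1
console.count('parse'); // parse: 2

console.time('loop');
for (let i = 0; i < 1e6; i++); // an operation worth timing
console.timeEnd('loop'); // loop: 4.2ms (your timing will vary)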

console.log() accepts a list of comma-separated values. For example:

let x = 123;
console.log('x:', x); // x: 123

However, ES6 destructuring can offer similar output with less typing effort:

console.log({x}); // { x: 123 }

Larger objects can be output as a condensed string using this:

console.log( JSON.stringify(obj) );

util.inspect will format objects for easier reading, but console.dir() does the hard work for you.
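For example, passing an options object to console.dir() stops Node truncating deeply nested properties (the object here is invented):

const nested = { a: { b: { c: { d: 42 } } } };

// depth: null prints every level instead of collapsing to [Object]
console.dir(nested, { depth: null, colors: true });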

The post How to Debug a Node.js Application: Tips, Tricks and Tools appeared first on SitePoint.

Make Working from Home Successful: Resources for Remote Teams

Mar 25, 2020

Description:

Make Working from Home Successful: Resources for Remote Teams

If you're a designer or developer, chances are you've made the shift to working from home in the past couple of weeks.

To help manage the change, we thought we'd share some of our best resources on working remotely — starting with some books …

The Art of Working Remotely

Cover: The Art of Working Remotely

If you work from home, a co-working space, or coffee shop, this book is for you. Discover how to set up a quality workspace. Learn the behaviors and practices that contribute to remote worker success. You, too, can thrive in a distributed workplace.

Influencing Virtual Teams

Cover: Influencing Virtual Teams

Learn the psychological secrets of persuasion that influence your remote employees to do what you need them to do. In "Influencing Virtual Teams" you'll get step-by-step tactics that you can implement straightaway with your team to improve your team's engagement and commitment to doing their work.

The Project Book

Cover: The Project Book

Projects are the lifeblood of organizations, but many projects fall short of expectations because of poor project management and/or poor project sponsorship. In The Project Book, Colin D Ellis teaches you the skills and behaviors required to make your projects succeed, every time.

Articles

Remote Work: Tips, Tricks and Best Practices for Success

man typing at computer

Make WFH a productive, happy work experience and avoid endless hours of misery, loneliness, and frustration.

The Real Future of Remote Work is Asynchronous

image of backpacker

Kate Kendall looks at how remote work has drifted from its asynchronous potential – and what we can do to get it back there.

Productive Remote Work (When Your Mental Health Says “No”)

man typing on computer on bean bag

Daniel Schwarz explores the downsides of remote work and offers tips for aligning your mind and body to make remote working work for you.

How to Prepare for a Remote Job Search

woman typing on a laptop at her desk

Joshua Kraus explores how to conduct a remote job search, impress remote employers, nail interviews and land a remote job that best fits your needs.

Want access to our 400+ books and courses for just $3/month?

Join SitePoint Premium today and get access to all these books, PLUS over 400 other books and courses for just $3/month for your first three months! ($9/month thereafter, cancel anytime).

Get access now for just $3!

Need a Remote Job?

Search hundreds of remote jobs on SitePoint Remote, with over 20 new jobs posted each day!

Find a Remote Job Now!

Happy Learning!

The SitePoint Team

The post Make Working from Home Successful: Resources for Remote Teams appeared first on SitePoint.

10 Projects to Get You to Your First Dev Job in 2020

Mar 24, 2020

Description:

10 Projects to Get You to Your First Dev Job in 2020

For those of you looking to break into the world of web development with your first dev job, the number of things you're expected to know can be overwhelming: HTML, CSS, JavaScript, version control, build tools, frameworks, the command line. The list goes on …

But never fear! In this post I'd like to offer you a little guidance by outlining ten skills that will help you land your first dev job in 2020. For each skill, I’ll suggest a hands-on project to get you started and point you to appropriate resources on SitePoint Premium for further reading.

Let's dive in.

1. Get to Know Your Code Editor

As a coder, you're going to be spending a lot of time in your editor of choice. That's why you should make the effort to learn what it can do and how to configure it properly. The subject of which editor to use can quickly become controversial, but if you’re just starting out, I would encourage you to check out VS Code (or VSCodium if you care about privacy).

VS Code ships with a lot of cool features, such as Emmet abbreviations, intellisense, various keyboard shortcuts and Git integration. There are also hundreds (if not thousands) of extensions that you can install to customize your workflow.

Project Idea

Install VS Code on your machine and commit to using it. Spend some time researching popular extensions for your language of choice and install at least three of these. You should also install Prettier and configure it to format your code on save, as well as ESLint, which will display JavaScript linting errors in VS Code's console. For bonus points, you can print out the keyboard shortcut reference sheet for your platform and attempt to memorize two or three shortcuts per week.
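As a rough starting point, and assuming you've installed the Prettier extension mentioned above, a .vscode/settings.json along these lines turns on format-on-save (the identifier is Prettier's published extension ID):

{
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}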

Further Reading

By way of a reference, I would recommend Visual Studio Code: End-to-End Editing and Debugging Tools for Web Developers. This up-to-date guide covers all of the essential VS Code components, including the editing features of the workspace, advanced functionality such as code refactoring and key binding, and integration with Grunt, Gulp, npm, and other external tools. Chapter Two, which introduces you to the user interface, and Chapter Nine, on working with extensions, should be of particular interest.

2. Build a Contact Form

If you’re building a web application, it's only a matter of time until you encounter HTML forms. They’re a big part of the web experience, and they can be complicated. For example, you need to make them accessible, and you need to make sure they render well on different browsers and on smaller screens. It can also be a challenge to style them consistently without breaking their usability.

Forms are a critical part of a visitor's journey on your site. Even if your visitor is sold on what you have to offer, a broken or even a badly laid out form could be enough for them to abandon ship. That means you lose the conversion.

Project Idea

Build and style a contact form. Concentrate on the alignment of the form fields, a prominent CTA, and make sure the form previews well across browsers and devices. Try to include various form controls, such as <select> elements and check boxes, while keeping the layout simple and elegant.

You might also like to upload your finished form to CodePen, an online community for testing and showcasing user-created HTML, CSS and JavaScript code snippets. When applying for a job, a well-curated CodePen account could potentially serve as a mini portfolio.

Further Reading

Form Design Patterns offers ten recipes for different kinds of forms — registration forms, booking forms, login forms and more. Learn from the pros and find out how to make your forms both engaging and accessible to all. If you're looking for a quick start with this project, I recommend reading the first part of the first chapter, which covers things such as labels, placeholders, styling and input types.

3. Become Acquainted with Client-side Validation

You won't get far as a web developer in 2020 without knowing JavaScript: it's one of the most popular programming languages in the world and, frankly, a must-have skill.

On the front end it's used for a wide variety of tasks, such as making interactive elements for web pages (sliders, maps, charts, menus, chat widgets, etc.) and generally enhancing the user experience. One rather nice feature of the language is that it can manipulate the DOM, so as to offer users instant feedback on an action without needing to reload the page. This makes web pages feel snappier and more responsive.

Project Idea

In this project, you should take the contact form you built in step two and augment it with client-side validation.

Using the correct input types will get you a lot of the way there, but also try to add some custom validation. You should display error messages in an intuitive way and avoid using alert boxes. And if all that sounds a bit too easy, why not add a field that asks a question to ensure that the user isn’t a bot?
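One possible sketch of the custom-validation part uses the browser's built-in Constraint Validation API, so no library is required (the field ID is illustrative):

const form = document.querySelector('form');
const email = document.querySelector('#email');

email.addEventListener('input', () => {
  if (email.validity.typeMismatch) {
    // replace the browser's generic message with our own
    email.setCustomValidity('Please enter a valid email address.');
  } else {
    email.setCustomValidity(''); // an empty string marks the field as valid
  }
});

form.addEventListener('submit', (event) => {
  // block submission and surface the messages if anything is invalid
  if (!form.checkValidity()) {
    event.preventDefault();
    form.reportValidity();
  }
});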

Further Reading

If you’re new to the world of JavaScript programming, or would just like a quick and easy desk reference, then check out JavaScript: Novice to Ninja, 2nd Edition. This step-by-step introduction to coding in JavaScript will show you how to solve real-world problems and develop richer web applications. You'll want to pay particular attention to Chapter Eight, which shows you how to use JavaScript to interact with an HTML form.

4. Make a Currency Converter Using the Fixer API

In the past, JavaScript had a reputation as being a toy language — good for menus and animations, but not a lot else. And while that might have been the case in the early 2000s, in 2020 nothing could be further from the truth.

Nowadays, entire apps are written in JavaScript. New tools and frameworks are introduced and developed at a rapid rate, and the language itself has undergone big changes since the arrival of ES2015 (aka ES6). It's important you stay abreast of these changes, and have a good idea of what JavaScript is capable of and where it fits into the web development picture as a whole.

Project Idea

Make an app that allows users to convert one currency to another. Users should enter an amount, select the source currency, select the desired currency, and then the app should fetch the exchange rate from the Fixer API. The user interface should be updated dynamically without any kind of page refresh.

Use modern JavaScript features where you can. Complete the project using either vanilla JavaScript, or a library like React to handle the UI updates.
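As a sketch of the fetch step, assuming you've signed up for a Fixer access key and that the /latest endpoint and response shape match Fixer's current docs (verify both before relying on this):

const ACCESS_KEY = 'your-api-key'; // assumption: obtained from fixer.io

async function getRate(base, target) {
  const url = `http://data.fixer.io/api/latest?access_key=${ACCESS_KEY}&base=${base}&symbols=${target}`;
  const response = await fetch(url);
  const data = await response.json();
  return data.rates[target]; // e.g. data.rates.USD
}

// usage: convert 100 EUR to USD and update the UI from the result
getRate('EUR', 'USD')
  .then(rate => console.log(`100 EUR = ${(100 * rate).toFixed(2)} USD`));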

Further Reading

JavaScript: Best Practice is a collection of articles which take a look at modern JavaScript and how the language has evolved to allow you to write clean, maintainable, and reusable code. I would recommend reading the “Anatomy of a Modern JavaScript Application”, “Using Modern JavaScript Syntax” and “Flow Control in Modern JavaScript”.

5. Design Your Own Portfolio Website

In your career as a web developer, you’ll likely find yourself working alongside a designer on the same project. And while design and development can be considered separate disciplines, having a firm grasp of the design process will ease that relationship and stand you in good stead with your colleagues.

Or perhaps you want to go it alone as a freelancer, taking projects from design to deployment. In this case, a generic-looking website won't cut it. You'll need to offer the client an eye-catching but also highly functional design that helps them achieve their business goals.

Project Idea

Design your own portfolio website — your place on the internet to present yourself and showcase your skills. Spend some time researching what makes a good portfolio design, then mock up a design of your own either with pencil and paper, or using a wireframing tool of your choice.

Design-wise, pay attention to the layout, the colors you’ll use, and the typography. Content-wise, consider which pages you’ll need (hint: you could include a contact form) and how to present both yourself and your work. There are lots of sites around the Internet that will give you tips on what to include.

Further Reading

Ok, I get it. Design is hard. But it doesn't need to be …

The Principles of Beautiful Web Design is a fantastic book if you’re struggling with the design process. It will walk you through an example design, from concept to completion, teaching you a host of practical skills along the way.

Start in Chapter One by reading about what makes good design and take it from there. Personally, I read the book from cover to cover in the course of a week, but you could also dip into the other chapters and learn about layout, color, texture, typography and imagery at your leisure.

The post 10 Projects to Get You to Your First Dev Job in 2020 appeared first on SitePoint.

How to Build and Structure a Node.js MVC Application

Mar 23, 2020

Description:

Inside the monitor a puppet manipulates on-screen windows and popups

In a non-trivial application, the architecture is as important as the quality of the code itself. We can have well-written pieces of code, but if we don’t have good organization, we’ll have a hard time as the complexity increases. There’s no need to wait until the project is half-way done to start thinking about the architecture; the best time is before starting, using our goals as beacons for our choices.

Node.js doesn’t have a de facto framework with strong opinions on architecture and code organization in the same way that Ruby has the Rails framework, for example. As such, it can be difficult to get started with building full web applications with Node.

In this tutorial, we’re going to build the basic functionality of a note-taking app using the MVC architecture. To accomplish this, we’re going to employ the Hapi.js framework for Node.js and SQLite as a database, using Sequelize.js, plus other small utilities, to speed up our development. We’re going to build the views using Pug, the templating language.

What is MVC?

Model-View-Controller (or MVC) is probably one of the most popular architectures for applications. As with a lot of other cool things in computer history, the MVC model was conceived at PARC for the Smalltalk language as a solution to the problem of organizing applications with graphical user interfaces. It was created for desktop applications, but since then, the idea has been adapted to other mediums including the Web.

We can describe the MVC architecture in simple terms:

Model: the part of our application that will deal with the database or any data-related functionality. View: everything the user will see — basically, the pages that we’re going to send to the client. Controller: the logic of our site, and the glue between models and views. Here we call our models to get the data, then we put that data on our views to be sent to the users.

Our application will allow us to create, view, edit and delete plain-text notes. It won’t have other functionality, but because we’ll have a solid architecture already defined we won’t have a lot of trouble adding things later.

This tutorial assumes you have a recent version of Node installed on your machine. If this isn’t the case, please consult our tutorial on getting up and running with Node.

You can check out the final application in the accompanying GitHub repository, so you get a general overview of the application structure.

Laying out the Foundation

The first step when building any Node.js application is to create a package.json file, which is going to contain all of our dependencies and scripts. Instead of creating this file manually, npm can do the job for us using the init command:

mkdir notes-board
cd notes-board
npm init -y

After the process is complete, we’ll have a package.json file ready to use.

Note: if you’re not familiar with these commands, check out our Beginner’s Guide to npm.

We’re going to proceed to install Hapi.js — the framework of choice for this tutorial. It provides a good balance between simplicity, stability and features that will work well for our use case (although there are other options that would also work just fine).

npm install @hapi/hapi@18.4.0

This command will download Hapi.js and add it to our package.json file as a dependency.

Note: We’ve specified v18.4.0 of Hapi.js, as it’s compatible with Node versions 8, 10, and 12. If you’re using Node 12, you can opt to install the latest version (Hapi v19.1.0).

Now we can create our entry file — the web server that will start everything. Go ahead and create a server.js file in your application directory and add the following code to it:

"use strict"; const Hapi = require("@hapi/hapi"); const Settings = require("./settings"); const init = async () => { const server = new Hapi.Server({ port: Settings.port }); server.route({ method: "GET", path: "/", handler: (request, h) => { return "Hello, world!"; } }); await server.start(); console.log(`Server running at: ${server.info.uri}`); }; process.on("unhandledRejection", err => { console.log(err); process.exit(1); }); init();

This is going to be the foundation of our application.

First, we indicate that we’re going to use strict mode, which is a common practice when using the Hapi.js framework.

Next, we include our dependencies and instantiate a new server object where we set the connection port to 3000 (the port can be any number above 1023 and below 65535).

Our first route for our server will work as a test to see if everything is working, so a “Hello, world!” message is enough for us. In each route, we have to define the HTTP method and path (URL) that it will respond to, and a handler, which is a function that will process the HTTP request. The handler function can take two arguments: request and h. The first one contains information about the HTTP call, and the second will provide us with methods to handle our response to that call.

Finally, we start our server with the server.start() method.

Storing Our Settings

It’s good practice to store our configuration variables in a dedicated file. This file exports a JSON object containing our data, where each key is assigned from an environment variable — but without forgetting a fallback value.

In this file, we can also have different settings depending on our environment (such as development or production). For example, we can have an in-memory instance of SQLite for development purposes, but a real SQLite database file on production.

Selecting the settings depending on the current environment is quite simple. Since we also have an env variable in our file which will contain either development or production, we can do something like the following to get the database settings:

const dbSettings = Settings[Settings.env].db;

So dbSettings will contain the setting of an in-memory database when the env variable is development, or will contain the path of a database file when the env variable is production.

Also, we can add support for a .env file, where we can store our environment variables locally for development purposes. This is accomplished using a package like dotenv for Node.js, which will read a .env file from the root of our project and automatically add the found values to the environment.

Note: if you decide to also use a .env file, make sure you install the package with npm install dotenv and add it to .gitignore so you don’t publish any sensitive information.

Our settings.js file will look like this:

// This will load our .env file and add the values to process.env,
// IMPORTANT: Omit this line if you don't want to use this functionality
require("dotenv").config({ silent: true });

module.exports = {
  port: process.env.PORT || 3000,
  env: process.env.NODE_ENV || "development",

  // Environment-dependent settings
  development: {
    db: {
      dialect: "sqlite",
      storage: ":memory:"
    }
  },
  production: {
    db: {
      dialect: "sqlite",
      storage: "db/database.sqlite"
    }
  }
};

Now we can start our application by executing the following command and navigating to http://localhost:3000 in our web browser:

node server.js

Note: this project was tested on Node v12.15.0. If you get any errors, ensure you have an updated installation.

Defining the Routes

The definition of routes gives us an overview of the functionality supported by our application. To create our additional routes, we just have to replicate the structure of the route that we already have in our server.js file, changing the content of each one.

Let’s start by creating a new directory called lib in our project. Here we’re going to include all the JS components.

Inside lib, let’s create a routes.js file and add the following content:

"use strict"; const Path = require("path"); module.exports = [ // we’re going to define our routes here ];

In this file, we’ll export an array of objects that contain each route of our application. To define the first route, add the following object to the array:

{ method: "GET", path: "/", handler: (request, h) => { return "All the notes will appear here"; }, config: { description: "Gets all the notes available" } },

Our first route is for the home page (/), and since it will only return information, we assign it a GET method. For now, it will only give us the message “All the notes will appear here”, which we’re going to change later for a controller function. The description field in the config section is only for documentation purposes.

Then, we create the four routes for our notes under the /note/ path. Since we’re building a CRUD application, we’ll need one route for each action with the corresponding HTTP methods.

Add the following definitions next to the previous route:

{ method: "POST", path: "/note", handler: (request, h) => { return "New note"; }, config: { description: "Adds a new note" } }, { method: "GET", path: "/note/{slug}", handler: (request, h) => { return "This is a note"; }, config: { description: "Gets the content of a note" } }, { method: "PUT", path: "/note/{slug}", handler: (request, h) => { return "Edit a note"; }, config: { description: "Updates the selected note" } }, { method: "GET", path: "/note/{slug}/delete", handler: (request, h) => { return "This note no longer exists"; }, config: { description: "Deletes the selected note" } }

We’ve done the same as in the previous route definition, but this time we’ve changed the method to match the action we want to execute.

The only exception is the delete route. In this case, we’re going to define it with the GET method rather than DELETE and add an extra /delete in the path. This way, we can call the delete action just by visiting the corresponding URL.

Note: if you plan to implement a strict REST interface, then you would have to use the DELETE method and remove the /delete part of the path.

We can name parameters in the path by surrounding the word in curly braces. Since we’re going to identify notes by a slug, we add {slug} to each path, with the exception of the POST route; we don’t need it there because we’re not going to interact with a specific note, but to create one.
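Inside a handler, Hapi exposes the value of a named parameter on request.params. For example, the note route could read the slug like this (a sketch of what our controllers will do later):

{
  method: "GET",
  path: "/note/{slug}",
  handler: (request, h) => {
    const { slug } = request.params;
    return `This is the note with slug: ${slug}`;
  },
  config: {
    description: "Gets the content of a note"
  }
},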

You can read more about Hapi.js routes on the official documentation.

Now, we have to add our new routes to the server.js file. Let’s import the routes file at the top of the file:

const Routes = require("./lib/routes");

Then let’s replace our current test route with the following:

server.route(Routes);

The post How to Build and Structure a Node.js MVC Application appeared first on SitePoint.

Need to Edutain Your Kids While Distancing? Here Are Our Suggestions

Mar 23, 2020

Description:

We've found ourselves with more kids at home, and not enough productive and educational activities to do while schools ramp up their online learning capabilities. So we took to our library to see what we could dig up to keep them engaged with learning, and us able to focus on doing our own remote work.

Are you also looking for a way to educate and entertain your kids while they're stuck at home?

We've put together a list of fun books that'll keep your kids occupied and teach them valuable skills at the same time!

Starting with…

The post Need to Edutain Your Kids While Distancing? Here Are Our Suggestions appeared first on SitePoint.

20 Essential React Tools for 2020

Mar 18, 2020

Description:

20 Essential React Tools for 2020

The React ecosystem has evolved into a growing list of dev tools and libraries. The plethora of tools is a true testament to its popularity. For devs, it can be a dizzying exercise to navigate this maze that changes at breakneck speed. To help you navigate, below is a list of essential React tools for 2020.

Hooks

website: reactjs.org/docs/hooks-intro.html
repository: github.com/facebook/react
GitHub stars: 140,000+
developer: Facebook
version: 16.8
contributors: 1,300+

Hooks are a new addition to React as of version 16.8. They unlock useful features in classless components. With Hooks, React no longer needs lifecycle methods such as componentDidMount to manage state. This encourages separation of concerns because components are not managing their own state. Putting a lot of state management inside class components blows up complexity. This makes stateful components harder to maintain. Hooks attempt to alleviate this problem by providing key features.

The following basic Hooks are available:

useState: for mutating state in a classless component without lifecycle methods
useEffect: for executing functions post-render, useful for firing Ajax requests
useContext: for switching component context data, even outside component props
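As a quick illustration, a counter component might use the first two Hooks like this (a minimal sketch):

import React, { useState, useEffect } from 'react';

const Counter = () => {
  const [count, setCount] = useState(0); // state without a class

  useEffect(() => {
    document.title = `Clicked ${count} times`; // side effect, runs after each render
  });

  return <button onClick={() => setCount(count + 1)}>{count}</button>;
};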

Pros:

mitigates state management complexity
supports functional components
encourages separation of concerns

Cons:

context data switching can exponentiate cognitive load

Functional Components

website: reactjs.org
repository: github.com/facebook/react
GitHub stars: 140,000+
developer: Facebook
current version: 16.12
contributors: 1,300+

Functional components are a declarative way to create JSX markup without class components. They embrace the functional paradigm because they don’t manage state in lifecycle methods. This emphasizes focus on the UI markup without much logic. Because the component relies on props it becomes easier to test. Props have a one-to-one relationship with the rendered output.

This is what a functional component looks like in React:

const SimpleComponent = ({ isInit, data }) =>
  <>
    {useEffect(() => { !isInit && loadAjaxData(); })}
    {data}
  </>;

Pros:

focuses on the UI only
testable component
less cognitive load when thinking about the component

Cons:

no lifecycle methods

Create React App

website: create-react-app.dev
repository: github.com/facebook/create-react-app
GitHub stars: 76,000+
developer: Facebook
current version: 3.4
contributors: 700+

The quintessential tool to fire up a new React project. This manages all React dependencies via a single npm package. No more dealing with Babel, webpack, and whatnot. The entire dependency tool chain gets upgraded with react-scripts in package.json. There’s a way to integrate Create React App with any server-side rendering tool out there. The tool outputs index.html and static assets in the public folder. This public folder is the touch point where static assets are ready for integration.

It’s easy to get started:

npx create-react-app my-killer-app

And it's even easier to upgrade later:

npm i react-scripts@latest

Pros:

easy to get started
easy to upgrade
single meta-dependency

Cons:

no server-side rendering, but allows for integration

Proxy Server

website: create-react-app.dev/docs/proxying-api-requests-in-development
repository: github.com/facebook/create-react-app
GitHub stars: 76,000+
developer: Facebook
version: 0.2.3
contributors: 700+

From version react-scripts@0.2.3 onwards, it’s possible to proxy API requests. This allows the back-end API and the local Create React App project to co-exist. From the client side, making a request to /my-killer-api/get-data routes the request through the proxy server. This seamless integration works both in local dev and post-build. If local dev runs on localhost:3000, then API requests go through the proxy server. Once you deploy static assets, they go through whatever back end hosts these assets.

To set a proxy server in package.json:

"proxy": "http://localhost/my-killer-api-base-url"

If the back-end API is hosted with a relative path, set the home page:

"homepage": "/relative-path"

Pros:

seamless integration with back-end API
eliminates CORS issues
easy set up

Con:

might need a server-side proxy layer with multiple APIs

PropTypes

website: npmjs.com/package/prop-types
repository: github.com/facebook/prop-types
GitHub stars: 3,000+
developer: Facebook
current version: 15.7.2
contributors: 35+

Declares the type intended for the React component and documents its intent. This shows a warning in local dev if the types don’t match. It supports all JavaScript primitives such as bool, number, and string. It can document which props are required via isRequired.

For example:

import PropTypes from 'prop-types';

MyComponent.propTypes = {
  boolProperty: PropTypes.bool,
  numberProperty: PropTypes.number,
  requiredProperty: PropTypes.string.isRequired
};

Pros:

documents component’s intent
shows warnings in local dev
supports all JavaScript primitives

Cons:

no compile type checking

TypeScript

website: typescriptlang.org
repository: github.com/microsoft/TypeScript
GitHub stars: 58,000+
developer: Microsoft
current version: 3.7.5
contributors: 400+

JavaScript that scales for React projects with compile type checking. This supports all React libraries and tools with type declarations. It’s a superset of JavaScript, so it’s possible to opt out of the type checker. This both documents intent and fails the build when it doesn’t match. In Create React App projects, turn it on by passing in --template typescript. TypeScript support is available starting from version react-script@2.1.0.
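For example, scaffolding a new TypeScript-enabled project looks like this:

npx create-react-app my-killer-app --template typescript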

To declare a prop type:

interface MyComponentProps {
  boolProp?: boolean; // optional
  numberProp?: number; // optional
  requiredProp: string;
}

Pros:

compile type checking
supports all React tools and libraries, including Create React App
nice way to up your JavaScript skills

Cons:

has a learning curve, but opt out is possible

Redux

website: redux.js.org
repository: github.com/reduxjs/redux
GitHub stars: 52,000+
developer: The Moon
current version: 4.0.5
contributors: 700+

Predictable state management container for JavaScript apps. This tool comes with a store that manages state data. State mutation is only possible via a dispatch message. The message object contains a type that signals to the reducer which mutation to fire. The recommendation is to keep everything in the app in a single store. Redux supports multiple reducers in a single store. Reducers have a one-to-one relationship between input parameters and output state. This makes reducers pure functions.

A typical reducer that mutates state might look like this:

const simpleReducer = (state = {}, action) => {
  switch (action.type) {
    case 'SIMPLE_UPDATE_DATA':
      return { ...state, data: action.payload };
    default:
      return state;
  }
};
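Wiring the reducer into a store and firing the mutation might look like this (a minimal sketch using createStore and dispatch):

import { createStore } from 'redux';

const store = createStore(simpleReducer);
store.dispatch({ type: 'SIMPLE_UPDATE_DATA', payload: [1, 2, 3] });
console.log(store.getState()); // { data: [1, 2, 3] }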

Pros:

predictable state management
multiple reducers in a single store
reducers are pure functions

Cons:

set up from scratch can be a bit painful

React-Redux

website: react-redux.js.org
repository: github.com/reduxjs/react-redux
GitHub stars: 18,500+
developer: The Moon
current version: 7.1.3
contributors: 190+

Official React bindings for Redux. It comes in two main modules: Provider and connect. The Provider is a React component with a store prop. This prop is how a single store hooks up to the JSX markup. The connect function takes in two parameters: mapStateToProps and mapDispatchToProps. This is where state management from Redux ties into component props. As state mutates, or dispatches fire, bindings take care of setting state in React.

This is how a connect might look:

import { bindActionCreators } from 'redux';
import { connect } from 'react-redux';

const mapStateToProps = (state) => state.simple;
const mapDispatchToProps = (dispatch) =>
  // the action creator needs a key to live under
  bindActionCreators({ simpleUpdateData: () => ({ type: 'SIMPLE_UPDATE_DATA' }) }, dispatch);

connect(mapStateToProps, mapDispatchToProps)(SimpleComponent);
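For completeness, the Provider side might look something like this (a sketch that assumes a store created as in the Redux section above):

import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';

ReactDOM.render(
  <Provider store={store}>
    <SimpleComponent />
  </Provider>,
  document.getElementById('root')
);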

Pros:

official React bindings for Redux
binds with JSX markup
connects components to a single store

Cons:

learning curve is somewhat steep

Redux-Thunk

website: npmjs.com/package/redux-thunk
repository: github.com/reduxjs/redux-thunk
GitHub stars: 14,000+
developer: The Moon
current version: 2.3.0
contributors: 40+

Thunk middleware for Redux to make asynchronous API calls. It defers execution behind a thunk to unlock asynchrony. A thunk is a function that defers evaluation: for example, () => 1 + 1 is a thunk, because the sum isn’t computed until the function is called. Thunks also come with niceties, like access to store state and dispatch. Optional parameters are supported in the thunk as well.

For example:

const loadData = () => async (dispatch, getState, optionalAsyncTool) => {
  const state = getState();
  const response = await optionalAsyncTool.get('/url/' + state.data);
  dispatch({ type: 'SIMPLE_LOAD_DATA', payload: response.data });
};

Pros:

quintessential tool for asynchrony
access to state and dispatch
configurable with optional parameter

Cons:

at first, usefulness is not super clear

Redux-Logger

website: npmjs.com/package/redux-logger
repository: github.com/LogRocket/redux-logger
GitHub stars: 5,000+
developer: Log Rocket
current version: 2.0.4
contributors: 40+

Logger for Redux that captures any dispatches going through the store. Each dispatch shows in the dev console in a log message. It allows drilling into the previous and next state. The action in the dispatch is also available for payload inspection. This logger is useful in local dev and can be ripped out post-build.

The following is a potential setup in Redux middleware:

import { createStore, applyMiddleware } from 'redux';

let middleware = [];

if (process.env.NODE_ENV === 'development') {
  // rip out post-build
  const { logger } = require('redux-logger');
  middleware.push(logger);
}

// rootReducer is assumed to be defined or imported elsewhere
export default () => createStore(rootReducer, applyMiddleware(...middleware));

Pros:

good Redux insight
captures all dispatches in the store
can run in local dev only

Cons:

tricky to filter out unwanted messages

The post 20 Essential React Tools for 2020 appeared first on SitePoint.

15 Easy-to-Use Tools & Services to Improve Your Workflow

Mar 18, 2020

Description:

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

There's no lack of tools and services out there that you could put to use to improve your products or your business, increase your productivity, or save you time and money.

It's quite the opposite in fact. There are so many that finding one or more that will satisfy your most urgent needs can be a problem in itself. Some of the tools you use or services you subscribe to today may have served you well, but there's no guarantee they will continue to do so.

The very nature of web design demands that you keep up with the latest trends, staying abreast of the competition or a step ahead of it. That often requires finding new tools or services that will enable you to do so.

We hope this list of top tools and services for 2020 will serve to help you stay on top of your game and increase your productivity as well.

The post 15 Easy-to-Use Tools & Services to Improve Your Workflow appeared first on SitePoint.

The Rise of the No-Code Movement

Mar 17, 2020

Description:

In the internet age, technological innovation has largely been driven by a community of software engineers, web developers, and hardware hackers. Until recently, acclaimed startup accelerator Y Combinator only accepted founding teams with technical backgrounds. Furthermore, the most valuable companies of today are tech-enabled, so there’s been a focus on tech talent for future-proofing economies. Coding education provider Lambda School has raised close to $50M to close this skills gap and there are many other courses teaching the next generation to code.

But what if coding was no longer vital to success in tech? Enter the world of no-code development platforms (NCDPs).

Over the past couple of years, the rise of the no-code movement has started to change the landscape of tech. Ironically, Lambda School itself is a product of the no-code movement, building its MVP (which has served 3,000 students) using a combination of tools such as Typeform, Airtable, and Retool. The no-code movement has also been called low code or visual development. The makers of no-code platforms are still discussing the best label for the movement, but for now I will stick with ‘no-code’.

John Everhard summarizes no-code software on Forbes as a visual integrated development environment (IDE). “Within this environment, users (aka the citizen developer) drag-and-drop application components, connect them together and create a mobile or web app. Using this software, staff can design and build powerful applications that can scale for any organization—without writing any code.” All in all, users don’t need to understand code to be able to create an app and therein lies its power.

Benefits of No-Code

When we were in the thick of product development for CloudPeeps, I remember how frustrating it was for our developers (and me!) when I needed to make any updates to our marketing pages or funnels. While I can happily edit HTML, having a custom-built platform meant deploying changes was limited to the devs. Prioritizing feature development alongside bug fixes and tweaks was a challenge. Progress was slow. We tried numerous A/B testing tools and moving some pages to popular CMSs, but the user experience started to suffer. While tools like Webflow existed then, they weren’t touted as mainstream solutions so we lacked the awareness to implement them.

Since then, the rise of no-code tools has changed the game forever – not only for tasks like marketing pages but also for full-stack apps, which people can now build end-to-end. In 2014, if you wanted to build a marketplace, you had to develop it from scratch. After that, offerings like Sharetribe came along with standard marketplace software in a box with an ongoing price tag. Now, you can build whatever marketplace set-up you like using no-code tools.

Evidently, saving time and money are two crucial benefits of no-code. Andrew Wilkinson, co-founder of product studio and indie investment fund Tiny, recently tweeted: “I used to spend $25k-$100k building an app over 3-6 months. It was frustrating, expensive, and slow. Then I started using NoCode tools like Webflow, Bubble, Zapier, and Airtable. Suddenly I was able to build my app idea in days instead of months, at a fraction of the cost. Craziest of all, I could tweak and maintain it myself instead of hiring expensive devs.” He likens ‘native-code’ to being a bulldozer: great to use when you need to build something sizeable and commercial grade. He compares ‘no-code’ to a pickup truck: powerful enough to help you get most simple and intermediate projects done.

Why Now?

The tech industry has been increasingly criticized over the past decade for its lack of diversity and inclusion. Silicon Valley has bred a generation of founders who look like each other, talk like each other, and solve similar problems – and as these people gather further wealth through exits and investments, the cycle of sameness repeats. Teaching people to code and funding different founders is creating slow change, but the no-code movement has the potential to exponentially change the face of tech. If you no longer need access to engineers or capital to launch a product, anyone can have a crack at their idea. Perhaps it is the demand for the democratization of tech that has catapulted the no-code movement into the now.

The post The Rise of the No-Code Movement appeared first on SitePoint.

Managing Dates and Times Using Moment.js

Mar 16, 2020

Description:

Working with dates and times has always been a bit cumbersome. I've always thought that a JavaScript library for manipulating dates would be quite helpful. It was only recently that I was introduced to Moment.js, the awesome JavaScript library for validating, parsing, and manipulating dates and times.

Getting Started with Moment.js

Moment.js is freely available for download from the project's home page. Moment.js can be run from the browser as well as from within a Node.js application. In order to use it with Node, install the module using the following command.

[code]
npm install moment
[/code]

Then, simply require() and use it in your application as shown below.

[js]
var moment = require('moment');

moment().format();
[/js]

In order to run Moment from the browser, download the script and include it using a script tag, as shown in the following example. Moment.js creates a global moment object which can be used to access all the date and time parsing and manipulation functionality.

[html]
<script src="moment.min.js"></script>
<script>
  moment().format();
</script>
[/html]

Date Formatting

In the past, I recall converting date strings into Date objects, grabbing individual pieces of data, and then performing string concatenations. Moment.js has simplified the process of date conversion to any particular format. Date format conversion with Moment is simple, as shown in the following example.

[js]
moment().format('YYYY MM DD');
[/js]

moment() gives the current date and time, while format() converts the current date and time to the specified format. This example formats a date as a four digit year, followed by a space, followed by a two digit month, another space, and a two digit date. You can see this code in action by checking out this demo.

Date Validation

Another annoying task that Moment.js has simplified is date validation. In order to perform validation, simply pass a date string to the moment object and call the isValid() method. This method returns true if the date is valid, and false otherwise. An example of this is shown below, along with this accompanying demo.

[js]
var dateEntered = $('#txtEnteredDate').val();

if (!moment(dateEntered,'MM-DD-YYYY').isValid()) {
console.log('Invalid Date');
} else {
console.log('Valid Date');
}
[/js]

There are a number of other helpful flags in the object returned by moment():

overflow - This is set when an overflow occurs. An example would be the 13th month or 32nd day.
invalidMonth - Set when the month is invalid, like Jannnuaarry.
empty - Set when the entered date contains nothing parsable.
nullInput - Set when the entered date is null.

Manipulating Dates

There are a number of options for manipulating the moment object. For example, you can add or subtract days, months, years, etc. This is achieved via the add() and subtract() methods. The following example shows how seven days, months, or weeks are added to the current date.

[js]
moment().add('days', 7); // adds 7 days to current date
moment().add('months', 7); // adds 7 months to current date
moment().add('years', 7); // adds 7 years to current date
[/js]

Similarly, the subtract() method is shown below.

[js]
moment().subtract('days', 7); // subtracts 7 days from current date
moment().subtract('months', 7); // subtracts 7 months from current date
moment().subtract('years', 7); // subtracts 7 years from current date
[/js]

Time From Now

Another common task is determining how much time exists between two dates. For calculating time from the current date, Moment.js uses a method named fromNow(). Here is a sample which checks how much time exists from the current time:

[js]
moment().fromNow();
[/js]

This code sample displays "a few seconds ago." If we supply a date to the moment object, it will display the time range from now according to the difference. For example, the following code displays "7 days ago."
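A sketch of what such a call might look like, reusing subtract() from above:

[js]
moment().subtract('days', 7).fromNow(); // "7 days ago"
[/js]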

The post Managing Dates and Times Using Moment.js appeared first on SitePoint.

How to Migrate a React App to TypeScript

Mar 13, 2020

Description:

When I first started learning TypeScript, one of the suggestions I often heard was, "convert one of your existing projects! It's the best way to learn!" Soon after, a friend from Twitter offered to do just that — show me how to migrate a React app to TypeScript.

The purpose of this article is to be that friend for you and help you migrate your own project to TypeScript. For context, I will be using pieces from a personal project which I migrated while going through this process myself.

The Plan

To make this process feel less daunting, we'll break this down into steps so that you can execute the migration in individual chunks. I always find this helpful when taking on a large task. Here are all the steps we'll take to migrate our project:

1. Add TypeScript
2. Add tsconfig.json
3. Start simple
4. Convert all files
5. Increase strictness
6. Clean it up
7. Celebrate

NOTE: the most important step in this whole process is number 7 (Celebrate). Although we can only get there by working through the others in sequential order.

1. Add TypeScript to the Project

First, we need to add TypeScript to our project. Assuming your React project was bootstrapped with create-react-app, we can follow the docs and run:

npm install --save typescript @types/node @types/react @types/react-dom @types/jest

or if you're using yarn:

yarn add typescript @types/node @types/react @types/react-dom @types/jest

Notice we haven't converted anything to TypeScript yet. If we run the command to start the project locally (yarn start in my case), nothing should be different. If that's the case, then great! We're ready for the next step.

2. Add the tsconfig.json

Before we can take advantage of TypeScript, we need to configure this via the tsconfig.json. The simplest way for us to get started is to scaffold one using this command:

npx tsc --init

This gets us some basics.

We have not yet interacted with TypeScript itself. We have only taken the necessary actions to get things ready. Our next step is to migrate a file to TypeScript. With this, we can complete this step and move on to the next.

3. Start with a Simple Component

The beauty of TypeScript is that you can incrementally adopt it. We can start with a simple component for our first piece of this migration. For my project, I'm going to start with an SVG component that looks like this:

The post How to Migrate a React App to TypeScript appeared first on SitePoint.

Quick Tip: Configuring NGINX and SSL with Node.js

Mar 12, 2020

Description:

Quick Tip: Configuring NGINX and SSL with Node.js

NGINX is a high-performance HTTP server as well as a reverse proxy. Unlike traditional servers, NGINX follows an event-driven, asynchronous architecture. As a result, the memory footprint is low and performance is high. If you’re running a Node.js-based web app, you should seriously consider using NGINX as a reverse proxy.

NGINX can be very efficient in serving static assets. For all other requests, it will talk to your Node.js back end and send the response to the client. In this tutorial, we’ll discuss how to configure NGINX to work with Node.js. We’ll also see how to set up SSL in the NGINX server.

Note: Node also has a built-in HTTPS module and can be configured to read the necessary certificate files without the need for a reverse proxy. You can find out more about this in our article How to Use SSL/TLS with Node.js.

Installing NGINX

Assuming you already have Node.js installed on your machine (if not, check here), let’s see how to install NGINX.

Installation on Linux

If you’re running Ubuntu, you can use the following command to install NGINX:

sudo apt-get update
sudo apt-get install nginx

If you’re running a Linux distro other than Ubuntu, check out the NGINX installation docs for more information.

NGINX will start automatically once it’s installed.

Installation on macOS

If you’re on macOS, you can use Homebrew to install NGINX easily. The steps are as follows:

Homebrew needs the directory /usr/local to be chown’d to your username. So, run the following command in terminal first:

sudo chown -R 'username here' /usr/local

Now the following two commands will install NGINX on your system:

brew link pcre
brew install nginx

Once the installation is complete, you can type the following command to start NGINX:

sudo nginx

The NGINX config file can be found here: /usr/local/etc/nginx/nginx.conf.

Installation on Windows

For Windows, head over to the NGINX downloads page and get the zip. The next step is unzipping the archive and moving to the directory in the command prompt as follows:

unzip nginx-1.3.13.zip
cd nginx-1.3.13
start nginx

As you can see, the command start nginx will start NGINX.

Now that the installation is done, let’s see how you can configure a simple server.

Setting Up a Node.js Server

First, let’s create a simple Node.js server. We’ll start by initiating a project and installing the Express package:

mkdir node-demo && cd node-demo
npm init -y
npm i express

Create a file called server.js, with the following contents:

const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

You can start the server by running node server.js.
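From here, NGINX can be put in front of the app as a reverse proxy. A minimal server block might look something like the following sketch (not the tutorial's exact configuration; adjust server_name and the port to your setup):

server {
  listen 80;
  server_name example.com;

  location / {
    # forward everything to the Node.js app started above
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
  }
}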

The post Quick Tip: Configuring NGINX and SSL with Node.js appeared first on SitePoint.

10 Git Techniques You Need to Know Before You Join a Team

Mar 11, 2020

Description:

Have you been using Git for some time but never in a team environment? Are you familiar with the basics of Git but unsure how large teams use Git at work?

In this post, I’ll talk about the basic Git techniques that you must be familiar with before you join a team. I’ve listed them in an order that you’d logically follow to contribute to a repository, as the importance of each step is paramount. Let’s now jump into the list.

1. Cloning: Getting Started in a Team

If you’ve used Git for personal projects, you may only have initialized a project from scratch and added to it over time. When you’re working on an existing codebase, the first step is to clone the codebase into your local system. This enables you to work on your copy of the repository without any interference from other changes.

To clone a repository, run the git clone command, followed by the path to the repository:

git clone /path/to/repo

If your source doesn’t reside in the same system, you can SSH to a remote system and clone too:

git clone username@remote_system_ip:/path/to/repo/on/remote

If you’re cloning from a source on the Internet, you can simply add the URL:

git clone https://github.com/sdaityari/my_git_project.git

Whenever you’re cloning a repository, you have the choice of multiple protocols to connect to the source. In the GitHub example above, I’ve used the https protocol.

2. Managing Remotes in Git

Once you’ve cloned your repository, it still maintains a pointer to the source. This pointer is an example of a remote in Git. A remote is a pointer to another copy of the same repository. When you clone a repository, a pointer origin is automatically created which points to the source.

You can check a list of remotes in a repository by running the following command:

git remote -v

To add a remote, you can use the git remote add command:

git remote add remote_name remote_address

You can remove a remote using the git remote remove command:

git remote remove remote_name

If you’d like to change the address of a remote, you can use the set-url command:

git remote set-url remote_name new_remote_address

3. Branching in Git

The biggest advantage of Git over other version control systems is the power of its branches. Before I jump into the essentials of branching, you may be wondering what a branch is. A branch is a pointer to a commit in your repository, which in turn points to its predecessor. Therefore, a branch represents a list of commits in chronological order. When you create a branch, you effectively create only a new pointer to a commit. However, in essence, it represents a new, independent path of development.

If you’ve been working on your own project, you may never have consciously used branches. By default, Git uses the master branch for development. Any new commits are added to this branch.

Branching is necessary for Git to bifurcate lines of work in a project. At a single time, there may be many developers who are working on a variety of different problems. Ideally, these problems are worked on in different branches to ensure logical separation of new code until code review and merge.

To check a list of branches and the current active branch, run the following command:

git branch

To create a new branch, run the following command:

git branch new_branch

Even though Git creates a new branch, notice that your active branch is still the old one. To start development in a new branch, run the following:

git checkout new_branch

To create a new branch and change the active branch, run the following command:

git checkout -b new_branch

To rename the current branch, run the following command:

git branch -m new_renamed_branch

Use the -D option to remove a branch:

git branch -D new_renamed_branch

Here’s a detailed guide on branching in Git.

4. Update your Local Repository: Merging

While we’ve checked the basics of branching in Git, the next logical step is to merge a branch into your base branch when you’ve finished working on a problem. To merge a branch, run the following command:

git checkout base_branch
git merge new_branch

While it may sound like an easy process, merging is potentially the most time-consuming process in Git, as it can give rise to conflicts.

5. Handle Conflicts

Imagine that you’re working on a file in a new branch. After you commit the changes, you request Git to merge your new branch with your base branch. However, the same part of the same file in the base branch has been updated since you created the new branch. How does Git decide which changes to keep and which changes to discard?

Git always tries not to lose any data during a merge. If the changes to the same file were made in different parts of the file, you can get away with keeping both sets of changes. However, if Git is unable to decide which changes to keep, it raises a conflict.

When a conflict has been raised, running git status on your repository shows a list of files that were modified in both branches being merged. If you open any file with a conflict, you’d notice the following set of lines:

<<<<<<< HEAD
...
...
=======
...
...
>>>>>>> new_branch

The part of the file between <<<<<<< HEAD and ======= contains the code that’s present in the base branch. The lines of code between ======= and >>>>>>> new_branch are present in the new_branch branch. The developer who’s merging the code has the responsibility to decide what part of the code (or a mix of both parts) should be included in the merge. Once edited, remove the three sets of marker lines, save the file, and commit the changes.

The post 10 Git Techniques You Need to Know Before You Join a Team appeared first on SitePoint.

Build a Native Desktop GIF Searcher App Using NodeGui

Mar 10, 2020

Description:

Build a Native Desktop GIF Searcher App Using NodeGui

NodeGui is an open-source library for building cross-platform, native desktop apps with Node.js. NodeGui apps can run on macOS, Windows, and Linux. The apps built with NodeGui are written using JavaScript, styled with CSS and rendered as native desktop widgets using the Qt framework.

Some of the features of NodeGui are:

native widgets with built-in support for dark mode
low CPU and memory footprint
styling with CSS including complete support for Flexbox layout
complete Node.js API support and access to all Node.js compatible npm modules
excellent debugging support using Chrome's DevTools
first-class TypeScript support

NodeGui is powered by the Qt framework, which makes it CPU and memory efficient compared with other Chromium-based solutions such as Electron. This means that applications written using NodeGui do not open up a browser instance and render the UI in it. Instead, all the widgets are rendered natively.

This tutorial will demonstrate how to install NodeGui and use it to build a meme searcher that lives in the system tray and communicates with the GIPHY API.

The full source code for this tutorial is available on GitHub.

Installation and Basic Setup

For this tutorial it’s assumed that you have Node.js v12 or greater installed. You can confirm that both Node and npm are available by running:

# This command should print the version of Node.js
node -v

# This command should print the version of npm
npm -v

If you need help with this step, check out our tutorial on installing Node.

Install CMake and Compilation Tools

NodeGui requires CMake and C++ compilation tools for building the native C++ layer of the project. Make sure you install CMake >= 3.1 along with a C++ compiler that supports C++11 and up. The detailed instructions are a bit different depending on your operating system.

macOS

It’s recommended to install CMake using Homebrew. Run the following commands in a terminal after installing Homebrew:

brew install cmake
brew install make

You can confirm the installation by running:

# This command should print the version of CMake, which should be higher than 3.1
cmake --version
make --version

Lastly, you need GCC/Clang to compile C++ code. Verify that you have GCC installed using this command:

gcc --version

If you don’t have GCC installed, make sure you install Command Line Tools for Xcode or XCode Developer tools from Apple's developer page.

Windows

You can install CMake on Windows by downloading the latest release from the CMake download page.

It’s strongly recommended that you use Powershell as the preferred terminal in Windows.

You can confirm the CMake installation by running:

# This command should print the version of CMake, which should be higher than 3.1
cmake --version

Lastly, you need a C++ compiler. One possibility would be to install Visual Studio 2017 or higher. It’s recommended you choose the Desktop development with C++ workload during the installation process.

Linux

We’ll focus on Ubuntu 18.04 for the purposes of this tutorial. It’s recommended to install CMake using the package manager. Run the following commands in a terminal:

sudo apt-get install pkg-config build-essential
sudo apt-get install cmake make

You can confirm the installation by running:

# This command should print the version of CMake, which should be higher than 3.1
cmake --version
make --version

Lastly, you need GCC to compile C++ code. Verify that you have GCC installed using the command:

# gcc version should be >= v7
gcc --version

Hello World

In order to get started with our NodeGui meme app, we’ll clone the starter project.

Note: Running this requires Git and npm.

Open a terminal and run:

git clone https://github.com/nodegui/nodegui-starter memeapp
cd memeapp
npm install
npm start

If everything goes well, you should see a working hello world NodeGui app on the screen.

Hello World NodeGui example

By default, the nodegui-starter project is a TypeScript project. However, in this tutorial we’ll be writing our application in JavaScript. In order to convert our starter to a JS project, we’ll make the following minor changes:

Delete the index.ts file in the src folder.

Create a new file index.js in the src directory with the following contents:

src/index.js

const { QMainWindow, QLabel } = require('@nodegui/nodegui');

const win = new QMainWindow();
win.setWindowTitle('Meme Search');

const label = new QLabel();
label.setText('Hello World');

win.setCentralWidget(label);
win.show();

global.win = win;

As far as development is concerned, a NodeGui application is essentially a Node.js application. All APIs and features found in NodeGui are accessible through the @nodegui/nodegui module, which can be required like any other Node.js module. Additionally, you have access to all Node.js APIs and Node modules. NodeGui uses native components instead of web-based components as building blocks.

In the above example, we’ve imported QMainWindow and QLabel to create a native window that displays the text “Hello World”.

Now run the app again:

npm start

Hello World JavaScript version

Now that we have our basic setup ready, let's start building our meme searcher 🥳.

Note: If something doesn't work while following this tutorial, check your package.json file to ensure that the starter project has pulled in the most up-to-date version of NodeGui.

Displaying an Animated GIF

Since memes are generally animated GIFs, we’ll start by creating a basic window that displays a GIF image from a URL.

To do this, we’ll make use of QMovie along with QLabel. QMovie is not a widget but a container that can play simple animations. We’ll use it in combination with QLabel.

An example usage of QMovie looks like this:

const movie = new QMovie();
movie.setFileName('/absolute/path/to/animated.gif');
movie.start();

const animatedLabel = new QLabel();
animatedLabel.setMovie(movie);

Since we want to load an image from a URL, we can’t use QMovie's setFileName method, which is reserved for local files only. Instead, we’ll download the GIF image as a buffer using axios and then use the QMovie method loadFromData.

So let's start with the axios installation:

npm i axios

Now let's create a function that will take a URL as a parameter and will return a configured QMovie instance for the GIF:

async function getMovie(url) {
  const { data } = await axios.get(url, { responseType: 'arraybuffer' });
  const movie = new QMovie();
  movie.loadFromData(data);
  movie.start();
  return movie;
}

The getMovie function takes in a URL, tells axios to download the GIF as a buffer, and then uses that buffer to create a QMovie instance.

You can think of QMovie as a class that handles the inner logic of playing the GIF animation frame by frame. QMovie is not a widget, so it can't be shown on the screen as it is. Instead, we’ll use a regular QLabel instance and set QMovie to it.

Since getMovie returns a promise, we need to make some changes to the code. After some minor refactoring, we end up with the following.

src/index.js

const { QMainWindow, QMovie, QLabel } = require('@nodegui/nodegui');
const axios = require('axios').default;

async function getMovie(url) {
  const { data } = await axios.get(url, { responseType: 'arraybuffer' });
  const movie = new QMovie();
  movie.loadFromData(data);
  movie.start();
  return movie;
}

const main = async () => {
  const win = new QMainWindow();
  win.setWindowTitle('Meme Search');

  const label = new QLabel();
  const gifMovie = await getMovie(
    'https://upload.wikimedia.org/wikipedia/commons/e/e3/Animhorse.gif'
  );
  label.setMovie(gifMovie);

  win.setCentralWidget(label);
  win.show();
  global.win = win;
};

main().catch(console.error);

The main function is our entry point. Here we create a window and a label. We then instantiate a QMovie instance with the help of our getMovie function, and finally set the QMovie to a QLabel.

Run the app with npm start and you should see something like this:

Basic animation example showing a galloping horse

Fetching GIFs from the GIPHY API

Giphy.com has a public API which anyone can use to build great apps that use animated GIFs. In order to use the GIPHY API, you should register at developers.giphy.com and obtain an API key. You can find further instructions here.

We’ll be using the search endpoint feature for implementing our meme search.

Let’s start by writing a searchGifs function that will take a searchTerms parameter as input and request GIFs using the above endpoint:

const GIPHY_API_KEY = 'Your API key here';

async function searchGifs(searchTerm) {
  const url = 'https://api.giphy.com/v1/gifs/search';
  const res = await axios.get(url, {
    params: {
      api_key: GIPHY_API_KEY,
      limit: 25,
      q: searchTerm,
      lang: 'en',
      offset: 0,
      rating: 'pg-13'
    }
  });
  return res.data.data;
}

The result of the function after execution will look something like this:

[ { "type": "gif", "id": "dzaUX7CAG0Ihi", "url": "https://giphy.com/gifs/hello-hi-dzaUX7CAG0Ihi", "images": { "fixed_width_small": { "height": "54", "size": "53544", "url": "https://media3.giphy.com/media/dzaUX7CAG0Ihi/100w.gif?cid=725ec7e0c00032f700929ce9f09f3f5fe5356af8c874ab12&rid=100w.gif", "width": "100" }, "downsized_large": { "height": "220", "size": "807719", "url": "https://media3.giphy.com/media/dzaUX7CAG0Ihi/giphy.gif?cid=725ec7e0c00032f700929ce9f09f3f5fe5356af8c874ab12&rid=giphy.gif", "width": "410" }, ... }, "slug": "hello-hi-dzaUX7CAG0Ihi", ... "import_datetime": "2016-01-07 15:40:35", "trending_datetime": "1970-01-01 00:00:00" }, { type: "gif", ... }, ... ]

The result is essentially an array of objects that contain information about each GIF. We’re particularly interested in returnValue[i].images.fixed_width_small.url for each image, which contains the URL to the GIF.

Showing a List of GIFs Using the API's Response

In order to show a list of GIFs, we’ll create a getGifViews function that will:

create a QWidget container
create a QMovie widget for each GIF
create a QLabel from each QMovie instance
attach each QLabel as a child of the QWidget container
return the QWidget container

The code looks like this:

async function getGifViews(listOfGifs) {
  const container = new QWidget();
  container.setLayout(new FlexLayout());

  const promises = listOfGifs.map(async gif => {
    const { url, width } = gif.images.fixed_width_small;
    const movie = await getMovie(url);

    const gifView = new QLabel();
    gifView.setMovie(movie);
    gifView.setInlineStyle(`width: ${width}`);
    container.layout.addWidget(gifView);
  });

  await Promise.all(promises);

  container.setInlineStyle(`
    flex-direction: 'row';
    flex-wrap: 'wrap';
    justify-content: 'space-around';
    width: 330px;
    height: 300px;
  `);

  return container;
}

Let’s break this down a bit.

First, we create our container widget. QWidgets are essentially empty widgets that act as containers. They’re similar to <div> elements in the web world.

Next, in order to assign child widgets to the QWidget, we need to give it a layout. A layout dictates how the child widgets should be arranged inside a parent. Here we choose FlexLayout.

Then, we use our getMovie function to create a QMovie instance for each GIF URL. We assign the QMovie instance to a QLabel (named gifView) and give it some basic styling using the setInlineStyle method. Finally, we add the QLabel widget to the container's layout using the layout.addWidget method.

Since this is all happening asynchronously, we wait for everything to resolve using Promise.all, before setting some container styles and returning the container widget.

The post Build a Native Desktop GIF Searcher App Using NodeGui appeared first on SitePoint.

5 Projects to Help You Master Modern CSS

Mar 5, 2020

Description:

5 Projects to Help You Master Modern CSS

Many claim CSS is not a programming language. I agree — it's tougher. A mastery of CSS requires skills in design, determination, inventiveness, experience, as well as coding (especially when using preprocessors such as Sass).

CSS suggests layouts and styles to the browser. A browser can interpret those suggestions whichever way it chooses and, even then, the user or device can ignore or override any properties. Creating high-performance code which works well across all devices and screen resolutions is a challenge that few attempt or successfully complete. However, the rewards can be exhilarating.

Starting with the easiest, the following project suggestions will help you on your journey to CSS mastery using books available at SitePoint Premium.

1. Make a Site Printer-friendly

Visit a site you're working on and attempt to print (or print preview) a page. Are you happy with the results?

HTML pages are a continuous medium which do not necessarily work well on printed media. Inappropriate sections, scaling, text sizes, column dimensions, and missing or cropped content all lead to an inaccessible printing experience that few developers consider.

Fortunately, print CSS can be developed within a few hours. It's generally a matter of resetting styles (black on white), removing unnecessary sections (menus, hero images, forms, social media widgets, etc.), linearizing the layout, and reducing the paper and ink requirements.
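A print stylesheet often starts with a handful of resets along these lines (a minimal sketch; the selectors for menus, heroes, and widgets are hypothetical and will vary per site):

@media print {
  body {
    color: #000;
    background: #fff; /* reset to black on white */
    font: 12pt/1.4 serif;
  }

  /* hide sections that make no sense on paper */
  nav, .hero, form, .social-widgets {
    display: none;
  }

  main {
    width: 100%; /* linearize the layout */
  }
}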

Delve into Browser-based Developer Tools (from CSS Master) and Browser DevTool Secrets to discover how to examine and modify styles after switching to print rendering.

Applying CSS Conditionally describes how to define @media query rules including print stylesheets.

Consider your Strategy Guide to CSS Custom Properties (from New Frontiers In Web Design) to determine whether CSS variables could help with printing properties. Also consider Accessibility (from CSS Animation 101) to switch off animations or print them in the best state.

Finally, How to Create Printer-friendly Pages with CSS (from CSS Tools & Skills) provides a full print-optimization tutorial with tips to save ink and paper costs.

2. Apply Theming to an Existing Site

A single color scheme is boring! Everyone expects a dark mode option in their OS and applications, so why not add one to your website?

Until recently, theme switchers typically required an additional set of styles with JavaScript-powered switching controls. However, modern browsers make life easier with CSS Custom Properties (variables) and the prefers-color-scheme @media rule.
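As a quick sketch of the technique (the variable names are made up), you can define your palette as custom properties and swap the values when the operating system requests a dark theme:

:root {
  --text-color: #222;
  --bg-color: #fff;
}

/* applied automatically when the OS reports a dark mode preference */
@media (prefers-color-scheme: dark) {
  :root {
    --text-color: #eee;
    --bg-color: #111;
  }
}

body {
  color: var(--text-color);
  background-color: var(--bg-color);
}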

Strategies for Theming (from New Frontiers In Web Design) provides a range of ideas and considerations when designing your new theme.

Applying CSS Conditionally (from CSS Master) describes how to define @media query rules including prefers-color-scheme.

Finally, Modern CSS: Adding a CSS Dark Theme (from Modern CSS) provides a full dark-theme-enabling tutorial.

The post 5 Projects to Help You Master Modern CSS appeared first on SitePoint.

75 Zsh Commands, Plugins, Aliases and Tools

Mar 3, 2020

Description:

75 Zsh Commands, Plugins, Aliases and Tools

I spend a lot of my day in the terminal, and my shell of choice is Zsh — a highly customizable Unix shell that packs some very powerful features. As I’m a lazy developer™, I’m always looking for ways to type less and to automate all the things. Luckily this is something that Zsh lends itself well to.

In this post, I’m going to share with you 75 commands, plugins, aliases and tools that will hopefully save you some keystrokes and make you more productive in your day-to-day work.

If you don't have Zsh installed on your machine, then check out this post, where I show you how to get up and running.

15 Things Zsh Can Do out of the Box

Zsh shares a lot of handy features with Bash. None of the following are unique to Zsh, but they're good to know nonetheless. I encourage you to start using the command line to perform operations such as those listed below. It might seem like more work than using a GUI at first, but once you get the hang of things, you'll never look back.

- Entering cd from anywhere on the file system will bring you straight back to your home directory.
- Entering !! will bring up the last command. This is handy if a command fails because it needs admin rights. In this case you can type sudo !!.
- You can use && to chain multiple commands. For example, mkdir project && cd project && npm init -y.
- Conditional execution is possible using ||. For example, git commit -m "whatever..." || echo "Commit failed".
- Using a -p switch with the mkdir command will allow you to create parent directories as needed.
- Using brace expansion reduces repetition. For example, mkdir -p articles/jim/sitepoint/article{1,2,3}.
- Set environment variables on a per-command basis like so: NODE_DEBUG=myapp node index.js. Or, on a per-session basis like so: export NODE_DEBUG=myapp. You can check it was set by typing echo $<variable-name>.
- Pipe the output of one command into a second command. For example, cat /var/log/kern.log | less to make a long log readable, or history | grep ssh to search for any history entries containing "ssh".
- You can open files in your editor from the terminal. For example, nano ~/.zshrc (nano), subl ~/.zshrc (Sublime Text), code ~/.zshrc (VS Code). If the file doesn't exist, it will be created when you press Save in the editor.
- Navigation is an important skill to master. Don't just rely on your arrow keys. For example, Ctrl + a will take you to the beginning of a line, whereas Ctrl + e will take you to the end.
- You can use Ctrl + w to delete one word (backwards). Ctrl + u will remove everything from the cursor to the beginning of the line. Ctrl + k will clear everything from the cursor to the end of the line. These last three can be undone with Ctrl + y.
- You can copy text with Ctrl + Shift + c. This is much more elegant than right clicking and selecting Copy. Conversely, you can paste copied text with Ctrl + Shift + v.

Try to commit those key combos to memory. You'll be surprised at how often they come in handy.

15 Custom Aliases to Boost Your Productivity

Aliases are terminal shortcuts for regular commands. You can add them to your ~/.zshrc file, then reload your terminal (using source ~/.zshrc) for them to take effect.

The syntax for declaring a (simple) alias is as follows:

alias [alias-name]='[command]'

Aliases are great for often-used commands, long commands, or commands with a hard-to-remember syntax. Here are some of the ones I use on a regular basis:

- A myip alias, which prints your current public IP address to the terminal: alias myip='curl http://ipecho.net/plain; echo'.
- A distro alias to output information about your Linux distribution: alias distro='cat /etc/*-release'.
- A reload alias, as I can never seem to remember how to reload my terminal: alias reload='source ~/.zshrc'.
- An undo-git-reset alias: alias undo-git-reset-head="git reset 'HEAD@{1}'". This reverts the effects of running git reset HEAD~.
- An alias to update package lists: alias sapu='sudo apt-get update'.
- An alias to rerun the previous command with sudo: alias ffs='sudo !!'.
- Because I’m lazy, I have aliased y to the yarn command: alias y='yarn'. This means I can clone a repo, then just type y to pull in all the dependencies. I learned this one from Scott Tolinski on Syntax.
- Not one of the ones I use, but this alias blows away the node_modules folder and removes the package-lock.json file, before reinstalling a project's dependencies: alias yolo='rm -rf node_modules/ && rm package-lock.json && yarn install'. As you probably know, yolo stands for You Only Live Once.
- An alias to open my .zshrc file for editing: alias zshconfig='subl $HOME/.zshrc'.
- An alias to update the list of Ruby versions rbenv can install: alias update-available-rubies='cd ~/.rbenv/plugins/ruby-build && git pull'.
- An alias to kick off a server in your current directory (no npm packages required): alias server='python -m SimpleHTTPServer 8000'.
- You can also create an alias to open documentation in your browser: alias npmhelp='firefox https://github.com/robbyrussell/oh-my-zsh/tree/master/plugins/npm'.
- A global alias to pipe a command's output to less: alias -g L='| less'. You can use it like so: cat production.log L.
- A global alias to pipe a command’s output to grep: alias -g G='| grep'. You can use it like so: history G ssh.
- You can also use functions to create aliases. The following (taken from here) creates an alias that adds, commits, and pushes code to GitHub:
function acp() {
git add .
git commit -m "$1"
git push
}

There are lots of places to find more ideas for aliases online. For example, this Hacker News discussion, or this post on command line productivity with Zsh.

The post 75 Zsh Commands, Plugins, Aliases and Tools appeared first on SitePoint.

A Basic HTML5 Template For Any Project

Mar 3, 2020

Description:

Louis lays the foundations for all the HTML5 goodness to come. This article is an excerpt from HTML5 & CSS3 for the Real World, by Alexis Goldstein, Louis Lazaris & Estelle Weyl.

The post A Basic HTML5 Template For Any Project appeared first on SitePoint.

How to Build a File Upload Form with Express and DropzoneJS

Mar 2, 2020

Description:

How to Build a File Upload Form with Express and DropzoneJS

Let’s face it, nobody likes forms. Developers don’t like building them, designers don’t particularly enjoy styling them, and users certainly don’t like filling them in.

Of all the components that can make up a form, the file control could just be the most frustrating of the lot. It’s a real pain to style, it’s clunky and awkward to use, and uploading a file will slow down the submission process of any form.

That’s why a plugin to enhance them is always worth a look, and DropzoneJS is just one such option. It will make your file upload controls look better, make them more user-friendly, and by using AJAX to upload the file in the background, it will at the very least make the process seem quicker. It also makes it easier to validate files before they even reach your server, providing near-instantaneous feedback to the user.

We’re going to take a look at DropzoneJS in some detail. We’ll show how to implement it, and look at some of the ways in which it can be tweaked and customized. We’ll also implement a simple server-side upload mechanism using Node.js.

As ever, you can find the code for this tutorial on our GitHub repository.

Introducing DropzoneJS

DropzoneJS allows users to upload files using drag and drop. Whilst the usability benefits could justifiably be debated, it’s an increasingly common approach and one which is in tune with the way a lot of people work with files on their desktop. It’s also pretty well supported across major browsers.

DropzoneJS isn’t simply a drag and drop based widget, however. Clicking the widget launches the more conventional file chooser dialog approach.

Here’s an animation of the widget in action:

The DropzoneJS widget in action

Alternatively, take a look at this most minimal of examples.

You can use DropzoneJS for any type of file, though the nice little thumbnail effect makes it ideally suited to uploading images in particular.

Features

To summarize some of the plugin’s features and characteristics, DropzoneJS:

- can be used with or without jQuery
- has drag and drop support
- generates thumbnail images
- supports multiple uploads, optionally in parallel
- includes a progress bar
- is fully themeable
- includes extensible file validation support
- is available as an AMD module or RequireJS module
- comes in at around 43KB when minified and 13KB when gzipped

Browser Support

Taken from the official documentation, browser support is as follows:

- Chrome 7+
- Firefox 4+
- IE 10+
- Opera 12+ (Version 12 for macOS is disabled because their API is buggy)
- Safari 6+

There are a couple of ways to handle fallbacks for when the plugin isn’t fully supported, which we’ll look at later.

Getting Set Up

The simplest way to get started with DropzoneJS is to include the latest version from a CDN. At the time of writing, this is version 5.5.1.

Alternatively, you can download the latest release from the project’s GitLab page. There’s also a third-party package providing support for ReactJS.

Then, make sure you include both the main JavaScript file and the CSS styles in your page. For example:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>File Upload Example</title>
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/dropzone/5.5.1/min/dropzone.min.css">
</head>
<body>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/dropzone/5.5.1/min/dropzone.min.js"></script>
</body>
</html>

Note that the project supplies two CSS files — a basic.css file with some minimal styling, and a more extensive dropzone.css file. Minified versions of dropzone.css and dropzone.js are also available.

Basic Usage

The simplest way to implement the plugin is to attach it to a form, although you can use any HTML such as a <div>. Using a form, however, means fewer options to set — most notably the URL, which is the most important configuration property.

You can initialize it simply by adding the dropzone class. For example:

<form id="upload-widget" method="post" action="/upload" class="dropzone"></form>

Technically, that’s all you need to do, though in most cases you’ll want to set some additional options. The format for that is as follows:

Dropzone.options.WIDGET_ID = { // };

To derive the widget ID for setting the options, take the ID you defined in your HTML and camel-case it. For example, upload-widget becomes uploadWidget:

Dropzone.options.uploadWidget = { // };

You can also create an instance programmatically:

const uploader = new Dropzone('#upload-widget', options);

Next up, we’ll look at some of the available configuration options.

Basic Configuration Options

The url option defines the target for the upload form, and is the only required parameter. That said, if you’re attaching it to a form element then it’ll simply use the form’s action attribute, in which case you don’t even need to specify that.

The method option sets the HTTP method and again, it will take this from the form element if you use that approach, or else it’ll simply default to POST, which should suit most scenarios.

The paramName option is used to set the name of the parameter for the uploaded file. If you’re using a file upload form element, it will match the name attribute. If you don’t include it, it defaults to file.

maxFiles sets the maximum number of files a user can upload, if it’s not set to null.

By default, the widget will show a file dialog when it’s clicked, though you can use the clickable parameter to disable this by setting it to false, or alternatively you can provide an HTML element or CSS selector to customize the clickable element.
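Pulling those together, a configuration sketch might look like this (the values are illustrative, not defaults you must set):

Dropzone.options.uploadWidget = {
  url: '/upload',    // falls back to the form's action attribute if omitted
  method: 'post',    // the default
  paramName: 'file', // the default
  maxFiles: 5,       // null means no limit
  clickable: true    // set to false to disable the click-to-browse dialog
};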

Those are the basic options, but let’s now look at some of the more advanced options.

Enforcing Maximum File Size

The maxFilesize property determines the maximum file size in megabytes. By default, a "megabyte" is calculated using a base of 1000 bytes, but with the filesizeBase property you can switch to another value, such as 1024 bytes. You may need to tweak this to ensure that your client and server code calculate any limits in precisely the same way.
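For example, a sketch capping uploads at 2MB, measured in multiples of 1024 bytes:

Dropzone.options.uploadWidget = {
  maxFilesize: 2,    // maximum size, in "megabytes"
  filesizeBase: 1024 // so one megabyte = 1024 * 1024 bytes
};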

Restricting to Certain File Types

The acceptedFiles parameter can be used to restrict the type of file you want to accept. This should be in the form of a comma-separated list of MIME types, although you can also use wildcards.

For example, to only accept images:

acceptedFiles: 'image/*',

Modifying the Size of the Thumbnail

By default, the thumbnail is generated at 120x120px. That is, it’s square. There are a couple of ways you can modify this behavior.

The first is to use the thumbnailWidth and/or the thumbnailHeight configuration options.

If you set both thumbnailWidth and thumbnailHeight to null, the thumbnail won’t be resized at all.

If you want to completely customize the thumbnail generation behavior, you can even override the resize function.

One important point about modifying the size of the thumbnail is that the dz-image class provided by the package sets the thumbnail size in the CSS, so you’ll need to modify that accordingly as well.
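As an illustrative sketch, enlarging the thumbnails means changing both the configuration and the package's CSS rule (the dimensions here are made up):

Dropzone.options.uploadWidget = {
  thumbnailWidth: 240,
  thumbnailHeight: 240
};

And the matching stylesheet override:

/* the package's dz-image rule also fixes the size, so override it too */
.dropzone .dz-image {
  width: 240px;
  height: 240px;
}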

Additional File Checks

The accept option allows you to provide additional checks to determine whether a file is valid before it gets uploaded. You shouldn’t use this to check the number of files (maxFiles), file type (acceptedFiles), or file size (maxFilesize), but you can write custom code to perform other sorts of validation.

You’d use the accept option like this:

accept: function(file, done) {
  if (!someCheck()) {
    return done('This is invalid!');
  }
  return done();
}

As you can see, it’s asynchronous. You can call done() with no arguments and validation passes, or provide an error message and the file will be rejected, displaying the message alongside the file as a popover.

We’ll look at a more complex, real-world example later, when we look at how to enforce minimum or maximum image sizes.

Sending Additional Headers

Often you’ll need to attach additional headers to the uploader’s HTTP request.

As an example, one approach to CSRF (cross-site request forgery) protection is to output a token in the view, then have your POST/PUT/DELETE endpoints check the request headers for a valid token. Suppose you outputted your token like this:

<meta name="csrf-token" content="CL2tR2J4UHZXcR9BjRtSYOKzSmL8U1zTc7T8d6Jz">

Then, you could add this to the configuration:

headers: {
  'x-csrf-token': document.querySelector('meta[name=csrf-token]').getAttributeNode('content').value,
},

Alternatively, here’s the same example but using jQuery:

headers: {
  'x-csrf-token': $('meta[name="csrf-token"]').attr('content')
},

Your server should then verify the x-csrf-token header, perhaps using some middleware.

Handling Fallbacks

The simplest way to implement a fallback is to insert a <div> into your form containing input controls, setting the class name on the element to fallback. For example:

<form id="upload-widget" method="post" action="/upload" class="dropzone"> <div class="fallback"> <input name="file" type="file" /> </div> </form>

Alternatively, you can provide a function to be executed when the browser doesn’t support the plugin using the fallback configuration parameter.

You can force the widget to use the fallback behavior by setting forceFallback to true, which might help during development.

Handling Errors

You can customize the way the widget handles errors by providing a custom function using the error configuration parameter. The first argument is the file, the second is the error message, and, if the error occurred server-side, the third will be an instance of XMLHttpRequest.
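Here's a minimal sketch of such a handler (the logging is purely illustrative):

Dropzone.options.uploadWidget = {
  error: function(file, message, xhr) {
    if (xhr) {
      // the error came back from the server
      console.error('Server responded with status', xhr.status);
    }
    // do something with the message, e.g. log it or show your own UI
    console.error('Error uploading', file.name, message);
  }
};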

As always, client-side validation is only half the battle. You must also perform validation on the server. When we implement a simple server-side component later, we’ll look at the expected format of the error response, which when properly configured will be displayed in the same way as client-side errors (illustrated below).

Displaying errors with DropzoneJS

Overriding Messages and Translation

There are a number of additional configuration properties which set the various messages displayed by the widget. You can use these to customize the displayed text, or to translate them into another language.

Most notably, dictDefaultMessage is used to set the text which appears in the middle of the dropzone, prior to someone selecting a file to upload.

You’ll find a complete list of the configurable string values — all of which begin with dict — in the documentation.

Events

There are a number of events you can listen to in order to customize or enhance the plugin.

There are two ways to listen to an event. The first is to create a listener within an initialization function:

Dropzone.options.uploadWidget = {
  init: function() {
    this.on('success', function(file, resp){
      ...
    });
  },
  ...
};

The alternative approach is to attach the listener directly to the instance, which is useful if you decide to create the Dropzone instance programmatically:

const uploader = new Dropzone('#upload-widget');
uploader.on('success', function(file, resp){
  ...
});

Perhaps the most notable aspect is the success event, which is fired when a file has been successfully uploaded. The success callback takes two arguments: the first a file object, and the second an instance of XMLHttpRequest.

Other useful events include addedfile and removedfile, for when a file has been added or removed from the upload list; thumbnail, which fires once the thumbnail has been generated; and uploadprogress, which you might use to implement your own progress meter.
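As a sketch, here's how you might listen for uploadprogress on the uploader instance from above (assuming the plugin's documented file, progress, bytesSent callback parameters):

uploader.on('uploadprogress', function(file, progress, bytesSent) {
  // progress is a percentage between 0 and 100
  console.log('Uploaded ' + Math.round(progress) + '% of ' + file.name);
});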

There are also a bunch of events which take an event object as a parameter and which you could use to customize the behavior of the widget itself — drop, dragstart, dragend, dragenter, dragover and dragleave.

You’ll find a complete list of events in the relevant section of the documentation.

The post How to Build a File Upload Form with Express and DropzoneJS appeared first on SitePoint.

How to Design for Screen Readers with Adobe XD CC

Feb 27, 2020

Description:

Designing for Screen Readers with the Help of Adobe XD CC

When it comes to accessibility, designers tend to focus on colors (i.e. contrast) and UX copy (i.e. wording), whereas developers tend to focus on ARIA attributes (i.e. code that makes websites more accessible). This is due to the fact that, often enough, thick lines are drawn between “who does what”.

Also, because creating accessible apps and websites isn’t considered to be exciting, this line is hardly ever questioned.

Accessibility is still a black sheep, even in 2020.

So, since UX copy is the responsibility of the designer and ARIA attributes are the responsibility of the developer, exactly whose responsibility is it to cater for screen readers? Since:

- Screen reader UX copy is expressed as Braille or dictation (so how do we communicate this when our UI tools are visual?)
- Implementation is developer territory (so can we really shift the responsibility of writing UX copy to developers?)

As you can see, it’s a two-person job — and yet, the tools simply don’t exist to facilitate this. I mean, make no mistake, some aspects of accessibility design are one-sided (for example, UI designers can very easily take care of color contrast by themselves). However, other aspects, such as designing for screen readers, require collaboration between designers and developers.

This is where Adobe XD CC’s design handoff and voice prototyping features come in handy. In this article, we’ll discuss what to consider when designing for screen readers, and we’ll also walk through how to use the features mentioned above.

What Are Screen Readers?

A screen reader is a type of assistive technology that communicates what’s happening on the screen (for those with visual impairments). Screen reader software can be used in combination with the keyboard (for example, users will tab and enter as opposed to using the mouse), but it can also be used in combination with screen reader hardware, which allows for more efficient navigation and also caters for users that use Braille.

If you’re an Apple user, for example, you’ll be somewhat aware of Apple VoiceOver, which is the native Apple dictation software that acts as a screen reader. Windows users, however, commonly use either JAWS or NVDA, since there aren’t any native screen reader tools in the Windows operating system.

Let’s dive in.

1. Use Headings

Screen readers often use headings as a way of deciphering a website’s structure, and if we think too visually we run the risk of leaving out these headings. In the example below, the omission of the “Chapters” heading causes screen readers to assume that the list of chapters is a continuation of the content on the left-hand side, which it obviously isn’t.

"Chapters" needs to be a heading

As a result, screen-reader users won’t be able to skip to “Chapters”, and they might not discover the information within.

While there are code workarounds available (such as the aria-label attribute), a visible heading offers a clearer, more inclusive experience for everybody, whether disabled or not.

Of course, the section is very obviously a list of chapters, as we can infer from the context (i.e. the content). However, those using screen readers very rarely have the luxury of context. It’s like trying to find an object in storage where none of the boxes are labeled. Our designs need these labels and headings.

On the technical side, the rule is that every section (as defined by a <section> or <article> tag) should have not only a heading, but an explicit heading that conflicts with no other heading. As an example, if the highest level heading within a section is an <h2>, then there should be no other <h2> heading within that section. Otherwise, screen readers are clueless as to which heading is the label for the section.
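As a rough illustration (the markup content is hypothetical), each section carries exactly one top-level heading that unambiguously labels it:

<section>
  <!-- the heading screen readers will treat as this section's label -->
  <h2>Chapters</h2>
  <ul>
    <li><a href="#chapter-1">Chapter 1</a></li>
    <li><a href="#chapter-2">Chapter 2</a></li>
  </ul>
  <!-- no other h2 appears inside this section -->
</section>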

The same heading for like sections

The post How to Design for Screen Readers with Adobe XD CC appeared first on SitePoint.

10 Ways to Hide Elements in CSS

Feb 26, 2020

Description:

Ten Ways to Hide Elements in CSS

There are multiple ways to hide an element in CSS, but they differ in the way they affect accessibility, layout, animation, performance, and event handling.

Animation

Some CSS hiding options are all or nothing. The element is either fully visible or fully invisible and there’s no in-between state. Others, such as transparency, can have a range of values, so interpolated CSS animations become possible.

Accessibility

Each method described below will visually hide an element, but it may or may not hide the content from assistive technologies. For example, a screen reader could still announce tiny transparent text. Further CSS properties or ARIA attributes such as aria-hidden="true" may be necessary to describe the appropriate action.

Be wary that animations can also cause disorientation, migraines, seizures, or other physical discomfort for some people. Consider using a prefers-reduced-motion media query to switch off animations when specified in user preferences.
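A minimal example of that query (the universal selector is blunt, but illustrates the idea):

@media (prefers-reduced-motion: reduce) {
  /* switch off animations and transitions for users who ask for less motion */
  * {
    animation: none !important;
    transition: none !important;
  }
}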

Event Handling

Hiding will either stop events being triggered on that element or have no effect — that is, the element is not visible but can still be clicked or receive other user interactions.

Performance

After a browser loads and parses the HTML DOM and CSS object model, the page is rendered in three stages:

1. Layout: generate the geometry and position of each element
2. Paint: draw out the pixels for each element
3. Composition: position element layers in the appropriate order

An effect which only causes composition changes is noticeably smoother than those affecting layout. In some cases, the browser can also use hardware acceleration.

1. opacity and filter: opacity()

The opacity: N and filter: opacity(N) properties can be passed a number between 0 and 1, or a percentage between 0% and 100%, denoting fully transparent and fully opaque respectively.

See the Pen "hide with opacity: 0" by SitePoint (@SitePoint) on CodePen.

There’s little practical difference between the two in modern browsers, although filter should be used if multiple effects are applied at the same time (blur, contrast, grayscale etc.)

Opacity can be animated and offers great performance, but be wary that a fully transparent element remains on the page and can trigger events.

metric | effect
browser support | good, but IE only supports opacity 0 to 1
accessibility | content not read if 0 or 0% is set
layout affected? | no
rendering required | composition
performance | best, can use hardware acceleration
animation frames possible? | yes
events triggered when hidden? | yes

2. color Alpha Transparency

opacity affects the whole element, but it's also possible to set the color, background-color, and border-color properties separately. Applying a zero alpha channel using rgba(0,0,0,0) or similar renders an item fully transparent:

See the Pen "hide with color alpha" by SitePoint (@SitePoint) on CodePen.

Each property can be animated separately to create interesting effects. Note that transparency can’t be applied to elements with image backgrounds unless they're generated using linear-gradient or similar.

The alpha channel can be set with:

- transparent: fully transparent (in-between animations are not possible)
- rgba(r, g, b, a): red, green, blue, and alpha
- hsla(h, s, l, a): hue, saturation, lightness, and alpha
- #RRGGBBAA and #RGBA

metric | effect
browser support | good, but IE only supports transparent and rgba
accessibility | content still read
layout affected? | no
rendering required | painting
performance | good, but not as fast as opacity
animation frames possible? | yes
events triggered when hidden? | yes

3. transform

The transform property can be used to translate (move), scale, rotate, or skew an element. A scale(0) or translate(-999px, 0px) off-screen will hide the element:

See the Pen "hide with transform: scale(0)" by SitePoint (@SitePoint) on CodePen.

transform offers excellent performance and hardware acceleration because the element is effectively moved into a separate layer and can be animated in 2D or 3D. The original layout space remains as is, but no events will be triggered by a fully hidden element.

metric | effect
browser support | good
accessibility | content still read
layout affected? | no — the original dimensions remain
rendering required | composition
performance | best, can use hardware acceleration
animation frames possible? | yes
events triggered when hidden? | no

The post 10 Ways to Hide Elements in CSS appeared first on SitePoint.

How to Prepare for a Remote Job Search

Feb 25, 2020

Description:

The number of people working remotely is at an all-time high, and that’s not just because telecommuting is pants-optional. By giving employees more control over their schedule and work environment, remote jobs can enhance the work-life balance that so many people struggle to maintain.

But if you’ve held in-house positions for most of your career, properly preparing for your remote job search can up your chances of impressing remote employers, nailing the interview, and landing a remote job that best fits your needs.

What Are Remote Employers Looking For?

Remote employers are looking for three things in particular.

Independence

The office may at times feel like a panopticonic prison, but there is something to be said for workplace accountability. Can you stay focused without a boss periodically checking in on you? Can you stay productive without the sight and sound of other co-workers clacking away on their computers? When you work from home, the Damocles of the deadline is blunted and the motivating effect of being in close proximity to your team members weakens.

Remote employers understand these challenges, which is why they look for candidates who can motivate themselves without external prompting. As trite as buzzwords like self-starter and proactive can be, they carry a significant amount of weight in the remote job search. Not only do you need to possess these qualities, you’ll need to be able to demonstrate them to potential employers.

Communication

Working in an office allows employees to be more passive. Don’t know what’s going on? A co-worker can fill you in via a few seconds of conversation. Your boss is only a few steps away. Maybe there’s a whiteboard in the break room with announcements. Sharing a space with people just makes it much easier to stay in the loop.

But if you’re on your own, you need to take initiative. To compensate for the lack of face-to-face, a good remote worker will put effort into the virtual communication tools at their disposal. They’ll reach out to people through email or Slack. They’ll suggest video chats or calls to hash things out. Even swapping memes in a group chat can help you stay engaged. But if you give in to the temptation of solitude, communication could suffer, and so could your work.

Rational Thinking

When communicating primarily through text, it’s all too common for our imaginations to run wild with unfounded anxieties. Emailed your boss a question and they didn’t respond within whatever time frame you’ve arbitrarily decided was reasonable? They must think it’s a dumb question and you’re dumb for asking it. They must not deem you important enough to expediently respond to. They must be offended by something you wrote. Asked a co-worker to do something and they responded with “k”? They hate you. They’re telling everyone how much they hate you. Everyone hates you. You’re garbage!

Or … absolutely none of that is true and the coldness of non-verbal communication is messing with your head. Like any good employer, remote employers don’t want drama. They want rational critical thinkers who can vault the pitfalls of remote communication and maintain healthy work relationships. K?

How Do You Demonstrate These Skills On Your Resume?

Even if you have little to no remote work experience, there are ways to frame your in-house work experience so that it demonstrates remote work skills. What have you done that demonstrates independence? Communication? Rational thinking? Figure it out and integrate it into your resume.

For example, if you took the initiative on anything in a previous position, emphasize it. Say you independently devised and implemented project x or volunteered to plan, create, and maintain project y. Explain that you created and ran program z with little oversight.

Here are some other ideas to get you thinking:

The post How to Prepare for a Remote Job Search appeared first on SitePoint.

How to Properly Organize Files in Your Codebase & Avoid Mayhem

Feb 20, 2020

Description:

How to Properly Organize Files on a Project and Avoid Mayhem

The main library, data, UI, docs and wiki, tests, legacy and third-party components … How do we keep track and maintain order within all of this? Organizing the files in your codebase can become a daunting task.

Relax — we've got this! In this article, we’ll review the most common systems for both small and large projects, with some easy-to-follow best practices.

Why Bother?

As with pretty much all of the tasks related to project management — documentation, software commits, deployment — you’ll benefit from taking a conscious, programmatic approach. Not only will it reduce problems now, but it will also save you and your team quality time in the future when you need to quickly access and review things.

You can surely recall function names off the top of your head for whatever it is you're coding right now, quickly find a file you need to edit, and sharply tell what works from what doesn't — or so you think. But could you say the same about that project you were working on last year?

Let's admit it: software projects can go through spans of inactivity that last for months, or even years. A simple README file could do a lot for your colleagues or your future self. But let's think about the other ways you could structure your project, and establish some basic rules for naming files, addressing project documentation, and, to some degree, organizing an effective workflow that will stand the test of time.
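Even a skeletal README helps. Here's a sketch (the headings are illustrative, not prescriptive):

# Project Name

One paragraph on what the project does and who it's for.

## Getting started

How to install dependencies and run the project locally.

## Project structure

A short map of the main directories and what lives in each.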

Making Sense of Things

We’ll establish a "baseline" for organizing files in a project — a logic that will serve us for a number of situations within the scope of software development.

As with our rules for committing changes to your codebase the right way, none of this is carved in stone, and for what it's worth, you and your team might come up with different guidelines. In any case, consistency is the name of the game. Be sure you understand (and discuss or dispute) what the rules are, and follow them once you've reached a consensus.

The Mandatory Set

This is a reference list of files that nearly every software project should have:

- README: this is what GitHub renders for you right under the sourcetree, and it can go a long way to explaining what the project is about, how files are organized, and where to find further information.
- CHANGELOG: to list what's new, modified or discontinued on every version or revision — normally in reverse chronological order for convenience (last changes first).
- COPYING or LICENSE: a file containing the full text of the license covering the software, including some additional copyright information, if necessary (such as third-party licenses).
- .gitignore: assuming you use Git (you most probably do), this will also be a must to tell what files not to sync with the repository. (See Jump Start Git's primer on .gitignore and the documentation for more info, and have a look at a collection of useful .gitignore templates for some ideas.)

Supporting Actors

The post How to Properly Organize Files in Your Codebase & Avoid Mayhem appeared first on SitePoint.

Productive Remote Work (When Your Mental Health Says “No”)

Feb 19, 2020

Description:

Productive Remote Working

Remote work is not easy. It sounds like a dream (and it honestly is in a lot of ways), but there’s a darker side to remote work that one can’t understand until they’ve done it.

Here’s the deal. People that work remotely often suffer from suboptimal mental health, and so you’re probably wondering, why on earth do they do it? Well, the fact is, while remote working comes with some very unique challenges, so does not working remotely. The difference is that remote work can offer the flexibility you need to build a lifestyle that suits you.

people sitting at table with laptop

Indeed, remote work isn’t a silver bullet for burnout or wanderlust, but if you do happen to try it out and eventually wind up succumbing to loneliness, or a lack of motivation or productivity (as many remote workers do), at least you’ll have the opportunity to change things up and make things better.

In the eyes of many, it’s the lesser of two evils.

That being said, attempting to diagnose what your mind and body needs isn’t that easy. What might work one day might not work on another day, and what might work for one individual might not work for another individual. Humans are complex, and in the case of remote work, everyday productivity tricks often don’t cut it.

Let’s take a look.

“I feel lonely”

Loneliness is a big issue (maybe the biggest?) for freelance remote workers and digital nomads in foreign countries, but it can also affect those that work in distributed teams (especially when some team members aren’t remote, as one can feel like an outsider at work using this setup). Let’s look at the solutions.

Utilize co-working spaces

Co-working spaces aren’t for everyone. If you teach English, it’s obviously a no-no (not because of the noise around you, but because the noise you’d make would be distracting to other remote workers). If you’re only required to dive into the odd video call, though, many co-working spaces include a few hours of “booth time”.

Throw in super-fast Wi-Fi, free coffee, daily events, and a like-minded crowd, and joining a co-working space is like joining a community. Some co-working spaces (such as Hubud and Dojo Bali) are literally famous! Good vibes = a huge motivation boost.

happy co-workers sitting with laptops on comfy chairs

Work from bars and cafés

Cafés and bars work well too. The noise and seating options might be a tad unpredictable, and when going to a new place one has to find the Wi-Fi password, but all in all the experience is very much the same. It’s still fairly easy to meet other people, as it’s likely that you won’t be the only regular customer.

Pro tip: download the Wi-Fi Map app to get the Wi-Fi passwords of networks near you!

My favourite café — October Coffee Gaya, Kota Kinabalu, Malaysia

The post Productive Remote Work (When Your Mental Health Says “No”) appeared first on SitePoint.

Forms, File Uploads and Security with Node.js and Express

Feb 19, 2020

Description:

Forms, File Uploads and Security with Node.js and Express

If you’re building a web application, you’re likely to encounter the need to build HTML forms on day one. They’re a big part of the web experience, and they can be complicated.

Typically the form-handling process involves:

- displaying an empty HTML form in response to an initial GET request
- user submitting the form with data in a POST request
- validation on both the client and the server
- re-displaying the form populated with escaped data and error messages if invalid
- doing something with the sanitized data on the server if it’s all valid
- redirecting the user or showing a success message after data is processed.

Handling form data also comes with extra security considerations.

We’ll go through all of these and explain how to build them with Node.js and Express — the most popular web framework for Node. First, we’ll build a simple contact form where people can send a message and email address securely, and then take a look at what’s involved in processing file uploads.

A contact form with email and message with validation errors

As ever, the complete code can be found in our GitHub repo.

Setup

Make sure you’ve got a recent version of Node.js installed. node -v should return 8.9.0 or higher.

Download the starter code from here with Git:

git clone -b starter https://github.com/sitepoint-editors/node-forms.git node-forms-starter
cd node-forms-starter
npm install
npm start

Note: The repo has two branches, starter and master. The starter branch contains the minimum setup you need to follow this article. The master branch contains a full, working demo (link above).

There’s not too much code in there. It’s just a bare-bones Express setup with EJS templates and error handlers:

// server.js
const path = require('path');
const express = require('express');
const layout = require('express-layout');
const routes = require('./routes');
const app = express();

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');

const middlewares = [
  layout(),
  express.static(path.join(__dirname, 'public')),
];
app.use(middlewares);

app.use('/', routes);

app.use((req, res, next) => {
  res.status(404).send("Sorry can't find that!");
});

app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).send('Something broke!');
});

app.listen(3000, () => {
  console.log('App running at http://localhost:3000');
});

The root url / simply renders the index.ejs view:

// routes.js
const express = require('express');
const router = express.Router();

router.get('/', (req, res) => {
  res.render('index');
});

module.exports = router;

Displaying the Form

When people make a GET request to /contact, we want to render a new view contact.ejs:

// routes.js
router.get('/contact', (req, res) => {
  res.render('contact');
});

The contact form will let them send us a message and their email address:

<!-- views/contact.ejs -->
<div class="form-header">
  <h2>Send us a message</h2>
</div>

<form method="post" action="/contact" novalidate>
  <div class="form-field">
    <label for="message">Message</label>
    <textarea class="input" id="message" name="message" rows="4" autofocus></textarea>
  </div>
  <div class="form-field">
    <label for="email">Email</label>
    <input class="input" id="email" name="email" type="email" value="" />
  </div>
  <div class="form-actions">
    <button class="btn" type="submit">Send</button>
  </div>
</form>

See what it looks like at http://localhost:3000/contact.

Form Submission

To receive POST values in Express, you first need to include the body-parser middleware, which exposes submitted form values on req.body in your route handlers. Add it to the end of the middlewares array:

// server.js
const bodyParser = require('body-parser');

const middlewares = [
  // ...
  bodyParser.urlencoded({ extended: true }),
];

It’s a common convention for forms to POST data back to the same URL as was used in the initial GET request. Let’s do that here and handle POST /contact to process the user input.

Let’s look at the invalid submission first. If invalid, we need to pass back the submitted values to the view (so users don’t need to re-enter them) along with any error messages we want to display:

router.get('/contact', (req, res) => {
  res.render('contact', {
    data: {},
    errors: {}
  });
});

router.post('/contact', (req, res) => {
  res.render('contact', {
    data: req.body, // { message, email }
    errors: {
      message: {
        msg: 'A message is required'
      },
      email: {
        msg: 'That email doesn‘t look right'
      }
    }
  });
});

If there are any validation errors, we’ll do the following:

- display the errors at the top of the form
- set the input values to what was submitted to the server
- display inline errors below the inputs
- add a form-field-invalid class to the fields with errors.

<!-- views/contact.ejs -->
<div class="form-header">
  <% if (Object.keys(errors).length === 0) { %>
    <h2>Send us a message</h2>
  <% } else { %>
    <h2 class="errors-heading">Oops, please correct the following:</h2>
    <ul class="errors-list">
      <% Object.values(errors).forEach(error => { %>
        <li><%= error.msg %></li>
      <% }) %>
    </ul>
  <% } %>
</div>

<form method="post" action="/contact" novalidate>
  <div class="form-field <%= errors.message ? 'form-field-invalid' : '' %>">
    <label for="message">Message</label>
    <textarea class="input" id="message" name="message" rows="4" autofocus><%= data.message %></textarea>
    <% if (errors.message) { %>
      <div class="error"><%= errors.message.msg %></div>
    <% } %>
  </div>
  <div class="form-field <%= errors.email ? 'form-field-invalid' : '' %>">
    <label for="email">Email</label>
    <input class="input" id="email" name="email" type="email" value="<%= data.email %>" />
    <% if (errors.email) { %>
      <div class="error"><%= errors.email.msg %></div>
    <% } %>
  </div>
  <div class="form-actions">
    <button class="btn" type="submit">Send</button>
  </div>
</form>

Submit the form at http://localhost:3000/contact to see this in action. That’s everything we need on the view side.

Validation and Sanitization

There’s a handy middleware called express-validator for validating and sanitizing data using the validator.js library. Let’s add it to our app.

Validation

With the validators provided, we can easily check that a message and a valid email address was provided:

// routes.js
const { check, validationResult, matchedData } = require('express-validator');

router.post('/contact', [
  check('message')
    .isLength({ min: 1 })
    .withMessage('Message is required'),
  check('email')
    .isEmail()
    .withMessage('That email doesn‘t look right')
], (req, res) => {
  const errors = validationResult(req);
  res.render('contact', {
    data: req.body,
    errors: errors.mapped()
  });
});

Sanitization

With the sanitizers provided, we can trim whitespace from the start and end of the values, and normalize the email address into a consistent pattern. This can help remove duplicate contacts being created by slightly different inputs. For example, ' Mark@gmail.com' and 'mark@gmail.com ' would both be sanitized into 'mark@gmail.com'.

Sanitizers can simply be chained onto the end of the validators:

// routes.js
router.post('/contact', [
  check('message')
    .isLength({ min: 1 })
    .withMessage('Message is required')
    .trim(),
  check('email')
    .isEmail()
    .withMessage('That email doesn‘t look right')
    .bail()
    .trim()
    .normalizeEmail()
], (req, res) => {
  const errors = validationResult(req);
  res.render('contact', {
    data: req.body,
    errors: errors.mapped()
  });

  const data = matchedData(req);
  console.log('Sanitized:', data);
});

The matchedData function returns the output of the sanitizers on our input.

Also, notice our use of the bail method, which stops running validations if any of the previous ones have failed. We need this because if a user submits the form without entering a value into the email field, the normalizeEmail will attempt to normalize an empty string and convert it to an @. This will then be inserted into our email field when we re-render the form.

The post Forms, File Uploads and Security with Node.js and Express appeared first on SitePoint.

Use ipdata’s Geolocation Data to Protect & Customize Your Site

Feb 18, 2020

Description:

This article was created in partnership with ipdata. Thank you for supporting the partners who make SitePoint possible.

Modern websites are becoming more and more effective at customizing content based on their visitors’ location. They can redirect users to a page in their own language, display prices in the local currency, pre-fill webforms with location information, and show the current time and date for the correct timezone.

ipdata is a low-latency API that provides website owners with a wide variety of information about their visitors based on IP address (IPv4 and IPv6). Think of it as an IP geolocation and threat intelligence API.

By using a visitor’s IP address you can learn their continent, country, region, city, latitude and longitude, organization or ISP, and timezone. The API also detects Proxy and Tor users, as well as known spammers and bad bots. Blocking these risks will protect your website, and reduce the need for security strategies like CAPTCHA.

Let’s look specifically at some ways ipdata can help, and how to implement them on your own website.

Redirect Visitors and Localize Content

When you visit the ipdata website you’ll immediately see what the service is capable of. Everything that can be learned from your own IP address is displayed.

ipdata data return example

That data includes:

- Whether you’re in the EU
- Your city
- State or region (and region code)
- Country (and country code)
- Continent (and continent code)
- Latitude and longitude
- Postal or zip code
- Country calling code
- Your country’s flag emoji
- Your service provider’s ASN and carrier information
- Languages
- Currency (name, code, symbol, plural)
- Time zone (name and abbreviation, offset, daylight savings time, current time)
- Threat information (Tor, Proxy, anonymous, known attacker, known abuser, threat, bogon)

You can call ipdata's API on each page request to geolocate your visitors and localize their content. Here’s a handful of ideas of what you can achieve:

- Restrict or block access to your content to specific countries or continents
- Redirect users to country-specific (or language-specific) sites or pages
- Pre-fill your webforms with their location data
- Show your visitors their local time and weather
- Display events that are near your visitors, or available flights in their area
- Serve targeted ads based on location
- Enforce GDPR compliance
- Automatically convert prices on your e-commerce store to their local currency, using the correct currency symbol
- More accurately analyze where your traffic is coming from

You can get a client’s IP address using JavaScript, but it’s a bit of work. Instead, use ipdata’s API. It’s super-fast and reliable across all browsers. Here’s the code:

$.get("https://api.ipdata.co?api-key=test", function(response) { console.log(response.ip); }, "jsonp");

Once you have a visitor’s API address, ipdata’s documentation shows you how to get their location in 26 different languages. You’ll also find detailed tutorials on how to code for a variety of use cases. Here are a few examples.

To block (or allow) users by country, look up the ISO 3166 ALPHA-2 Country Codes for the ones you want to blacklist or whitelist. Then follow this sample code to learn how to blacklist or whitelist them.

// List of countries we want to block
// To see this in action add your country code to the array
var blacklist = ['US', 'CA', 'UK', 'IN']

// Getting the country code from the user's IP
$.get("https://api.ipdata.co?api-key=test", function (response) {
  // Checking if the user's country code is in the blacklist
  // You could inverse the logic here to use a whitelist instead
  if (blacklist.includes(response.country_code)) {
    alert('This content is not available at your location.');
  } else {
    alert("You're allowed to see this!")
  }
}, "jsonp");

Redirecting users by country is useful if you have country-specific online stores, or if you have a separate page with content in their language or with country-specific contact details.

Here’s an example of how to redirect visitors located in the UK, Germany and Australia to https://uk.store.ipdata.co, https://de.store.ipdata.co and https://au.store.ipdata.co respectively.

// Getting the country code from the user's IP
$.get("https://api.ipdata.co?api-key=test", function (response) {
  if (response.country_code == 'UK') {
    window.location.href = "https://uk.store.ipdata.co";
  } else if (response.country_code == 'DE') {
    window.location.href = "https://de.store.ipdata.co";
  } else if (response.country_code == 'AU') {
    window.location.href = "https://au.store.ipdata.co";
  }
}, "jsonp");

You can also personalize the content of your site depending on the user’s location. Here’s an example that displays a special offer to UK visitors only:

// Getting the country name from the user's IP
$.get("https://api.ipdata.co?api-key=test", function (response) {
  if (response.country_code == 'UK') {
    alert("Special offer for all our users from " + response.country_name + "!");
  }
}, "jsonp");

Instead of targeting a whole country, you can drill down to region, city or postal code (zip code). Alternatively, you could target a time zone or specific currency.
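For example, here's a sketch of drilling down to city level using the city field from the response shown earlier (the city is, of course, illustrative):

// Getting the city from the user's IP
$.get("https://api.ipdata.co?api-key=test", function (response) {
  if (response.city == 'Berlin') {
    alert("Hello, Berlin!");
  }
}, "jsonp");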

You can further personalize your content by displaying the user’s local time (adjusted for DST) and local currency symbol. To request time zone data for IP address “3.3.3.3”:

$ curl https://api.ipdata.co/3.3.3.3/time_zone?api-key=test

You’ll receive this response, which includes the name and abbreviation of the time zone, its UTC offset, whether it is currently DST, and the local time:

{ "name": "America/Los_Angeles", "abbr": "PDT", "offset": "-0700", "is_dst": true, "current_time": "2019-03-27T01:13:48.930025-07:00" }

Currency detection is similar. Here’s an example for the IP address “203.100.0.51”:

curl https://api.ipdata.co/203.100.0.51/currency?api-key=test

And the response:

{ "name": "Australian Dollar", "code": "AUD", "symbol": "AU$", "native": "$", "plural": "Australian dollars" } Protect Your Website from Threats

You can also use ipdata to identify potential threats against your website. They maintain a database of over 600 million malicious IP addresses, open proxies, Tor nodes, spammers, botnets, and attackers. These are aggregated only from high-quality, authoritative sources. You can use this information in a variety of ways:

- Protect your comments by blocking known spammers and bad bots, alleviating the need for CAPTCHA
- Detect frauds by determining if their credit card is from a country different to where they are located
- Block anonymous traffic to eliminate the risks that come from such networks
- Block high-risk countries, such as the countries where most of your malware and attacks originate
- Prevent “free trial abuse” by detecting Proxy and Tor users

Here’s how to access the threat data for the IP address “103.76.180.54”:

curl https://api.ipdata.co/103.76.180.54/threat?api-key=test

The request generates the following response:

{ "is_tor": true, "is_proxy": false, "is_anonymous": true, "is_known_attacker": false, "is_known_abuser": false, "is_threat": false, "is_bogon": false }

The visitor is using a Tor network. is_anonymous is true if the visitor is either a Tor or Proxy user. You can use ipdata to stop anonymous users creating an account. Here’s some sample code from the official documentation:

// Getting the anonymity status from the user's IP
$.get("https://api.ipdata.co?api-key=test", function (response) {
  if (response.threat.is_anonymous) {
    alert("You are not allowed to create an account.");
  }
}, "jsonp");

You can get more specific, for example, by blocking Proxy users but letting Tor users through:

// Getting the anonymity status from the user's IP
$.get("https://api.ipdata.co?api-key=test", function (response) {
  if (response.threat.is_proxy) {
    alert("You are not allowed to create an account.");
  }
}, "jsonp");

Some users are repeat offenders, having been repeatedly reported by admins of other websites for spam or malicious activity. You can stop them from creating an account by blocking them if either of these fields is true:

- is_known_abuser: IP addresses that have been reported to be sources of spam
- is_known_attacker: IPs that have been reported to be the source of malicious activity

Why Choose ipdata?

ipdata compares very favorably with other IP Geolocation APIs. It is written in Python 3 with an average execution time of 2.9 ms. It’s fast and reliable enough to keep a long list of clients happy, including Comcast, Redhat, Cooperpress, Sphero, AMD, and NASA.

ipdata is highly scalable, with low latency globally. The API serves millions of requests every day at an average speed of just ~65ms, and runs in eleven data centers around the world:

- 4 in the US
- 1 in Canada
- 2 in Europe (London and Frankfurt)
- 1 in India (Mumbai)
- 1 in South America (Sao Paulo)
- 1 in South Korea (Seoul)
- 1 in Australia (Sydney)

According to Jonathan Kosgei, the Founder of ipdata, execution time is kept low by not doing any database reads or writes in the application code. “A separate authorizer function handles getting usage data from DynamoDB and authorizing users based on whether they’re within their quota or not. And its results are cached.”

Start Geolocating Your Visitors with ipdata

By now I’m sure you’ve thought of a dozen ways you can use ipdata to enhance and protect your website, or those of your clients. Sign up for free and start testing it!

The service is Jonathan Kosgei’s first SaaS, and he’s quite transparent about the smart way he set it up and the lessons he learned along the way. Learn from his experiences in his guest posts:

- How to build a SaaS with $0 (Hackernoon) — here he lists the free tiers of numerous products that enabled him to build the service without a large initial outlay
- How ipdata uses AWS to serve a global, highly-scalable IP geolocation API (AWS Startups Blog) — goes into more detail with a focus on AWS
- How Ipdata Serves 25M API Calls From 10 Infinitely Scalable Global Endpoints For $150 A Month (High Scalability) — details how he responded to a failure on Black Friday by choosing a new tech stack

The post Use ipdata’s Geolocation Data to Protect & Customize Your Site appeared first on SitePoint.

How to Get Involved in the Booming Python Job Market

Feb 17, 2020

Description:

How to Jump Aboard the Booming Python Job Market

From finance to artificial intelligence, data science to web development, there isn't an area in which Python isn't established and flourishing. So let's discuss actual salaries, in-demand skills, marketplaces, and what to do in order to remain competitive.

Find remote jobs in tech, including Python, on SitePoint Remote.

The Job Market Today

Information technology has created an extremely varied and dynamic market, and "computer science" alone is something of an umbrella term now. Pretty much everything has elements of IT in it to some degree — from the algorithms that recommend which TV series you should watch, to the code on this page, and even the software integrating your home appliances with your mobile.

From this wide array of areas — all of them careers in their own right — we'll pick a handful. All of them are within multi-million/billion-dollar industries that are particularly hot as of 2020, and will most probably remain active in the foreseeable future.

We are talking about:

AI
cloud development
cryptocurrencies and finance
data science
web development and mobile apps

Nearly any position in an exciting, forward-moving and profitable industry will require Python mastery. Stack Overflow Trends very eloquently shows how Python has gained traction since 2008 to become the most talked-about programming language.

But programming alone won't cut it. You’ll also need solid knowledge specific to the industry before you'll be considered for the position.

Let's examine how Python has stacked up against other languages in each field during the last five years on Google Trends, and also discuss what additional knowledge will be expected from you.

Statistics Analysis and Deep Learning

The post How to Get Involved in the Booming Python Job Market appeared first on SitePoint.

What Is Node and When Should I Use It?

Feb 16, 2020

Description:

What Is Node and When Should I Use It?

So you’ve heard of Node.js, but aren’t quite sure what it is or where it fits into your development workflow. Or maybe you’ve heard people singing Node’s praises and now you’re wondering if it’s something you need to learn. Perhaps you’re familiar with another back-end technology and want to find out what’s different about Node.

If that sounds like you, then keep reading. In this article, I’ll take a beginner-friendly, high-level look at Node.js and its main paradigms. I’ll examine Node’s main use cases, as well as the current state of the Node landscape, and offer you a wide range of jumping off points (for further reading) along the way.

Please note that, throughout the article, I’ll use “Node” and “Node.js” interchangeably.

What Is Node.js?

There are plenty of definitions to be found online. Let’s take a look at a couple of the more popular ones. This is what the project’s home page has to say:

Node.js® is a JavaScript runtime built on Chrome's V8 JavaScript engine.

And this is what Stack Overflow has to offer:

Node.js is an event-based, non-blocking, asynchronous I/O runtime that uses Google's V8 JavaScript engine and libuv library.

Hmmm, “event-based”, “non-blocking”, “asynchronous I/O” — that’s quite a lot to digest in one go. So let’s approach this from a different angle and begin by focusing on the other detail that both descriptions mention — the V8 JavaScript engine.

Node Is Built on Google Chrome’s V8 JavaScript Engine

The V8 engine is the open-source JavaScript engine that runs in Google Chrome and other Chromium-based web browsers, including Brave, Opera, and Vivaldi. It was designed with performance in mind and is responsible for compiling JavaScript directly to native machine code that your computer can execute.

However, when we say that Node is built on the V8 engine, we don’t mean that Node programs are executed in a browser. They aren’t. Rather, the creator of Node (Ryan Dahl) took the V8 engine and enhanced it with various features, such as a file system API, an HTTP library, and a number of operating system–related utility methods.

This means that Node.js is a program we can use to execute JavaScript on our computers. In other words, it’s a JavaScript runtime.

How Do I Install Node.js?

In this next section, we’ll install Node and write a couple of simple programs. We’ll also look at npm, a package manager that comes bundled with Node.

Node Binaries vs Version Manager

Many websites will recommend that you head to the official Node download page and grab the Node binaries for your system. While that works, I would suggest that you use a version manager instead. This is a program that allows you to install multiple versions of Node and switch between them at will. There are various advantages to using a version manager. For example, it negates potential permission issues when using Node with npm and lets you set a Node version on a per-project basis.

If you fancy going the version manager route, please consult our quick tip: Install Multiple Versions of Node.js using nvm. Otherwise, grab the correct binaries for your system from the link above and install those.

“Hello, World!” the Node.js Way

You can check that Node is installed on your system by opening a terminal and typing node -v. If all has gone well, you should see something like v12.14.1 displayed. This is the current LTS version at the time of writing.

Next, create a new file hello.js and copy in the following code:

console.log("Hello, World!");

This uses Node’s built-in console module to display a message in a terminal window. To run the example, enter the following command:

node hello.js

If Node.js is configured properly, “Hello, World!” will be displayed.

Node.js Has Excellent Support for Modern JavaScript

As can be seen on this compatibility table, Node has excellent support for ECMAScript 2015 (ES6) and beyond. As you’re only targeting one runtime (a specific version of the V8 engine), this means that you can write your JavaScript using the latest and most modern syntax. It also means that you don’t generally have to worry about compatibility issues — as you would if you were writing JavaScript that would run in different browsers.

To illustrate the point, here's a second program that makes use of several modern JavaScript features, such as tagged template literals, object destructuring and Array.prototype.flat():

function upcase(strings, ...values) {
  return values.map(name => name[0].toUpperCase() + name.slice(1))
    .join(' ') + strings[2];
}

const person = {
  first: 'brendan',
  last: 'eich',
  age: 56,
  position: 'CEO of Brave Software',
};

const { first, last } = person;
const emoticon = [ ['┌', '('], ['˘', '⌣'], ['˘', ')', 'ʃ'] ];

console.log(
  upcase`${first} ${last} is the creator of JavaScript! ` + emoticon.flat().join('')
);

Save this code to a file called index.js and run it from your terminal using the command node index.js. You should see Brendan Eich is the creator of JavaScript! ┌(˘⌣˘)ʃ output to the terminal.

Introducing npm, the JavaScript Package Manager

As I mentioned earlier, Node comes bundled with a package manager called npm. To check which version you have installed on your system, type npm -v.

In addition to being the package manager for JavaScript, npm is also the world’s largest software registry. There are over 1,000,000 packages of JavaScript code available to download, with billions of downloads per week. Let’s take a quick look at how we would use npm to install a package.

Installing a Package Globally

Open your terminal and type the following:

npm install -g jshint

This will install the jshint package globally on your system. We can use it to lint the index.js file from the previous example:

jshint index.js

You should now see a number of ES6-related errors. If you want to fix them up, add /* jshint esversion: 6 */ to the top of the index.js file, re-run the command and linting should pass.

If you’d like a refresher on linting, see A Comparison of JavaScript Linting Tools.

The post What Is Node and When Should I Use It? appeared first on SitePoint.

Top 2020 WordPress Plugins for Supercharging Your Website

Feb 12, 2020

Description:

Top 2020 WP Plugins for Supercharging Your Website

WordPress has become far and away the most popular website-building platform of them all. It has a wealth of tools to help you design and build a professional-looking portfolio, blog, eCommerce site, or virtually any other type of website.

Nothing is perfect, though. Even if this remarkable web-building platform comes close, it still lacks certain tools and capabilities — ones that could, for example, give your website an important extra feature, or simply put it on steroids.

Help, as they say, is just around the corner — in the form of WordPress plugins.

However, there are over 55,000 of them.

Hopefully one or more of the eight premium plugins described in this article addresses a capability you’ve been searching for. In any event, we’re willing to bet that you’d like to take a few of these popular plugins for a spin. After all, they’re free to try or use.

Sounds like a winner.

1. Brizy Website Builder for WordPress

Brizy Website Builder for WordPress

There’s no shortage of good reasons to add the Brizy WordPress plugin to your web design toolkit, but before going into more detail, let's start with several really good ones.

Brizy is easy to set up, intuitive to use, and lets you start on a website design quickly. You can build a page in minutes, and no coding is required to do so. Brizy won't cost you a dime: it's free to download and use.

If that’s not quite enough to give this premium website-building plugin a try, there’s more. You get more than 500 pre-made blocks, 40 popups, and 150 layouts right out of the box — nice to have if you don’t wish to start from scratch.

In addition, the package contains 4,000 icons, all the global colors and fonts you’re ever likely to need, a popup builder, app integration and lead generation options, and plenty of helpful documentation including video tutorials.

Whether you’re a long-time WordPress user or just getting started, Brizy provides extras you’ll wish you had known about earlier.

2. wpDataTables

wpDataTables

The wpDataTables learning curve isn't steep, but every minute you invest in getting familiar with this plugin is worth it. That's because this premium plugin can do so many things for you. It's also a huge time saver, as you'll discover if you've ever had to organize and manage huge amounts of data, in various formats, and from several sources.

Not to worry. The user documentation is highly detailed and easy to follow.

The wpDataTables plugin enables you to build a website that can easily manage data from Excel and CSV files, Google spreadsheets, MySQL queries, JSON and XML feeds, and many more sources.

You'll be able to build editable tables and charts featuring, among other things, financial or operational statistics, complex analysis and comparison data, and extensive product catalogs.

Those tables and charts will be easy to understand and responsive, and they can be colorful as well.

3. Logic Hop – Content Personalization for WordPress

Logic Hop

Logic Hop has been one of our favorites for a while now, and the reason is simple: Logic Hop is the best personalization plugin for WordPress — hands down.

The nitty gritty? Logic Hop is fully GDPR compliant, it works with and without caching, its support team is truly one of the best, and it will make you more money. What’s not to love?

2020 is shaping up to be the year of content personalization. Why? Savvy marketers and developers are starting to realize its true potential. By personalizing a simple call to action, you can increase conversions and sales by over 200%. This isn’t BS — it’s well documented! And Logic Hop makes it easy to do with integrations for your favorite page builders like Divi, Elementor, and Beaver Builder.

Don’t sit on the sidelines. Start personalizing today.

4. StarCat Reviews

StarCat Reviews

StarCat Reviews is an all-in-one WordPress Review plugin that meets all your review needs. Create any type of review website easily, including a user-generated review site. You can make good money reviewing products and services on your website.

It includes an advanced review system with Multiple Rating Criteria, Pros & Cons, and Review Replies that can be used in any Post, Page, WooCommerce page or CPT. It also has premium add-ons with more powerful features.

The post Top 2020 WordPress Plugins for Supercharging Your Website appeared first on SitePoint.

How to Make a Simple JavaScript Quiz

Feb 11, 2020

Description:

"How do I make a JavaScript quiz?" is one of the most common questions asked by people learning web development, and for good reason. Quizzes are fun! They’re a great way of learning about new subjects, and they allow you to engage your audience with something fun and playful.

How to Make a JavaScript Quiz

Coding your own JavaScript quiz is also a fantastic learning exercise. It teaches you how to deal with events, manipulate the DOM, handle user input, give feedback to the user and keep track of their score (for example, using client-side storage). And when you have a basic quiz up and running, there are a whole bunch of possibilities to add more advanced functionality, such as pagination. I go into this at the end of the article.

In this tutorial, I'll walk you through creating a multi-step JavaScript quiz which you'll be able to adapt to your needs and add to your own site. If you'd like to see what we'll be ending up with, you can skip ahead and see the working quiz.

Things to Be Aware of Before Starting

A few things to know before starting:

This is a front-end tutorial, meaning that anyone who knows how to look through the source code of a page can find the answers. For serious quizzes, the data needs to be handled through the back end, which is beyond the scope of this tutorial.
The code in this article uses modern JavaScript syntax (ES6+), meaning it will not be compatible with any versions of Internet Explorer. However, it will work just fine on modern browsers, including Microsoft Edge. If you need to support older browsers, I've written a JavaScript quiz tutorial that's compatible back to IE8. Or, if you'd like a refresher on ES6, check out this course by Darin Haener over on SitePoint Premium.
You'll need some familiarity with HTML, CSS, and JavaScript, but each line of code will be explained individually.

The Basic Structure of Your JavaScript Quiz

Ideally, we want the quiz's questions and answers to be in our JavaScript code and have our script automatically generate the quiz. That way, we won't need to write a lot of repetitive markup, and we can add and remove questions easily.

To set up the structure of our JavaScript quiz, we'll need to start with the following HTML:

A <div> to hold the quiz
A <button> to submit the quiz
A <div> to display the results

Here's how that would look:

<div id="quiz"></div>
<button id="submit">Submit Quiz</button>
<div id="results"></div>

We can then select these HTML elements and store references to them in variables like so:

const quizContainer = document.getElementById('quiz');
const resultsContainer = document.getElementById('results');
const submitButton = document.getElementById('submit');

Next we'll need a way to build a quiz, show results, and put it all together. We can start by laying out our functions, and we'll fill them in as we go:

function buildQuiz(){}

function showResults(){}

// display quiz right away
buildQuiz();

// on submit, show results
submitButton.addEventListener('click', showResults);

Here, we have functions to build the quiz and show the results. We'll run our buildQuiz function immediately, and we'll have our showResults function run when the user clicks the submit button.

Displaying the Quiz Questions

The next thing our quiz needs is some questions to display. We'll use object literals to represent the individual questions and an array to hold all of the questions that make up our quiz. Using an array will make the questions easy to iterate over:

const myQuestions = [
  {
    question: "Who invented JavaScript?",
    answers: {
      a: "Douglas Crockford",
      b: "Sheryl Sandberg",
      c: "Brendan Eich"
    },
    correctAnswer: "c"
  },
  {
    question: "Which one of these is a JavaScript package manager?",
    answers: {
      a: "Node.js",
      b: "TypeScript",
      c: "npm"
    },
    correctAnswer: "c"
  },
  {
    question: "Which tool can you use to ensure code quality?",
    answers: {
      a: "Angular",
      b: "jQuery",
      c: "RequireJS",
      d: "ESLint"
    },
    correctAnswer: "d"
  }
];

Feel free to put in as many questions or answers as you want.

Note: as this is an array, the questions will appear in the order they’re listed. If you want to sort the questions in any way before presenting them to the user, check out our quick tip on sorting an array of objects in JavaScript.
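That quick tip covers sorting; if instead you'd like the questions in a random order each time, here's a minimal sketch (not part of the original tutorial; shuffleQuestions is a name I've made up) using a Fisher-Yates shuffle:

// Walk the array backwards, swapping each element
// with a randomly chosen earlier one
function shuffleQuestions(questions) {
  for (let i = questions.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [questions[i], questions[j]] = [questions[j], questions[i]];
  }
  return questions;
}

// call this before buildQuiz() runs
shuffleQuestions(myQuestions);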

Now that we have our list of questions, we can show them on the page. We'll go through the following JavaScript line by line to see how it works:

function buildQuiz(){
  // variable to store the HTML output
  const output = [];

  // for each question...
  myQuestions.forEach(
    (currentQuestion, questionNumber) => {

      // variable to store the list of possible answers
      const answers = [];

      // and for each available answer...
      for(const letter in currentQuestion.answers){

        // ...add an HTML radio button
        answers.push(
          `<label>
            <input type="radio" name="question${questionNumber}" value="${letter}">
            ${letter} :
            ${currentQuestion.answers[letter]}
          </label>`
        );
      }

      // add this question and its answers to the output
      output.push(
        `<div class="question"> ${currentQuestion.question} </div>
        <div class="answers"> ${answers.join('')} </div>`
      );
    }
  );

  // finally combine our output list into one string of HTML and put it on the page
  quizContainer.innerHTML = output.join('');
}

First, we create an output variable to contain all the HTML output including questions and answer choices.

Next, we can start building the HTML for each question. We'll need to loop through each question like so:

myQuestions.forEach( (currentQuestion, questionNumber) => { // the code we want to run for each question goes here });

For brevity, we're using an arrow function to perform our operations on each question. Because this is in a forEach loop, we get the current value, the index (the position number of the current item in the array), and the array itself as parameters. We only need the current value and the index, which, for our purposes, we'll name currentQuestion and questionNumber respectively.

Now let's look at the code inside our loop:

// we'll want to store the list of answer choices
const answers = [];

// and for each available answer...
for(const letter in currentQuestion.answers){

  // ...add an HTML radio button
  answers.push(
    `<label>
      <input type="radio" name="question${questionNumber}" value="${letter}">
      ${letter} :
      ${currentQuestion.answers[letter]}
    </label>`
  );
}

// add this question and its answers to the output
output.push(
  `<div class="question"> ${currentQuestion.question} </div>
  <div class="answers"> ${answers.join('')} </div>`
);

For each question, we'll want to generate the correct HTML, and so our first step is to create an array to hold the list of possible answers.

Next, we'll use a loop to fill in the possible answers for the current question. For each choice, we're creating an HTML radio button, which we enclose in a <label> element. This is so that users will be able to click anywhere on the answer text to select that answer. If the label was omitted, then users would have to click on the radio button itself, which is not very accessible.

Notice we're using template literals, which are strings but more powerful. We'll make use of the following features:

multi-line capabilities
no more having to escape quotes within quotes, because template literals use backticks instead
string interpolation, so you can embed JavaScript expressions right into your strings, like this: ${code_goes_here}.
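As a quick illustration (not part of the quiz code), here's how those three features look in isolation:

const name = 'world';

// backticks allow multi-line strings, unescaped quotes and ${} interpolation
console.log(`Hello, ${name.toUpperCase()}!
It's easy to use "quotes" here, and this string spans two lines.`);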

Once we have our list of answer buttons, we can push the question HTML and the answer HTML onto our overall list of outputs.

Notice that we're using a template literal and some embedded expressions to first create the question div and then create the answer div. The join expression takes our list of answers and puts them together in one string that we can output into our answers div.

Now that we've generated the HTML for each question, we can join it all together and show it on the page:

quizContainer.innerHTML = output.join('');

Now our buildQuiz function is complete.

You should be able to run the quiz at this point and see the questions displayed. Please note, however, that the structure of your code is important. Due to something called the temporal dead zone, you can’t reference your questions array before it has been defined.

To recap, this is the correct structure:

// Functions
function buildQuiz(){ ... }
function showResults(){ ... }

// Variables
const quizContainer = document.getElementById('quiz');
const resultsContainer = document.getElementById('results');
const submitButton = document.getElementById('submit');
const myQuestions = [ ... ];

// Kick things off
buildQuiz();

// Event listeners
submitButton.addEventListener('click', showResults);

Displaying the Quiz Results

At this point, we want to build out our showResults function to loop over the answers, check them, and show the results.

Here's the function, which we'll go through in detail next:

function showResults(){

  // gather answer containers from our quiz
  const answerContainers = quizContainer.querySelectorAll('.answers');

  // keep track of user's answers
  let numCorrect = 0;

  // for each question...
  myQuestions.forEach( (currentQuestion, questionNumber) => {

    // find selected answer
    const answerContainer = answerContainers[questionNumber];
    const selector = `input[name=question${questionNumber}]:checked`;
    const userAnswer = (answerContainer.querySelector(selector) || {}).value;

    // if answer is correct
    if(userAnswer === currentQuestion.correctAnswer){
      // add to the number of correct answers
      numCorrect++;

      // color the answers green
      answerContainers[questionNumber].style.color = 'lightgreen';
    }
    // if answer is wrong or blank
    else{
      // color the answers red
      answerContainers[questionNumber].style.color = 'red';
    }
  });

  // show number of correct answers out of total
  resultsContainer.innerHTML = `${numCorrect} out of ${myQuestions.length}`;
}

First, we select all the answer containers in our quiz's HTML. Then we'll create variables to keep track of the user's current answer and the total number of correct answers.

// gather answer containers from our quiz
const answerContainers = quizContainer.querySelectorAll('.answers');

// keep track of user's answers
let numCorrect = 0;

Now we can loop through each question and check the answers.

// for each question...
myQuestions.forEach( (currentQuestion, questionNumber) => {

  // find selected answer
  const answerContainer = answerContainers[questionNumber];
  const selector = `input[name=question${questionNumber}]:checked`;
  const userAnswer = (answerContainer.querySelector(selector) || {}).value;

  // if answer is correct
  if(userAnswer === currentQuestion.correctAnswer){
    // add to the number of correct answers
    numCorrect++;

    // color the answers green
    answerContainers[questionNumber].style.color = 'lightgreen';
  }
  // if answer is wrong or blank
  else{
    // color the answers red
    answerContainers[questionNumber].style.color = 'red';
  }
});

The general gist of this code is:

find the selected answer in the HTML
handle what happens if the answer is correct
handle what happens if the answer is wrong.

Let's look more closely at how we're finding the selected answer in our HTML:

// find selected answer
const answerContainer = answerContainers[questionNumber];
const selector = `input[name=question${questionNumber}]:checked`;
const userAnswer = (answerContainer.querySelector(selector) || {}).value;

First, we're making sure we're looking inside the answer container for the current question.

In the next line, we're defining a CSS selector that will let us find which radio button is checked.

Then we're using JavaScript's querySelector to search for our CSS selector in the previously defined answerContainer. In essence, this means that we'll find which answer's radio button is checked.

Finally, we can get the value of that answer by using .value.

Dealing with Incomplete User Input

But what if the user has left an answer blank? In this case, using .value would cause an error because you can't get the value of something that's not there. To solve this, we've added ||, which means "or", and {}, which is an empty object. Now the overall statement says:

Get a reference to our selected answer element OR, if that doesn't exist, use an empty object.
Get the value of whatever was in the first statement.

As a result, the value will either be the user's answer or undefined, which means a user can skip a question without crashing our quiz.
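As an aside, on very recent engines you could express the same "answer or undefined" logic with ES2020's optional chaining operator. This is an alternative sketch, not what the tutorial's code uses, and it requires up-to-date browser support:

// equivalent to (answerContainer.querySelector(selector) || {}).value
const userAnswer = answerContainer.querySelector(selector)?.value;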

Evaluating the Answers and Displaying the Result

The next statements in our answer-checking loop will let us handle correct and incorrect answers.

// if answer is correct
if(userAnswer === currentQuestion.correctAnswer){
  // add to the number of correct answers
  numCorrect++;

  // color the answers green
  answerContainers[questionNumber].style.color = 'lightgreen';
}
// if answer is wrong or blank
else{
  // color the answers red
  answerContainers[questionNumber].style.color = 'red';
}

If the user's answer matches the correct choice, increase the number of correct answers by one and (optionally) color the set of choices green. If the answer is wrong or blank, color the answer choices red (again, optional).

Once the answer-checking loop is finished, we can show how many questions the user got right:

// show number of correct answers out of total
resultsContainer.innerHTML = `${numCorrect} out of ${myQuestions.length}`;

And now we have a working JavaScript quiz!

If you'd like, you can wrap the whole quiz in an IIFE (immediately invoked function expression), which is a function that runs as soon as you define it. This will keep your variables out of global scope and ensure that your quiz doesn't interfere with any other scripts running on the page.

(function(){
  // put the rest of your code here
})();

The post How to Make a Simple JavaScript Quiz appeared first on SitePoint.

Committing Changes to Your Codebase the Right Way

Feb 10, 2020

Description:

Committing Changes to Your Codebase the Right Way

The difference between a good and a bad commit can be huge. It's no fun having to ask your colleague — or your past self — what a particular change was about, or what the current state of things is.

This article aims to provide a thorough guide to the best practices of software commits.

Why Bother?

If you’re already storing your projects on GitHub, you might assume the files are safe and that whenever you need to update code you'll pull the changes, and that’s enough. All of that might be true. But let's see what potential problems you can avoid by going the extra mile, and what additional benefits await if you do.

No Man Is an Island, Either in Teams or Individually

The reasoning above typically comes from a developer used to working alone. But the moment they need to share code with somebody else, we can expect that things are going to get messy and require a lot of explanation. Like, a lot.

Remember that our work doesn't end at just writing code. We also need to manage things, and that requires a degree of organization and methodology. And while working in teams more readily exposes the problems caused by poor organization, we can also benefit from a better approach when working by ourselves.

Atomic vs Bloated Commits

We've all needed to revert a small change and found ourselves looking for it in a massive commit that changes dozens of files and adds multiple features. How much easier would the rollback be if it was located in one commit that only addressed that specific issue?

The Messy, Bloated Way

git add *
git commit -m "new components"

In this example, we can bet that a large number of files are being affected. Additionally, the message "new components" doesn't tell us much of anything — such as what components, which functionality for those components, and if the functionality is new or a refactor. Also, are any existing bugs being addressed?

That information will be important when we need to change or recover something. We'll be trying to find a needle in a haystack, and we might just end up looking at the codebase instead and spending valuable time debugging while we're at it.

The Atomic Way

git add ui/login.html static/js/front-end.js
git commit -m "validate input fields for login"

Now we're getting somewhere, as we start to have a clearer idea of what's going on with that one commit.

The trick is that we can semi-automatically commit changes as part of our workflow. That is, doing a block of work that does something very specific (implementing particular functionality, fixing a bug, optimizing an algorithm), testing it (and writing a unit test, if need be), adding a description while our memories are fresh, and committing right away. Rinse and repeat.

The Structure of a Good Commit

These rules aren't carved in stone, but can help you estimate what a good commit might look like:

unambiguous: no second-guessing about what those changes do.
insightful: clearly describing what the code does, even providing links or extra information when necessary, and marking the bugs or issues that are being addressed.
atomic: addressing one single thing at a time (think of a "block of work", which could be anything from 20 minutes to two hours, or even two minutes if it was a quick bugfix).

Let's look at a template and break it down:

The post Committing Changes to Your Codebase the Right Way appeared first on SitePoint.

An Introduction to REST and RESTful APIs

Feb 5, 2020

Description:

An Introduction to REST and RESTful APIs

REST is an acronym for Representational State Transfer — an almost meaningless description of the most-used web service technology! REST is a way for two computer systems to communicate over HTTP in a similar way to web browsers and servers.

Sharing data between two or more systems has always been a fundamental requirement of software development. For example, consider buying motor insurance. Your insurer must obtain information about you and your vehicle so they request data from car registration authorities, credit agencies, banks, and other systems. All this happens transparently in real time to determine whether a policy can be offered.

REST Example

Open the following link in your browser to request a random programming joke:

https://official-joke-api.appspot.com/jokes/programming/random

This is a public API implemented as RESTful web service (it follows REST conventions). Your browser will show an awful JSON-formatted programming joke, such as:

[
  {
    "id": 29,
    "type": "programming",
    "setup": "There are 10 types of people in this world...",
    "punchline": "Those who understand binary and those who don't"
  }
]

You could request the same URL and get a response using any HTTP client, such as curl:

curl "https://official-joke-api.appspot.com/jokes/programming/random"

HTTP client libraries are available in all popular languages and runtimes including Fetch in JavaScript and file_get_contents() in PHP. A JSON response is machine-readable so it can be parsed and output in HTML or any other format.
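For example, here's a minimal sketch of consuming the same endpoint with the Fetch API mentioned above (the response is an array containing a single joke object, as shown earlier):

fetch('https://official-joke-api.appspot.com/jokes/programming/random')
  .then(response => response.json())
  .then(jokes => {
    // the endpoint returns an array with one joke object
    const joke = jokes[0];
    console.log(`${joke.setup} ${joke.punchline}`);
  });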

REST and the Rest

Various data communication standards have evolved over the years. You may have encountered standards including CORBA, SOAP, or XML-RPC, which usually established strict messaging rules.

REST was defined in 2000 by Roy Fielding and is considerably simpler. It's not a standard but a set of recommendations and constraints for RESTful web services. These include:

Client-Server. SystemA makes an HTTP request to a URL hosted by SystemB, which returns a response.

It's identical to how a browser works. The application makes a request for a specific URL. The request is routed to a web server that returns an HTML page. That page may contain references to images, style sheets, and JavaScript, which incur further requests and responses.

Stateless. REST is stateless: the client request should contain all the information necessary to respond to a request. In other words, it should be possible to make two or more HTTP requests in any order and the same responses will be received.

Cacheable. A response should be defined as cacheable or not.

Layered. The requesting client need not know whether it’s communicating with the actual server, a proxy, or any other intermediary.

Creating a RESTful Web Service

A RESTful web service request contains:

An Endpoint URL. An application implementing a RESTful API will define one or more URL endpoints with a domain, port, path, and/or querystring — for example, https://mydomain/user/123?format=json.

The HTTP method. Differing HTTP methods can be used on any endpoint, and they map to the application's create, read, update, and delete (CRUD) operations:

HTTP method   CRUD     Action
GET           read     returns requested data
POST          create   creates a new record
PUT or PATCH  update   updates an existing record
DELETE        delete   deletes an existing record

Examples:

a GET request to /user/ returns a list of registered users on a system
a POST request to /user/123 creates a user with the ID 123 using the body data
a PUT request to /user/123 updates user 123 with the body data
a GET request to /user/123 returns the details of user 123
a DELETE request to /user/123 deletes user 123

HTTP headers. Information such as authentication tokens or cookies can be contained in the HTTP request header.

Body Data. Data is normally transmitted in the HTTP body in an identical way to HTML <form> submissions or by sending a single JSON-encoded data string.

The Response

The response payload can be whatever is practical: data, HTML, an image, an audio file, and so on. Data responses are typically JSON-encoded, but XML, CSV, simple strings, or any other format can be used. You could allow the return format to be specified in the request — for example, /user/123?format=json or /user/123?format=xml.

An appropriate HTTP status code should also be set in the response header. 200 OK is most often used for successful requests, although 201 Created may also be returned when a record is created. Errors should return an appropriate code such as 400 Bad Request, 404 Not Found, 401 Unauthorized, and so on.
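As an illustration, a route might set these codes like so. This sketch uses the Express framework from the example further below, and findUser is a hypothetical lookup helper, not a real API:

// return the record with 200 OK, or a 404 if it doesn't exist
app.get('/user/:id', (req, res) => {
  const user = findUser(req.params.id); // hypothetical helper

  if (!user) {
    // no such record: respond with 404 Not Found
    return res.status(404).json({ error: 'Not Found' });
  }

  // record exists: respond with 200 OK (the default) and the data
  res.json(user);
});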

Other HTTP headers can be set including the Cache-Control or Expires directives to specify how long a response can be cached before it’s considered stale.

However, there are no strict rules. Endpoint URLs, HTTP methods, body data, and response types can be implemented as you like. For example, POST, PUT, and PATCH are often used interchangeably, so any of them may create or update a record.

REST "Hello World" Example

The following code creates a RESTful web service using the Node.js Express framework. A single /hello/ endpoint responds to GET requests.

Ensure you have Node.js installed, then create a new folder named restapi. Create a new package.json file within that folder with the following content:

{
  "name": "restapi",
  "version": "1.0.0",
  "description": "REST test",
  "scripts": {
    "start": "node ./index.js"
  },
  "dependencies": {
    "express": "4.17.1"
  }
}

Run npm install from the command line to fetch the dependencies, then create an index.js file with the following code:

// simple Express.js RESTful API
'use strict';

// initialize
const
  port = 8888,
  express = require('express'),
  app = express();

// /hello/ GET request
app.get('/hello/:name?', (req, res) =>
  res.json(
    { message: `Hello ${req.params.name || 'world'}!` }
  )
);

// start server
app.listen(port, () =>
  console.log(`Server started on port ${port}`)
);

Launch the application from the command line using npm start and open http://localhost:8888/hello/ in a browser. The following JSON is displayed in response to the GET request:

{ "message": "Hello world!" }

The API also allows a custom name, so http://localhost:8888/hello/everyone/ returns:

{ "message": "Hello everyone!" }

The post An Introduction to REST and RESTful APIs appeared first on SitePoint.

Practical Ways to Advance Your TypeScript Skills

Feb 4, 2020

Description:

Practical Ways to Advance Your TypeScript Skills

As a programmer, it can feel like you know something just well enough to be dangerous. For some situations, this is fine: all you need to know are those three methods in that programming language. Other times we want to develop expertise. In this article, we'll cover ways to advance your TypeScript skills to that next level.

Below are various ways you can further your TypeScript abilities. These are in no particular order.

Learn in Public

One of my favorite movements is #LearnInPublic, which gained traction after Shawn Wang published a gist. When you tell others what you're doing, opportunities arise. These come in various forms.

One form is connecting with others. You're now viewed as someone in the "TypeScript" space, because people see you working with it. They may reach out for help. They may ask you questions. They may even ask you to do freelance work or content creation. You never know.

Another door that it opens is the ability to teach others. Chances are something you learn, then explain, may unlock someone else's understanding of that topic. They see your posts and level up their skills. It's a win-win.

Ultralearning

Coined by Scott Young, ultralearning is a "strategy for aggressive, self-directed learning." Think of it like creating a college course, then doing the material at a 2x pace. The more challenging you make it, the more fulfilling it is to do. Scott provides an excellent guide for creating your own ultralearning project. I highly recommend this approach if you can make the time. Commit to a month of TypeScript and see how deep you can go.

Create Utility Types from Scratch

In the TypeScript Handbook, you can find a list of the built-in utility types. One exercise you can do is try writing them from scratch. I did this myself with the Readonly and the Partial utility types. It's a fun challenge and will help you understand more complex concepts.
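As a taster, here's a minimal sketch of what that exercise might look like (the "My" prefix is mine, used to avoid clashing with the real built-in types):

// Re-implementations of two built-in utility types
type MyReadonly<T> = {
  readonly [K in keyof T]: T[K];
};

type MyPartial<T> = {
  [K in keyof T]?: T[K];
};

interface User {
  name: string;
  age: number;
}

const frozen: MyReadonly<User> = { name: 'Ada', age: 36 };
// frozen.age = 37; // compile error: 'age' is read-only

const draft: MyPartial<User> = { name: 'Ada' }; // 'age' may be omitted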

The post Practical Ways to Advance Your TypeScript Skills appeared first on SitePoint.

5 jQuery.each() Function Examples

Feb 3, 2020

Description:

5 jQuery.each() Function Examples

This is an extensive overview of the jQuery.each() function — one of jQuery’s most important and most used functions. In this article, we’ll find out why and take a look at how you can use it.

What is jQuery.each()

jQuery’s each() function is used to loop through each element of the target jQuery object — an object that contains one or more DOM elements, and exposes all jQuery functions. It’s very useful for multi-element DOM manipulation, as well as iterating over arbitrary arrays and object properties.

In addition to this function, jQuery provides a helper function with the same name that can be called without having previously selected or created any DOM elements.

jQuery.each() Syntax

Let’s see the different modes in action.

The following example selects every <div> element on a web page and outputs the index and the ID of each of them:

// DOM ELEMENTS
$('div').each(function(index, value) {
  console.log(`div${index}: ${this.id}`);
});

A possible output would be:

div0:header
div1:main
div2:footer

This version uses jQuery’s $(selector).each() function, as opposed to the utility function.

The next example shows the use of the utility function. In this case the object to loop over is given as the first argument. In this example, we'll show how to loop over an array:

// ARRAYS
const arr = [ 'one', 'two', 'three', 'four', 'five' ];

$.each(arr, function(index, value) {
  console.log(value);

  // Will stop running after "three"
  return (value !== 'three');
});

// Outputs: one two three

In the last example, we want to demonstrate how to iterate over the properties of an object:

// OBJECTS
const obj = {
  one: 1,
  two: 2,
  three: 3,
  four: 4,
  five: 5
};

$.each(obj, function(key, value) {
  console.log(value);
});

// Outputs: 1 2 3 4 5

This all boils down to providing a proper callback. The callback’s context, this, will be equal to its second argument, which is the current value. However, since the context will always be an object, primitive values have to be wrapped:

$.each({ one: 1, two: 2 }, function(key, value) {
  console.log(this);
});

// Number { 1 }
// Number { 2 }


This means that there's no strict equality between the value and the context.

$.each({ one: 1 }, function(key, value) {
  console.log(this == value);
  console.log(this === value);
});

// true
// false


The first argument is the current index, which is either a number (for arrays) or a string key (for objects).

1. Basic jQuery.each() Function Example

Let’s see how the jQuery.each() function helps us in conjunction with a jQuery object. The first example selects all the a elements in the page and outputs their href attribute:

$('a').each(function(index, value){
  console.log(this.href);
});

The second example outputs every external href on the web page (assuming the HTTP(S) protocol only):

$('a').each(function(index, value){
  const link = this.href;

  if (link.match(/https?:\/\//)) {
    console.log(link);
  }
});

Let’s say we had the following links on the page:

<a href="https://www.sitepoint.com/">SitePoint</a>
<a href="https://developer.mozilla.org">MDN web docs</a>
<a href="http://example.com/">Example Domain</a>

The second example would output:

https://www.sitepoint.com/
https://developer.mozilla.org/
http://example.com/

We should note that DOM elements from a jQuery object are in their "native" form inside the callback passed to jQuery.each(). The reason is that jQuery is in fact just a wrapper around an array of DOM elements. By using jQuery.each(), this array is iterated in the same way as an ordinary array would be. Therefore, we don’t get wrapped elements out of the box.

With reference to our second example, this means we can get an element's href attribute by writing this.href. If we wanted to use jQuery's attr() method, we would need to re-wrap the element like so: $(this).attr('href').
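To make the contrast concrete, here's a small sketch (an illustration, not from the original article) showing both forms side by side. Note that this.href yields the resolved absolute URL, while attr('href') returns the attribute exactly as written in the markup:

$('a').each(function(index, value){
  console.log(this.href);            // native DOM property: resolved URL
  console.log($(this).attr('href')); // re-wrapped element: the raw attribute
});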

The post 5 jQuery.each() Function Examples appeared first on SitePoint.

How to Tackle a Python Interview

Jan 30, 2020

Description:

How to Tackle a Python Interview

Have you cleared the first round of calls with HR? Are you going for a Python interview in person? If you're wondering what Python-related questions may be asked, this guide should be of help.

In the first section, we’ll discuss a few questions about Python's philosophy — those that help you make decisions about the architecture of a project. In the next section, we cover questions related to the Pythonic way of programming — which may manifest in the form of review or finding the output of a code snippet.

A word of caution before we start. This guide talks primarily about Python's built-in capabilities. The aim of this guide is to help you get up to speed with the inherent Python functionalities that enable quick development. So we won't be able to cover every question you may face from the various types of companies out there.

Development in Python: Project Architecture What is Python? Why should you use Python?

If you’re interviewing for a Python role, you should have a clear idea of what Python is and how it’s different from other programming languages. Here are a few key points regarding Python that you should be aware of.

First, you should not be wrong about the etymology. A large section of Python programmers wrongly think that Guido van Rossum named it after the snake! On the contrary, Python is named after the British sketch comedy Monty Python's Flying Circus. The next time you see a Python book with a snake on the cover, you may perhaps wish to stay away from it.

Next, Python is a high-level, object-oriented, interpreted programming language. This means that Python code is executed line by line. Python is also dynamically typed, as it doesn't require you to specify the type of variables when declaring them.

Given Python's ease of use, it has found uses for common automation tasks. Python is often the go-to scripting choice for programmers who know multiple languages. With the increasing popularity of Python-based web frameworks like Django and Flask, Python's share of the pie has increased significantly in recent years.

Limitations of Python

While it’s good to know about the capabilities of a programming language, it's also good to be aware of its limitations to truly comprehend the situations you need to be wary of.

The first limitation of Python is execution speed. Though development in Python is quick, executing a similar block of Python code is often slower compared to compiled languages such as C++. For this reason, hackathons often give Python programs some extra time for execution. There are ways to circumvent this issue, though. For instance, you can integrate Python with a compiled language like C to perform the core processing through the other language.

In a world which is going mobile first, Python is not native to mobile development. You will rarely find mobile applications developed in Python. The two major mobile operating systems, Android and iOS, don’t support Python as an official programming language.

Package Determination: Django vs Flask

In addition to Python's capabilities and limitations, a category of questions that are popular in interviews focuses around choosing between packages based on your requirements. Let’s look at one approach that you may take when tackling such questions.

Let's say you’re given a choice between Django and Flask to start a web application. The answer to this question should lie within an amalgamation of the requirements of the project and the culture of the organization.

At the outset, you should know that, with the use of plugins, there's no right answer here: you can create similar applications using either framework. However, there's a stark difference between the design philosophies of the two frameworks. Flask provides the bare minimum features for creating a web application (URL routing, templating, unit testing and a development server), thereby giving you a lot of freedom to design your application. On the other hand, Django provides a large array of built-in features from the beginning — database support, extensive admin functionality, and security features.

If you’re building an application that will use relational databases, with a lot of dynamic content, you should probably choose Django. However, if you’re looking for a lot of freedom in your project, you should opt for Flask.

The post How to Tackle a Python Interview appeared first on SitePoint.

Machine Learning Pipelines: Setting Up On-premise Kubernetes

Jan 29, 2020

Description:

Machine Learning Pipelines for the Scrappy Startup Part 1: Setting Up On-premise Kubernetes

In this multi-part series, I'll walk you through how I set up an on-premise machine learning pipeline with open-source tools and frameworks.

Prologue: Model Training is Just A Tiny Part

When most people think about machine learning, they imagine engineers and data scientists tweaking network architectures, loss functions, and tuning hyper-parameters, coupled with the constant retraining until the results are satisfactory.

Indeed, training machine learning models takes a lot of hard work. A tremendous amount of time and resources are expended on research and experimentation.

However, there comes a point in time when you need to start to productionize the model that you've lovingly trained and tuned. And oh, by the way, the model is expected to perform just as well on next week's batch of data.

It slowly dawns on you that Machine Learning is much bigger than models, hyper-parameters, and loss functions. It's also what happens before, during, and after training. And it doesn't end there, because you would also need to think about re-training, especially when you get new data, since there's no guarantee that the model is going to generalize as well.

There's a very well known diagram that succinctly illustrates the issue:

The scope of machine learning

In short, you need to build a machine learning pipeline that can get you from raw data to the trained model in the shortest possible time. But here's the catch: because you're part of a scrappy startup and not flush with VC money, you're going to have to make do with the servers you have, and not rely on the paid cloud offerings of Amazon, Microsoft or Google, at least for the time being.

This is because you need a safe environment to learn and experiment in — one that won't unexpectedly shock you with a nasty bill at the end of the month.

Who You Are

You could be a software engineer at a company that's starting to think about putting its machine learning models to production, or you could be running solo and curious about what "real-world" machine learning looks like. In both cases, you would need to know about machine learning pipelines.

What You Need to Know

You should be comfortable with Linux. The examples will assume Ubuntu Linux 18.04, though slightly dated or more recent versions shouldn't cause any major issues.

You should have some working knowledge of Docker. If you know how to build images in Docker, and how to execute containers, you should be good to go. If you don't, you shouldn't worry too much: I'll guide you with enough background information, and code examples will be explained.

While this is an article about machine learning pipelines, it's not about the intricacies involved in training a model.

We're going to use Kubernetes. You don't need to be an expert in it. If you're completely new to Kubernetes, that's OK. By the end of the series, you'll have at least some hands-on experience. On the other hand, I'm not going to go very deep into Kubernetes specifics, and some commands I'll have to gloss over in the interests of brevity. Besides, the real objective here is to help you deploy machine learning pipelines as efficiently as possible.

Here are some other assumptions that I'm making about you, the astute reader:

you're not entirely clueless about machine learning
you have access to some relatively beefy servers (ideally more than one) that contain Nvidia GPUs
you have an existing machine learning code base that's written in Python
you don't work in a unicorn startup or a Fortune 500 company, and therefore are not so flush with cash that you can happily spin up multiple V100s.

What Are We Going to Do?

Machine learning pipelines have only recently begun to get more love and attention, and people are just beginning to figure everything out. Put another way, there are multiple ways to build machine learning pipelines, because every organization has unique requirements and every team has its favorite tools.

What this series aims to offer is one possible way to do it, and that's especially important when you're starting out, because the amount of information is often overwhelming. Also, installing Kubernetes is a daunting affair, littered with many roadblocks. I hope this article helps to smooth that path.

After you've learned a way to build a machine learning pipeline, you'll then be equipped with enough skills and knowledge to go build one to suit your organization's needs.

Here's a list of some of the tools I'll cover in this series:

Docker
Kubernetes
Rancher
KubeFlow/KubeFlow Pipelines
Minio
TensorFlow

As you'll soon realize while following along with the series, many of these tools assume that you have storage on Amazon S3 or Google Cloud Storage, which is, to put it mildly, not a very good assumption. Thus this series shows how to work around some of these limitations without losing any of the functionality.

Of course, at some point you'll outgrow this setup and need something more capable. However, especially when you're starting out (that is, when you happen to be the first data engineer on the team), on-premise seems the more cost-effective and ultimately the more educational choice.

Installing Kubernetes the Easy Way with Rancher

Let's start immediately with one of the harder bits — Installing Kubernetes.

The main thing you need to know about Kubernetes is that it's a container-orchestration system for automating application deployment, scaling, and management.

There are many ways to install Kubernetes, and it's not a trivial process. Fortunately, tools like Rancher make the installation process much more pleasant and less error-prone. In particular, we're going to use the Rancher Kubernetes Engine (RKE) to help us install Kubernetes.

At the point of this writing, the latest stable release of rke is 1.0.0.

Step 0: Prepare the Machines

The following steps assume that you have access to two Linux machines that are connected to the same LAN.

We’re going to set up a minimal cluster consisting of two machines, one named master and the other worker. Of course, you can name your machines whatever you want, as long as you designate one machine to be master, and the rest to be workers.

If you only have access to one machine, you can get by with creating two virtual machines, making sure to enable the Bridged Adapter. In fact, in preparation for this article, I tested everything out on Oracle's VirtualBox. Here are my settings:

Oracle VM VirtualBox Manager settings

Notice here that I have two VMs: master and node. I've enabled the Bridged Adapter on each and also set Promiscuous Mode to Allow All.

The downside to this is that you won't be able to access the GPUs, and you'll most likely notice that the performance isn't ideal, because Kubernetes tends to be quite demanding in terms of resources. Again, that's OK if you're trying this at home or only have access to a single machine at the moment.

Here are some important details about the machines (you should have them on hand too for the configuration steps that follow):

          Master               Worker
IP        192.168.86.36        192.168.86.35
User      ubuntu               ubuntu
Hostname  master               worker
SSH Keys  ~/.ssh/id_rsa.pub    ~/.ssh/id_rsa.pub
Role      Control Plane, Etcd  Worker

DNS and Load Balancing

In a production environment, you would need a hostname to point to your Kubernetes cluster. However, in this article I'm assuming you don't have one readily available, so we're going to have to fake it.

Another thing I won't cover — to keep things simple — is load balancing when it comes to the Rancher installation.

For our purposes, I'm going to use rancher-demo.domain.test as the hostname.

In both machines, open /etc/hosts file:

sudo vim /etc/hosts

Enter the following:

192.168.86.35 worker
192.168.86.35 rancher-demo.domain.test
192.168.86.36 master
127.0.0.1 localhost

Notice here that the worker node has the additional hostname of rancher-demo.domain.test. In a slightly more realistic environment, you'd have something like NGINX as a front-end to load balance between multiple worker nodes.

Note: if you're using a virtual machine, then most likely you'd be using the Ubuntu Server image, which typically doesn't come with a desktop environment. Therefore, you should also have an entry on the host computer that includes this:

192.168.86.35 rancher-demo.domain.test

That way, you'll be able to access Rancher from a browser on the host computer.

Step 1: Obtain the rke Binary

Important!: This step should only be performed on master.

Head over to the GitHub page to download the rke binary. Next, rename the binary to rke, followed by making it executable. Finally, move the binary to a location in the PATH, where /usr/local/bin is usually a good choice.

Important: make sure you select the right binary for your OS!

$ wget https://github.com/rancher/rke/releases/download/v1.0.0/rke_linux-amd64
$ mv rke_linux-amd64 rke
$ chmod +x rke
$ sudo mv rke /usr/local/bin

Now let's see if everything works:

$ rke

This should return:

NAME:
   rke - Rancher Kubernetes Engine, an extremely simple, lightning fast Kubernetes installer that works everywhere

USAGE:
   rke [global options] command [command options] [arguments...]

VERSION:
   v1.0.0

AUTHOR(S):
   Rancher Labs, Inc.

COMMANDS:
   up       Bring the cluster up
   remove   Teardown the cluster and clean cluster nodes
   version  Show cluster Kubernetes version
   config   Setup cluster configuration
   etcd     etcd snapshot save/restore operations in k8s cluster
   cert     Certificates management for RKE cluster
   encrypt  Manage cluster encryption provider keys
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d    Debug logging
   --quiet, -q    Quiet mode, disables logging and only critical output will be printed
   --help, -h     show help
   --version, -v  print the version

The post Machine Learning Pipelines: Setting Up On-premise Kubernetes appeared first on SitePoint.

What SSL Is, and Which Certificate Type is Right for You

Jan 29, 2020

Description:

What SSL Is, and Which Certificate Type is Right for You

This article was created in partnership with GoGetSSL. Thank you for supporting the partners who make SitePoint possible.

Over the last decade, the rate of cyber crime has risen sharply. Already, many reputable business organizations and government agencies that haven't implemented sufficient online security have been caught with their pants down. Google has started taking a strong stand against websites that don’t use HTTPS. Website visitors will be notified if they’re about to submit any information over an unsecured connection.

In this article, you’ll learn how to protect your customers and your business from privacy invasion and data theft. You’ll learn how to use SSL technology to secure your websites and your applications from leaking sensitive data to eavesdroppers.

I won't be able to show you how to install SSL, as that's an advanced topic. You can find more information on the installation process here.

How SSL Works in Plain English

Imagine you're in your hotel room, on your laptop, connected to the hotel's Wi-Fi. You're about to log in to your bank's online portal. Meanwhile, a nefarious hacker has cleverly booked a room next to yours and has set up a simple workstation that listens to all network traffic in the hotel building. All traffic using the HTTP protocol can be seen by the hacker in plain text.

Assuming the bank's website is using only HTTP, form details such as user name and password will be seen by the hacker as soon as you press submit. So how do we protect this data? The answer is obviously encryption. Encryption of data involves converting plain text data to something that looks garbled — aka encrypted data. To encrypt plain text data, you need what's called an encryption algorithm and a cipher key.

Let's say you were to encrypt the following data:

Come on over for hot dogs and soda!

It will look something like this in encrypted form:

wUwDPglyJu9LOnkBAf4vxSpQgQZltcz7LWwEquhdm5kSQIkQlZtfxtSTsmaw
q6gVH8SimlC3W6TDOhhL2FdgvdIC7sDv7G1Z7pCNzFLp0lgB9ACm8r5RZOBi
N5ske9cBVjlVfgmQ9VpFzSwzLLODhCU7/2THg2iDrW3NGQZfz3SSWviwCe7G
mNIvp5jEkGPCGcla4Fgdp/xuyewPk6NDlBewftLtHJVf
=PAb3

Decrypting the above message without the cipher key can take more than a lifetime using current computing power. No one can read it unless they have the cipher key that was used to encrypt it. This type of encryption is known as symmetric encryption. Now that we've figured out how to protect data, we need a safe way to transmit the cipher key to the recipient of the message. We can do this by using an asymmetric encryption system known as public key cryptography.
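Before moving on, here's what that symmetric step looks like in code. This is a minimal sketch using Node's built-in crypto module; the choice of AES-256-CBC is an assumption for illustration, not something prescribed by this article:

const crypto = require('crypto');

const key = crypto.randomBytes(32); // the 256-bit cipher key; keep it secret
const iv = crypto.randomBytes(16);  // initialization vector

// Encrypt the plain text with the key
const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
let encrypted = cipher.update('Come on over for hot dogs and soda!', 'utf8', 'hex');
encrypted += cipher.final('hex');

// Anyone holding the same key (and IV) can decrypt it
const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted); // Come on over for hot dogs and soda!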

Public Key Cryptography uses a pair of mathematically related cipher keys:

Public key: can be safely shared with anyone
Private key: must never be transmitted; stored in secret

When one key is used to encrypt, the other one is used to decrypt. The same key can't be used to decrypt what it encrypted. Below is a depiction of how it works:

public key algorithm
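To make that flow concrete, here's a minimal sketch using Node's built-in crypto module (the 2048-bit RSA key size is an assumption for illustration):

const crypto = require('crypto');

// Generate a mathematically related key pair
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Anyone may encrypt with the public key...
const encrypted = crypto.publicEncrypt(publicKey, Buffer.from('a secret cipher key'));

// ...but only the private key holder can decrypt the result
const decrypted = crypto.privateDecrypt(privateKey, encrypted);
console.log(decrypted.toString()); // a secret cipher key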

However, we can't trust any public key issued to us since they can be generated by anyone. To ensure authenticity of public keys, they need to be packaged in what's called an SSL certificate. This is a signed digital file that contains the following information:

Subject's name: individual, organization or machine name
Public key
Digital signature (certificate's fingerprint)
Issuer (the entity that signed the certificate)
Valid dates (start and expiry)

I've only listed the necessities. SSL certificates usually contain more information. Here's a real-world example:

SSL certificate example

As you can see, the above certificate has been signed (see the thumbprint section). A digital signature is simply an encrypted hash of a file. Let's first explain what a hash is. Say you have a 100-word document, and you run it through a hashing program. You'll get the following hash:

46798b5cfca45c46a84b7419f8b74735

If you change anything in the document, even adding a single full stop, a completely new hash will be generated when you run the hashing function again:

bc527343c7ffc103111f3a694b004e2f
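The sample hashes above are 32 hexadecimal characters, which suggests MD5. You can reproduce the effect with Node's crypto module; the sentence below is a stand-in for the 100-word document:

const crypto = require('crypto');

const md5 = (text) => crypto.createHash('md5').update(text).digest('hex');

console.log(md5('The quick brown fox jumps over the lazy dog'));
// 9e107d9d372bb6826bd81d3542a419d6
console.log(md5('The quick brown fox jumps over the lazy dog.'));
// e4d909c290d0fb1ca068ffaddf22cbd0 (one extra full stop, a completely different hash)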

A mismatch between the hash sent and the one generated means that the file has been altered. This is the first line of defense for ensuring that an SSL certificate hasn’t been altered. However, we need to verify that the sent hash was created by the issuer of the certificate. This is done by encrypting the hash using the issuer's private key. When we perform a local hash of the certificate, then decrypt the certificate's signature to obtain the sent hash, we can compare the two. If there’s a match, it means:

the certificate hasn’t been altered by someone else
we have proof the certificate came from the issuer, since we've successfully decrypted the signature using their public key
we can trust the authenticity of the public key attached in the SSL certificate.

signature verification

Now, you may be wondering where we get the issuer's public key and why we should trust it. Well, the issuer's public key already comes pre-installed inside our operating systems and browsers. An issuer is a trusted certificate authority (CA) that signs certificates in compliance with the official CA/Browser Forum guidelines and NIST recommendations. For example, here’s a list of trusted issuers/CAs that you’ll find on Microsoft's Operating System. Even smartphones and tablets have a similar list pre-installed on the OS and browser.

According to a survey conducted by W3Techs in May 2018, the following issuers account for about 90% of valid certificates signed globally:

IdenTrust
Comodo
DigiCert (which acquired Symantec's certificate business)
GoDaddy
GlobalSign

Now that you have an understanding of encryption and SSL technology, let's go over how you can safely sign in to your bank's portal using HTTPS without the hacker next door reading your traffic.

Your laptop's browser starts by requesting the bank's servers for its SSL certificate. The server sends it. Then the browser checks the certificate is authentic against a list of trusted CAs. It also checks that it hasn’t expired and hasn’t been revoked. If everything checks out, the browser generates a new cipher key (also known as the session key). Using the public key found on the SSL certificate, it’s encrypted and then sent to the server. The server decrypts the session key using its private key. From now on, all communication sent back and forth will be encrypted using the session key. Symmetric encryption is faster than asymmetric.

This means both form data going from the laptop, and HTML data coming from the server, will be encrypted using a cipher key that the hacker won't have access to. All that will be seen in the captured traffic logs will be garbled letters and numbers. Your information has now been protected and kept private from prying eyes.

Now that you understand how SSL in general works, let's move on to the next section and look at the different types of SSL certificates we can use.

The post What SSL Is, and Which Certificate Type is Right for You appeared first on SitePoint.

Leave an Impression with Print Peppermint’s Fresh Designs & Premium Paper

Jan 28, 2020

Description:

This article was created in partnership with Print Peppermint. Thank you for supporting the partners who make SitePoint possible.

Everyone has a business card—is yours any different? Designing the ideal card for your business is a project that deserves real time and thought. For a fresh approach, consider Print Peppermint. Their in-house design services ensure your business cards will be absolutely custom and unique, and their high-end special finishes add a touch of class.

They’re not the new kids on the block. Over the last seven years, they’ve produced thousands of innovative print projects, and have attracted the business of industry-leading creative companies such as Vice, Google, Geico, and Wendy’s. They’ve even printed circular die-cut business cards for Grammarly—one of our office-favorite web apps! They can hand-craft something unique for you, too.

You could get started with their Free Online Design Tool. It’s an easy-to-use online app that will help you design your own business cards, posters, flyers, invitations, and greeting cards from a blank canvas.

But you’ll get the best results if you leave the design work to the experts, and hire one of their in-house graphic designers. Give them as much input as you can—sketches, ideas, links to anything that inspires you—and they’ll bring it to life in a fresh, unique way. They can even design a logo for your business. Every single order is hand-proofed, no matter how large or small.

But what you put on your business card is just the start. Give some serious thought to the card itself.

First, the paper you print on can make a very tactile impression. Choose from a meticulously-curated family of thick and premium papers, including 100% Cotton, Soft-Touch, Triplex Layered, Clear-Frosted Plastic, Onyx Black Suede, Recycled Kraft, and many more.

Second, choose from a wide range of special finishes. They stand out and make a strong statement about your business. These include foil stamping, die-cutting, embossing, letterpress, edge painting, and more.

The ideal combination of design, paper, and finish creates a strong impression. Here are a few examples of what you can achieve.

A “letterpress finish” can be engaging, and debossing makes an impression, literally. Here’s blind debossing on thick, cotton paper.

By using clear plastic, your business cards will be as durable as credit cards. Frosted PVC makes a statement to your customers that you have high standards and care about quality.

Metallic foil stamped business cards have a premium look. Choose from 15 colors—here’s one in copper. You can even make photos metallic.

Raised foil feels more dramatic, similar to embossing. Here’s how it looks in gold on premium suede, soft-touch paper.

If you find rectangles boring, you can have your business cards die cut to absolutely any custom shape. Imagine the possibilities!

And even the edges of your cards can be custom painted.

The right business card ensures the perfect introduction to your business, so take your time deciding on a design. Possibly the best way to get inspiration is to order their $10 Sample Pack. You’ll be able to touch, smell and explore over twenty premium products and papers, and as a bonus you’ll get $25 off your order. That sounds like a good deal to me.

Another excellent source of inspiration is the company’s design blog. It’s filled with articles on topics like Graphic Design and Photography, but you’ll want to start in the Business Card Inspiration section. If you get stuck designing your own card, it can be really helpful to look at what other people have come up with. The blog contains over 200 articles that explore recently-completed business card projects.

How do you feel about your current business card now? If you’re ready for something new, all Print Peppermint products are supported with a 100% money-back quality guarantee. They offer amazing group-order discounts for businesses and organizations with multiple employees.

Armed with the right business card, all you need now is to perfect how to hand out your business card with style!

The post Leave an Impression with Print Peppermint’s Fresh Designs & Premium Paper appeared first on SitePoint.

How to Install MySQL

Jan 28, 2020

Description:

How to Install MySQL

Almost all web applications require server-based data storage, and MySQL continues to be the most-used database solution. This article discusses various options for using MySQL on your local system during development.

MySQL is a free, open-source relational database. MariaDB is a fork of the database created in 2010 following concerns about the Oracle acquisition of MySQL. (It's functionally identical, so most of the concepts described in this article also apply to MariaDB.)

While NoSQL databases have surged in popularity in recent years, relational data is generally more practical for the majority of applications. That said, MySQL also supports NoSQL-like data structures such as JSON fields, so you can enjoy the benefits of both worlds.

The following sections examine three primary ways to use MySQL in your local development environment:

cloud-based solutions
using Docker containers
installing on your PC.

Cloud-based MySQL

MySQL services are offered by AWS, Azure, Google Cloud, Oracle, and many other specialist hosting services. Even low-cost shared hosts offer MySQL with remote HTTPS or tunneled SSH connections. You can therefore use a MySQL database remotely in local development. The benefits:

no database software to install or manage
your production environment can use the same system
more than one developer can easily access the same data
it's ideal for those using cloud-based IDEs or lower-specification devices such as Chromebooks
features such as automatic scaling, replication, sharding, and backups may be included.

The downsides:

set-up can still take considerable time
connection libraries and processes may be subtly different across hosts
experimentation is more risky; any developer can accidentally wipe or alter the database
development will cease when you have no internet connection
there may be eye-watering usage costs.

A cloud-based option may be practical for those with minimal database requirements or large teams working on the same complex datasets.

Run MySQL Using Docker

Docker is a platform which allows you to build, share, and run applications in containers. Think of a container as an isolated virtual machine with its own operating system, libraries, and the application files. (In reality, containers are lightweight processes which share resources on the host.)

A Docker image is a snapshot of a file system which can be run as a container. The Docker Hub provides a wide range of images for popular applications and databases, including MySQL and MariaDB. The benefits:

all developers can use the same Docker image on macOS, Linux, and Windows
MySQL installation, configuration, and maintenance is minimal
the same base image can be used in development and production environments
developers retain the benefits of local development and can experiment without risk.

Docker is beyond the scope of this article, but key points to note:

Docker is a client–server application. The server is responsible for managing images and containers and can be controlled via a REST API using the command line interface. You can therefore run the server daemon anywhere and connect to it from another machine.
Separate containers should be used for each technology your web application requires. For example, your application could use three containers: a PHP-enabled Apache web server, a MySQL database, and an Elasticsearch engine.
By default, containers don’t retain state. Data saved within a file or database will be lost the next time the container restarts. Persistency is implemented by mounting a volume on the host.
Each container can communicate with others in their own isolated network. Specific ports can be exposed to the host machine as necessary.
A commercial, enterprise edition of Docker is available. This article refers to the open-source community edition, but the same techniques apply.

Install Docker

Instructions for installing the latest version of Docker on Linux are available on Docker Docs. You can also use official repositories, although these are likely to have older editions. For example, on Ubuntu:

sudo apt-get update
sudo apt-get remove docker docker-engine docker.io
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker

Installation will vary on other editions of Linux, so search the Web for appropriate instructions.

Docker CE Desktop for macOS Sierra 10.12 and above and Docker CE Desktop for Windows 10 Professional are available as installable packages. You must register at Docker Hub and sign in to download.

Docker on Windows 10 uses the Hyper-V virtualization platform, which you can enable from the Turn Windows features on or off panel accessed from Programs and Features in the Control Panel. Docker can also use the Windows Subsystem for Linux 2 (WSL2 — currently in beta).

To ensure Docker can access the Windows file system, choose Settings from the Docker tray icon menu, navigate to the Shared Drives pane, and check which drives the server is permitted to use.

Docker shared drives on Windows

Check Docker has successfully installed by entering docker version at your command prompt. Optionally, try docker run hello-world to verify Docker can pull images and start containers as expected.

Run a MySQL Container

To make it easier for Docker containers to communicate, create a bridged network named dbnet or whatever name you prefer (this step can be skipped if you just want to access MySQL from the host device):

docker network create --driver bridge dbnet

Now create a data folder on your system where MySQL tables will be stored — such as mkdir data.

The most recent MySQL 8 server can now be launched with:

docker run -d --rm --name mysql --net dbnet -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysecret -v $PWD/data:/var/lib/mysql mysql:8

Arguments used:

-d runs the container as a background service.
--rm removes the container when it stops running.
--name mysql assigns a name of mysql to the container for easier management.
-p 3306:3306 forwards the container port to the host. If you wanted to use port 3307 on the host, you would specify -p 3307:3306.
-e defines an environment variable; in this case, the default MySQL root user password is set to mysecret.
-v mounts a volume so the /var/lib/mysql MySQL data folder in the container will be stored at the current folder's data subfolder on the host.

$PWD is the current folder, but this only works on macOS and Linux. Windows users must specify the whole path using forward slash notation — such as /c/mysql/data.

The first time you run this command, MySQL will take several minutes to start as the Docker image is downloaded and the MySQL container is configured. Subsequent restarts will be instantaneous, presuming you don’t delete or change the original image. You can check progress at any time using:

docker logs mysql

Using the Container MySQL Command-line Tool

Once started, open a bash shell on the MySQL container using:

docker exec -it mysql bash

Then connect to the MySQL server as the root user:

mysql -u root -pmysecret

-p is followed by the password set in Docker's -e argument shown above. Don’t add a space!

Any MySQL commands can now be used — such as show databases;, create database new; and so on.

Use a MySQL client

Any MySQL client application can connect to the server on port 3306 of the host machine.

If you don't have a MySQL client installed, Adminer is a lightweight PHP database management tool which can also be run as a Docker container!

docker run -d --rm --name adminer --net dbnet -p 8080:8080 adminer

Once started, open http://localhost:8080 in your browser and enter mysql as the server name, root as the username, and mysecret as the password:

Adminer

Databases, users, tables, and associated settings can now be added, edited, or removed.
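As an aside, the two containers above could equally be described in a single Docker Compose file. Here's a minimal sketch, with the image names, password, ports, and volume taken from the commands shown earlier:

version: '3'
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: mysecret
    ports:
      - "3306:3306"
    volumes:
      - ./data:/var/lib/mysql
  adminer:
    image: adminer
    ports:
      - "8080:8080"

Running docker-compose up starts both services on a shared network, so Adminer can still reach the database using mysql as the server name.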

The post How to Install MySQL appeared first on SitePoint.

5 Web Design Trends for 2020 with Real Staying Power

Jan 27, 2020

Description:

5 Web Design Trends for 2020 with Real Staying Power

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

The start of a new year is always an exciting time. Everyone makes pledges to be better, to feel better, to do better. But like many New Years’ resolutions that quickly get tossed aside or forgotten, fad-like web design trends have a tendency to follow a similar path.

That’s because it’s easy to get wrapped up in what’s cool now instead of focusing on what we can do to make a website grow stronger with each passing day.

5 Web Design Trends That Have Real Staying Power

Have a look at BeTheme and its 500+ pre-built websites. These clean and classic designs have real staying power.

That’s because they’re centered around strong design principles. Not flashy color palettes, hip font choices, or technology your users aren’t ready for or don’t need. And that’s the key, right?

You’re building websites for the end user — not for yourself.

Spending your time on web design trends that come and go makes barely any impact on your visitors. Instead, you can take advantage of these tried, tested, and long-lasting web design trends …

Trend #1: Remove the Excess and Create a Super Minimal Navigation

One of the awesome advantages of the Web moving towards a more mobile-first experience is that websites viewed on desktop have become simpler and easier to navigate, too.

With more consumers flocking to the Web on their smartphones, website menus have had to shrink, not only in terms of the space they occupy, but also in the number of links they contain.

In 2020 and onwards, websites will have only the most essential of pages in the primary menu. Secondary links will be relegated to areas like the footer and sidebar. Consequently, this will help to clean up on-page designs. They won’t be so littered with call-to-action buttons pointing to internal pages.

The BeRepair pre-built site is a beautiful example of this, with its navigation tucked away under a hamburger menu icon:

BeRepair pre-built site

When opened, the pop-out menu follows the trend of less-is-more. It has a short and simple-to-navigate list of links surrounded by a bunch of white space:

menu

Non-traditional navigations can also benefit from this form of minimalism. BeGarden demonstrates this with its left-aligned menu:

left-aligned menu

Trend #2: Bring Greater Focus to Your Message with White Space

Users are overloaded with content, offers, and other distractions every time they hit the Internet. Do you really want your website to be one more thing that causes them stress and, in turn, indecisiveness?

When you design with white space, it gets you out of the habit of trying to put as much information and as many options into a single section or page as possible. Instead, it encourages you to do more with less.

Concise messaging + Wide, open spaces = Good for your conversion rate.

Take, for instance, this brief but powerful video banner at the bottom of BeWine:

video banner

You can find other ways to let strong yet simple imagery tell your brand’s story, as the BeWeddingPlanner site does:

tell your brand’s story

The post 5 Web Design Trends for 2020 with Real Staying Power appeared first on SitePoint.

Trends in Python: What’s Hot in the Hottest Language Today

Jan 23, 2020

Description:

Trends in Python: What's Hot in the Hottest Language Today

Python is arguably the programming language of the moment. We'll explore why that might be the case, what the current trends within the Python community are, and what packages and tools you might want to get acquainted with if you don't want to be left behind.

If you were pondering what programming language you should be investing time and effort in, you can stop searching now. It’s Python.

Alright, that was an oversimplification. Admittedly, you aren't going to jump into a Java project that's been in development for years and port all that code to Python just because it's “hot”. Programming languages are a means to an end, and you have to carefully consider the cost/benefit of adopting a given technology.

That said, when things are massively moving in a certain direction, that has to mean something. And for some time already, things have been moving towards Python.

Want to level up your Python skills and stand out in a rapidly growing market? Check out SitePoint Premium! You'll find books to get you started (like The Python Apprentice) and develop job-ready skills (like Front-end Testing in Python). Enhance your skills with The Python Master, and access a growing library of over 400 books and courses on web design and development.

Hail the King

Practically every undergraduate IT class today is taught with Python — and not just computer science introduction courses offered by companies or by universities. Even highly specialized courses on data science, AI, or quantitative finance — that not long ago would have used languages such as R, MATLAB, or C++ — are now more often than not entirely taught in Python.

The post Trends in Python: What’s Hot in the Hottest Language Today appeared first on SitePoint.

Pair Programming: Benefits, Tips & Advice for Making it Work

Jan 22, 2020

Description:

Pair Programming: Tips and Advice for Making it Work

Pair Programming — a pair that's greater than the sum of its parts. You may have heard about pair programming and wondered whether it was worth trying in your workplace. On the surface it sounds simple, but two developers sitting together isn't all it takes to achieve productive pairing.

Logistical and personal hurdles such as scheduling, tool choices, and distractions can stop you from getting the most out of pairing. But the potential advantages can make it worth the trouble of recognizing and surmounting these challenges.

Why Pair?

How could it be more productive to take two programmers who were previously working on separate projects and have them work together on a single project? Won't everything take twice as long? To an outsider the idea of pairing may sound counterproductive at first, but the advantages become apparent when you start to think about why we code and what we're trying to accomplish.

Programming is not about churning out the most lines of code in the shortest amount of time, or even delivering the most features within increasingly tight deadlines. You can have engineers working around the clock pushing new features into production, but how productive are they really if those features are cranked out by individuals working in isolation according to their own unique understanding of the overall architecture? The resulting code is likely to be riddled with technical debt: hidden bugs, performance issues, idiosyncratic syntax, and inefficient designs that make the code that much more difficult and time consuming to modify when one of those flaws surfaces.

You need your code to be meaningful and well written so that it works together seamlessly and can be modified easily. You need it to encapsulate the desired functionality so that your end product behaves properly and performs as expected. You need it to be resilient so it can withstand organizational changes that are a natural part of working together, as well as environmental changes and new customer expectations that may make today's workable solution obsolete without much warning.

In order to make that possible, developers need to be able to agree about fundamental requirements clearly, get up to speed quickly with whatever new or established technologies may be required, and focus without interruption to test out creative solutions and develop a product that's worth putting in front of the customer.

These are the real-world challenges that pair programming helps to address. When two developers work together in a pair, the quality of the code they produce improves along with their shared understanding of how it works. This makes it easier for the next person who reads the code to pick it up and modify it when necessary, and it reduces the danger that the only person on the team who knows how part of the code works may win the lottery and leave the team, taking that precious knowledge with them.

The time cost in mythical work hours is nowhere near the 50% that may seem intuitive if you tried to equate the intricate art of coding with repetitive assembly line work. Some empirical studies have concluded that pair programming might result in about a 15% increase in the time it takes two programmers to accomplish the same tasks, compared with working alone, but the resulting code will also be of much higher quality, with about 15% fewer observable defects to fix. Combine this with the shared ownership, deeper engagement, and faster problem solving that come from having more than one mind engaged in solving a problem, and it's clear why pair programming is a popular approach.

What Exactly is Pairing?

So what does it take for two developers working together to achieve the productivity and quality improvements that come from pairing? It's mostly a matter of learning how to work collaboratively, which is not necessarily the way most of us learned to code.

By definition, pair programming doesn't start until you have two people working together on one computer. But how does that work in practice?

Two People …

The fundamental element of pair programming is working together with your pair. When a task is accepted, it needs to be shared between both of the people working on it, and they both need to be fully engaged in the task while they’re pairing on it. That means that they both need to understand the requirements the same way, and work together to come to a shared understanding of how they want to go about meeting them.

Pairing helps people get better at verbalizing their ideas and expectations. The implicit understanding you have in your head when you're working alone needs to be communicated so both you and your pair know you're on the same page. Getting as explicit as possible about the work and the approach up front will help make the pairing experience much more agreeable. Pairing involves a lot of talking, as that's the best way to keep two minds actively engaged in the problem at the same time.

For this reason, pairing is often associated with agile story writing, in which requirements for a feature are defined in consistent, plain language that can be understood equally well by Product and Engineering people with little room for ambiguity. Often pairs will ask for stories to be spelled out in Gherkin, which is a way of using common, non-technical phrases that are easy to translate into automated tests, so the pair can verify and demonstrate that each feature works as expected.

Writing in Gherkin means taking a feature and breaking it down into a simple story about a customer who wants something that this feature will deliver:

As <a customer of the product>
I want <something desirable>
So that <I can achieve a specific goal>

Then all the acceptance criteria are written out in a consistent syntax, defining the anticipated permutations and scenarios associated with that story:

Given <a customer in a particular state>
When <something specific happens>
Then <there is a specific outcome>

Given <a customer in a particular state>
When <something different happens>
Then <there is a different specific outcome>

etc.
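To make that concrete, here's one hypothetical story and scenario written in that style (the product and details are invented for illustration):

As a registered shopper
I want to save items to a wishlist
So that I can find them again on my next visit

Given a signed-in shopper viewing a product
When they click "Add to wishlist"
Then the product appears in their wishlist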

Of course, it's not mandatory to use this exact phrasing, but if the requirements of a feature can't be expressed in this minimalist way, it's possible that the expectations are ambiguous. That's a potential red flag that's easier for a pair of programmers to spot when they start to discuss what's needed.

As soon as a pair accepts a story to work on, they should be able to define how they will know they are done and how they're going to prove it. From there, they can start to figure out together how best to approach the job.

In fact, the pair working on a feature should know enough up front that they could start by writing an automated test based on the first acceptance criterion before writing any code, making sure the new test fails, and then writing just enough code to make that test pass before refactoring and then starting on the next acceptance criterion. This approach is known as behavior-driven development, and while it’s not part of the definition of pair programming, it harmonizes beautifully, along with test-driven development.
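As a sketch of that loop, assuming Jest as the test runner and a made-up discount feature, the pair writes the failing test first and then the minimal code to pass it:

// 1. Written first, from the first acceptance criterion; it fails ("red")
test('orders over $100 get a 10% discount', () => {
  expect(discountFor(120)).toBe(12);
});

// 2. Then just enough code to make the test pass ("green"), before refactoring
function discountFor(orderTotal) {
  return orderTotal > 100 ? orderTotal * 0.1 : 0;
}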

The post Pair Programming: Benefits, Tips & Advice for Making it Work appeared first on SitePoint.

Using MySQL with Node.js and the mysql JavaScript Client

Jan 20, 2020

Description:

Using MySQL with Node.js and the mysql JavaScript Client

NoSQL databases are rather popular among Node developers, with MongoDB (the "M" in the MEAN stack) leading the pack. When starting a new Node project, however, you shouldn't just accept Mongo as the default choice. Rather, the type of database you choose should depend on your project's requirements. If, for example, you need dynamic table creation, or real-time inserts, then a NoSQL solution is the way to go. If your project deals with complex queries and transactions, on the other hand, an SQL database makes much more sense.

In this tutorial, we'll have a look at getting started with the mysql module — a Node.js client for MySQL, written in JavaScript. I'll explain how to use the module to connect to a MySQL database and perform the usual CRUD operations, before looking at stored procedures and escaping user input.

Quick Start: How to Use MySQL in Node

If you've arrived here looking for a quick way to get up and running with MySQL in Node, we've got you covered!

Here's how to use MySQL in Node in five easy steps:

1. Create a new project: mkdir mysql-test && cd mysql-test.
2. Create a package.json file: npm init -y.
3. Install the mysql module: npm install mysql.
4. Create an app.js file and copy in the snippet below (editing the placeholders as appropriate).
5. Run the file: node app.js. Observe a “Connected!” message.

const mysql = require('mysql');
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'database name'
});

connection.connect((err) => {
  if (err) throw err;
  console.log('Connected!');
});

Installing the mysql Module

Now let's take a closer look at each of those steps.

mkdir mysql-test
cd mysql-test
npm init -y
npm install mysql

First of all we're using the command line to create a new directory and navigate to it. Then we're creating a package.json file using the command npm init -y. The -y flag means that npm will use defaults without going through an interactive process.

This step also assumes that you have Node and npm installed on your system. If this is not the case, then check out this SitePoint article to find out how to do that: Install Multiple Versions of Node.js using nvm.

After that, we're installing the mysql module from npm and saving it as a project dependency. Project dependencies (as opposed to devDependencies) are those packages required for the application to run. You can read more about the differences between the two here.

If you need further help using npm, then be sure to check out this guide, or ask in our forums.

Getting Started

Before we get on to connecting to a database, it's important that you have MySQL installed and configured on your machine. If this is not the case, please consult the installation instructions on their home page.

The next thing we need to do is to create a database and a database table to work with. You can do this using a graphical interface, such as Adminer, or using the command line. For this article I'll be using a database called sitepoint and a table called authors. Here's a dump of the database, so that you can get up and running quickly if you wish to follow along:

CREATE DATABASE sitepoint CHARACTER SET utf8 COLLATE utf8_general_ci;
USE sitepoint;

CREATE TABLE authors (
  id int(11) NOT NULL AUTO_INCREMENT,
  name varchar(50),
  city varchar(50),
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5;

INSERT INTO authors (id, name, city) VALUES
(1, 'Michaela Lehr', 'Berlin'),
(2, 'Michael Wanyoike', 'Nairobi'),
(3, 'James Hibbard', 'Munich'),
(4, 'Karolina Gawron', 'Wrocław');


Connecting to the Database

Now, let's create a file called app.js in our mysql-test directory and see how to connect to MySQL from Node.js.

const mysql = require('mysql');

// First you need to create a connection to the database
// Be sure to replace 'user' and 'password' with the correct values
const con = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
});

con.connect((err) => {
  if (err) {
    console.log('Error connecting to Db');
    return;
  }
  console.log('Connection established');
});

con.end((err) => {
  // The connection is terminated gracefully
  // Ensures all remaining queries are executed
  // Then sends a quit packet to the MySQL server.
});

Now open up a terminal and enter node app.js. Once the connection is successfully established you should be able to see the “Connection established” message in the console. If something goes wrong (for example, you enter the wrong password), a callback is fired, which is passed an instance of the JavaScript Error object (err). Try logging this to the console to see what additional useful information it contains.

Using nodemon to Watch the Files for Changes

Running node app.js by hand every time we make a change to our code is going to get a bit tedious, so let's automate that. This part isn't necessary to follow along with the rest of the tutorial, but will certainly save you some keystrokes.

Let's start off by installing the nodemon package. This is a tool that automatically restarts a Node application when file changes in a directory are detected:

npm install --save-dev nodemon

Now run ./node_modules/.bin/nodemon app.js and make a change to app.js. nodemon should detect the change and restart the app.

Note: we're running nodemon straight from the node_modules folder. You could also install it globally, or create an npm script to kick it off.

Executing Queries

Reading

Now that you know how to establish a connection to a MySQL database from Node.js, let's see how to execute SQL queries. We'll start by specifying the database name (sitepoint) in the createConnection command:

const con = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'sitepoint'
});

Once the connection is established, we'll use the con variable to execute a query against the database table authors:

con.query('SELECT * FROM authors', (err, rows) => {
  if (err) throw err;
  console.log('Data received from Db:');
  console.log(rows);
});

When you run app.js (either using nodemon or by typing node app.js into your terminal), you should be able to see the data returned from the database logged to the terminal:

[ RowDataPacket { id: 1, name: 'Michaela Lehr', city: 'Berlin' },
  RowDataPacket { id: 2, name: 'Michael Wanyoike', city: 'Nairobi' },
  RowDataPacket { id: 3, name: 'James Hibbard', city: 'Munich' },
  RowDataPacket { id: 4, name: 'Karolina Gawron', city: 'Wrocław' } ]

Data returned from the MySQL database can be parsed by simply looping over the rows array.

rows.forEach((row) => {
  console.log(`${row.name} lives in ${row.city}`);
});

This gives you the following:

Michaela Lehr lives in Berlin
Michael Wanyoike lives in Nairobi
James Hibbard lives in Munich
Karolina Gawron lives in Wrocław
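One more thing worth noting before we wrap up this excerpt: the mysql module can also escape user input for you via ? placeholders. A minimal sketch, where the city value stands in for untrusted input:

// Values passed in the array are escaped by the mysql module
const city = 'Berlin'; // imagine this arrived from a form field
con.query('SELECT * FROM authors WHERE city = ?', [city], (err, rows) => {
  if (err) throw err;
  console.log(rows);
});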

The post Using MySQL with Node.js and the mysql JavaScript Client appeared first on SitePoint.

React with TypeScript: Best Practices

Jan 16, 2020

Description:

React with TypeScript: Best Practices

React and TypeScript are two awesome technologies used by a lot of developers these days. Knowing how to do things can get tricky, and sometimes it's hard to find the right answer. Not to worry. We've put together the best practices along with examples to clarify any doubts you may have.

Let's dive in!

How React and TypeScript Work Together

Before we begin, let's revisit how React and TypeScript work together. React is a "JavaScript library for building user interfaces", while TypeScript is a "typed superset of JavaScript that compiles to plain JavaScript." By using them together, we essentially build our UIs using a typed version of JavaScript.

The reason you might use them together would be to get the benefits of a statically typed language (TypeScript) for your UI. This means more safety and fewer bugs shipping to the front end.

Does TypeScript Compile My React Code?

A common question that’s always good to review is whether TypeScript compiles your React code. The way TypeScript works is similar to this interaction:

TS: "Hey, is this all your UI code?"
React: "Yup!"
TS: "Cool! I'm going to compile it and make sure you didn't miss anything."
React: "Sounds good to me!"

So the answer is yes, it does! But later, when we cover the tsconfig.json settings, most of the time you'll want to use "noEmit": true. What this means is that TypeScript will not emit JavaScript output after compilation. This is because, typically, we're just utilizing TS to do our type-checking.

The output is handled, in a CRA setting, by react-scripts. We run yarn build and react-scripts bundles the output for production.

To recap, TypeScript compiles your React code to type-check your code. It doesn’t emit any JavaScript output (in most scenarios). The output is still similar to a non-TypeScript React project.
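If you want to run that type-check step on demand, one common pattern (an assumption here, not something from the original article) is an npm script in package.json that invokes the compiler with the no-emit flag:

"scripts": {
  "type-check": "tsc --noEmit"
}

Running yarn type-check then surfaces type errors without producing any JavaScript.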

Can TypeScript Work with React and webpack?

Yes, TypeScript can work with React and webpack. Lucky for you, the official TypeScript Handbook has a guide on that.

Hopefully, that gives you a gentle refresher on how the two work together. Now, on to best practices!

Best Practices

We've researched the most common questions and put together this handy list of the most common use cases for React with TypeScript. This way, you can follow best practices in your projects by using this article as a reference.

Configuration

One of the least fun, yet most important parts of development is configuration. How can we set things up in the shortest amount of time that will provide maximum efficiency and productivity? We'll discuss project setup including:

tsconfig.json
ESLint
Prettier
VS Code extensions and settings.

Project Setup

The quickest way to start a React/TypeScript app is by using create-react-app with the TypeScript template. You can do this by running:

npx create-react-app my-app --template typescript

This will get you the bare minimum to start writing React with TypeScript. A few noticeable differences are:

the .tsx file extension
the tsconfig.json
the react-app-env.d.ts

The tsx is for "TypeScript JSX". The tsconfig.json is the TypeScript configuration file, which has some defaults set. The react-app-env.d.ts references the types of react-scripts, and helps with things like allowing for SVG imports.

tsconfig.json

Lucky for us, the latest React/TypeScript template generates tsconfig.json for us. However, they add the bare minimum to get started. We suggest you modify yours to match the one below. We've added comments to explain the purpose of each option as well:

{ "compilerOptions": { "target": "es5", // Specify ECMAScript target version "lib": [ "dom", "dom.iterable", "esnext" ], // List of library files to be included in the compilation "allowJs": true, // Allow JavaScript files to be compiled "skipLibCheck": true, // Skip type checking of all declaration files "esModuleInterop": true, // Disbles namespace imports (import * as fs from "fs") and enables CJS/AMD/UMD style imports (import fs from "fs") "allowSyntheticDefaultImports": true, // Allow default imports from modules with no default export "strict": true, // Enable all strict type checking options "forceConsistentCasingInFileNames": true, // Disallow inconsistently-cased references to the same file. "module": "esnext", // Specify module code generation "moduleResolution": "node", // Resolve modules using Node.js style "resolveJsonModule": true, // Include modules imported with .json extension "isolatedModules": true, // Transpile each file as a separate module "noEmit": true, // Do not emit output (meaning do not compile code, only perform type checking) "jsx": "react" // Support JSX in .tsx files "sourceMap": true, // *** Generate corrresponding .map file *** "declaration": true, // *** Generate corresponding .d.ts file *** "noUnusedLocals": true, // *** Report errors on unused locals *** "noUnusedParameters": true, // *** Report errors on unused parameters *** "experimentalDecorators": true // *** Enables experimental support for ES decorators *** "incremental": true // *** Enable incremental compilation by reading/writing information from prior compilations to a file on disk *** "noFallthroughCasesInSwitch": true // *** Report errors for fallthrough cases in switch statement *** }, "include": [ "src/**/*" // *** The files TypeScript should type check *** ], "exclude": ["node_modules", "build"] // *** The files to not type check *** }

The additional recommendations come from the react-typescript-cheatsheet community (https://github.com/typescript-cheatsheets/react-typescript-cheatsheet) and the explanations come from the Compiler Options docs in the Official TypeScript Handbook. This is a wonderful resource if you want to learn about other options and what they do.

ESLint/Prettier

In order to ensure that your code follows the rules of the project or your team, and the style is consistent, it's recommended you set up ESLint and Prettier. To get them to play nicely, follow these steps to set it up.

Install the required dev dependencies:

yarn add eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin eslint-plugin-react --dev

Create a .eslintrc.js file at the root and add the following:

module.exports = {
  parser: '@typescript-eslint/parser', // Specifies the ESLint parser
  extends: [
    'plugin:react/recommended', // Uses the recommended rules from @eslint-plugin-react
    'plugin:@typescript-eslint/recommended', // Uses the recommended rules from @typescript-eslint/eslint-plugin
  ],
  parserOptions: {
    ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
    sourceType: 'module', // Allows for the use of imports
    ecmaFeatures: {
      jsx: true, // Allows for the parsing of JSX
    },
  },
  rules: {
    // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
    // e.g. "@typescript-eslint/explicit-function-return-type": "off",
  },
  settings: {
    react: {
      version: 'detect', // Tells eslint-plugin-react to automatically detect the version of React to use
    },
  },
};

Add Prettier dependencies:

yarn add prettier eslint-config-prettier eslint-plugin-prettier --dev

Create a .prettierrc.js file at the root and add the following:

module.exports = {
  semi: true,
  trailingComma: 'all',
  singleQuote: true,
  printWidth: 120,
  tabWidth: 4,
};

Update the .eslintrc.js file:

module.exports = {
  parser: '@typescript-eslint/parser', // Specifies the ESLint parser
  extends: [
    'plugin:react/recommended', // Uses the recommended rules from @eslint-plugin-react
    'plugin:@typescript-eslint/recommended', // Uses the recommended rules from the @typescript-eslint/eslint-plugin
+   'prettier/@typescript-eslint', // Uses eslint-config-prettier to disable ESLint rules from @typescript-eslint/eslint-plugin that would conflict with prettier
+   'plugin:prettier/recommended', // Enables eslint-plugin-prettier and displays prettier errors as ESLint errors. Make sure this is always the last configuration in the extends array.
  ],
  parserOptions: {
    ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
    sourceType: 'module', // Allows for the use of imports
    ecmaFeatures: {
      jsx: true, // Allows for the parsing of JSX
    },
  },
  rules: {
    // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
    // e.g. "@typescript-eslint/explicit-function-return-type": "off",
  },
  settings: {
    react: {
      version: 'detect', // Tells eslint-plugin-react to automatically detect the version of React to use
    },
  },
};

These recommendations come from a community resource called "Using ESLint and Prettier in a TypeScript Project", written by Robert Cooper. If you visit his blog, you can read more about the "why" behind these rules and configurations.

VSCode Extensions and Settings

We've added ESLint and Prettier and the next step to improve our DX is to automatically fix/prettify our code on save.

First, install the ESLint extension for VSCode. This will allow ESLint to integrate with your editor seamlessly.

Next, update your Workspace settings by adding the following to your .vscode/settings.json:

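The snippet itself didn't survive in this feed. A minimal version that achieves the described behavior, assuming VS Code's standard codeActionsOnSave option (our guess, not necessarily the article's original), would be:

{
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": true
  }
}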

This will allow VS Code to work its magic and fix your code when you save. It's beautiful!

These suggestions also come from the previously linked article "Using ESLint and Prettier in a TypeScript Project" by Robert Cooper.

The post React with TypeScript: Best Practices appeared first on SitePoint.

How Do Developers See Themselves? A Quantified Look

Jan 16, 2020

Description:

This article was originally published by SlashData. Thank you for supporting the partners who make SitePoint possible.

For the first time in our Q2 2019 Developer Economics survey, we tried to introduce developers in their own words by asking them about how they see themselves.

We provided a set of 21 words and asked them to choose up to five to form a word sketch of their personality. We also gave them the opportunity to provide their own text description.

Here’s what we got:

developers

Over half of the developers say they are logical

Perhaps unsurprisingly, nearly six out of ten developers say they are logical. And as it turns out this is the most popular choice of description across all software development sectors, except in games development. Next in line, but some way behind, are the descriptors team player and introvert at 37% each. By comparison, just 10% label themselves as an extrovert. But can you guess which programmers consider themselves less introverted? Those involved in the AR/VR and IoT sector. Interesting, right?

Moving on to a slightly more unusual pair of labels: there are slightly more dog lovers than cat people in the developer population, although the numbers are close at 15% and 13% respectively. A much greater difference seems to exist though between developers working at night (night owls, 29%) and those who prefer the fresh morning breeze (early birds, 14%).

developer

What about hobbies and spare time?

A third (33%) of developers say they are a reader, which makes it the most popular choice among spare-time activities. It is closely followed by 31% who say they are a gamer. Our data shows that developers tend to perceive themselves differently as they grow older. More than one in three developers up to the age of 34 years consider themselves to be a gamer, compared to fewer than one in four of the 35-44 age group, and fewer than one in five of the 45-54 age group. Older programmers are more likely to describe themselves as readers.

“What’s this “real life” you’re talking about like? Is it similar to WoW? Does it run on a 64 bit OS?”

Other activities such as music and sport score lower, at 20% and 17%. A low 7% make LEGO models, although the popularity of LEGO seems to be very much dependent upon age. A respectable 12% of developers under 18 make LEGO models, but the proportion halves to 6% within the age group 18-24.

What about the artistic ones?

Even though a developer’s work demands a high level of creativity, just 14% use “artistic” to describe themselves. Those involved in games or in augmented reality and virtual reality development are far more likely than others to use this word to describe themselves. 21% of game developers and about 25% of AR/VR developers see themselves as artistic, as compared to 16% or less of desktop, web and backend developers.

Lastly, in our Q2 2019 Developer Economics survey, a few programmers were confused as to why we were asking the question and pondered if we were trying to set up a dating site. Well, we weren’t! We were collecting the data to create the State of the Developer Nation Report, 17th Edition.

Interested in joining forces with 40,000 developers worldwide in shaping the future of the developer ecosystem? Take our survey.

The post How Do Developers See Themselves? A Quantified Look appeared first on SitePoint.

Learn End-to-end Testing with Puppeteer

Jan 16, 2020

Description:

End-to-end Testing with Puppeteer

Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.

In this tutorial, we’ll learn what testing is, the different types of testing, and then we’ll use Puppeteer to perform end-to-end testing on our application. By the end of this tutorial, you should be able to end-to-end test your apps easily with Puppeteer.

Prerequisites

For this tutorial, you need a basic knowledge of JavaScript, ES6+ and Node.js.

You must also have installed the latest version of Node.js.

We’ll be using yarn throughout this tutorial. If you don’t have yarn already installed, install it from here.

You should also know the basics of Puppeteer. To understand the basics of Puppeteer, check out this simple tutorial.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node 13.3.0
npm 6.13.2
yarn 1.21.1
puppeteer 2.0.0
create-react-app 3.3.0

Introduction to Testing

In simple terms, testing is a process to evaluate whether the application works as expected. It helps in catching bugs before your application gets deployed.

There are four different types of testing:

Static Testing: uses a static type system like TypeScript, ReasonML, Flow or a linter like ESLint. This helps in capturing basic errors like typos and syntax.
Unit Testing: the smallest part of an application, also known as a unit, is tested.
Integration Testing: multiple related units are tested together to see if the application works perfectly in combination.
End-to-end Testing: the entire application is tested from start to finish, just like a regular user would, to see if it behaves as expected.

The testing trophy by Kent C Dodds is a great visualization of the different types of testing:

Testing Trophy - Kent C Dodds

The testing trophy should be read bottom-to-top. If you perform these four levels of testing, you can be confident enough with the code you ship.

Now let’s perform end-to-end testing with Puppeteer.

End-to-end Testing with Puppeteer

Let's bootstrap a new React project with create-react-app, also known as CRA. Go ahead and type the following in the terminal:

$ npx create-react-app e2e-puppeteer

This will bootstrap a new React project in an e2e-puppeteer folder. Thanks to the latest create-react-app version, this will also install testing-library by default so we can test our applications easily.

Go inside the e2e-puppeteer directory and start the server by typing the following in the terminal:

$ cd e2e-puppeteer
$ yarn start

It should look like this:

React Init

Our App.js looks like this:

import React from 'react';
import logo from './logo.svg';
import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>
          Edit <code>src/App.js</code> and save to reload.
        </p>
        <a
          className="App-link"
          href="https://reactjs.org"
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
      </header>
    </div>
  );
}

export default App;

We’ll be testing the App.js function and the code will be written in App.test.js. So go ahead and open up App.test.js. It should have the following content:

import React from 'react';
import { render } from '@testing-library/react'; // 1
import App from './App';

test('renders learn react link', () => { // 2
  const { getByText } = render(<App />); // 3
  const linkElement = getByText(/learn react/i); // 4
  expect(linkElement).toBeInTheDocument(); // 5
});

Here's what's happening in the code above:

1. We import the render function from the @testing-library/react package.
2. We then use the global test function from Jest, which is our test runner installed by default through CRA. The first parameter is a string which describes our test, and the second parameter is a function where we write the code we want to test.
3. Next up, we render the App component and destructure a method called getByText, which searches for all elements that have a text node with textContent.
4. Then, we call the getByText function with the text we want to check. In this case, we check for learn react with the case insensitive flag.
5. Finally, we make the assertion with the expect function to check if the text exists in the DOM.

This comes by default when we bootstrap with CRA. Go ahead and open up another terminal and type the following:

$ yarn test

When it shows a prompt, type a to run all the tests. You should now see this:

React Init Test

Now let's test this application with end-to-end testing.

Testing the Boilerplate with Puppeteer

Go ahead and install puppeteer as a dev dependency by typing the following in the terminal:

$ yarn add -D puppeteer

Now open up App.test.js and paste the following:

import puppeteer from "puppeteer"; // 1

let browser;
let page;

// 2
beforeAll(async () => {
  browser = await puppeteer.launch({ headless: false });
  page = await browser.newPage();
  await page.goto("http://localhost:3000/");
});

// 3
test("renders learn react link", async () => {
  await page.waitForSelector(".App");

  const header = await page.$eval(".App-header>p", e => e.innerHTML);
  expect(header).toBe(`Edit <code>src/App.js</code> and save to reload.`);

  const link = await page.$eval(".App-header>a", e => {
    return { innerHTML: e.innerHTML, href: e.href };
  });
  expect(link.innerHTML).toBe(`Learn React`);
  expect(link.href).toBe("https://reactjs.org/");
});

// 4
afterAll(() => {
  browser.close();
});

This is what we're doing in the code above:

1. Firstly, we import the puppeteer package and declare some global variables, browser and page.
2. Then we have the beforeAll function provided by Jest. This runs before all tests are run. Here, we launch a new Chromium browser by calling puppeteer.launch(), while setting headless mode to false so we see what's happening. Then, we create a new page by calling browser.newPage() and go to our React application's URL, http://localhost:3000/, by calling the page.goto() function.
3. Next up, we wait for the .App selector to load. When it loads, we get the innerHTML of the .App-header>p selector by using the page.$eval() method and compare it with Edit src/App.js and save to reload.. We do the same thing with the .App-header>a selector: we get back innerHTML and href, and compare them with Learn React and https://reactjs.org/ respectively to test our assertions with Jest's expect() function.
4. Finally, we call the afterAll function provided by Jest. This runs after all tests are run. Here, we close the browser.
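One practical tweak worth noting (our suggestion, not part of the original tutorial): on a CI server you'd typically keep headless mode on and await the browser shutdown so the test process exits cleanly:

beforeAll(async () => {
  // headless: true is Puppeteer's default and avoids opening a visible window on CI
  browser = await puppeteer.launch({ headless: true });
  page = await browser.newPage();
  await page.goto("http://localhost:3000/");
});

afterAll(async () => {
  // awaiting close() ensures Chromium shuts down before Jest tears the suite down
  await browser.close();
});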

This test should automatically run and give you the following result:

E2E Test Puppeteer Basic

Let's go ahead and make a counter app.

The post Learn End-to-end Testing with Puppeteer appeared first on SitePoint.

15 Top WordPress Themes to Use in 2020

Jan 14, 2020

Description:

15 Top WordPress Themes to Use in 2020

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Overworked, overstressed, and flat out fed up with starting every website design from scratch? Here are some WordPress theme solutions you’ll appreciate.

Maybe you need to switch to an easy-to-use theme — a WordPress theme that’s crazy-fast and gives you reliable performance may be your cup of tea.

Tired of having to build your websites from scratch? It’s totally unnecessary unless for some reason you absolutely want to.

Before you blame yourself for the situation you find yourself in, consider this: maybe it’s the tools you’re using. You may be trying to build a house without the use of power tools, scaffolding, or helpful aids.

One of the following 15 top WordPress themes should prove to be the solution to your problem. In fact, more than one of them could probably serve quite nicely.

Grab a cup of coffee and let’s get started.

1. BeTheme: Responsive, Multi-purpose WordPress Theme

BeTheme: Responsive, Multi-purpose WordPress Theme

This biggest-of-them-all multipurpose WordPress theme can’t be beaten in terms of the huge array of “power” tools and design elements it places at your disposal. BeTheme is fast and flexible. It’s easy for beginners to work with. If trying to satisfy multiple clients has become more stressful than rewarding, BeTheme has a solution for that as well.

Be’s selection of 500+ customizable, responsive pre-built websites is the highlight and a proven stress reducer. These professionally crafted, pre-built websites cover 30 industry sectors, all the common websites, and an impressive range of business niches.

They also have UX features and functionalities built into them, potentially saving you a ton of design time.

BeTheme uses the popular Muffin Builder 3 page builder, with WPBakery as an option. There’s a Layouts Configurator if you really want to, or absolutely have to, build a page from scratch. It has a Shortcode Generator and a large selection of shortcodes that, together with Be’s drag and drop features, eliminates the need for coding. Be’s powerful Admin Panel provides unmatched flexibility.

I have purchased 4 of these themes at this point. Love the speed and build of them. Only wish list item would be a way to categorize and tag pages like you can with posts. — sharkyh2o

Click here and browse Be’s impressive collection of pre-built websites.

2. Total Theme

Total Theme

Total is another stress-reducing theme. This flexible and easy-to-use WordPress theme has been around for a while and has amassed a user base of 41,000 happy customers.

Total is drag and drop, and it doesn't require coding to build exactly the type of website you have in mind. Total is also developer friendly thanks to its system of hooks, filters, and snippets. There are more than 500 advanced customizing options available, plus 100+ page-builder elements and design modules to work with and 40+ pre-built demos to get any project off to a solid start. You won't be burdened by third-party plugins either, since this WooCommerce-ready theme is compatible with all WordPress plugins. Highlights users mention include:

- very friendly and very simple
- clean code
- good flexibility
- cool elements
- excellent custom panel
- good integration with WooCommerce

Love this theme, it can do everything I need including shops, in a very good and easy way. — soswebdesign

Click here to discover if Total is the solution you’ve been looking for.

3. Avada

Avada

If you choose a best-selling theme, chances are it’s going to relieve rather than add to any stress you may be encountering. Avada is such a theme.

Its Dynamic Content System provides unmatched flexibility. Avada integrates totally with WooCommerce and includes product design drag and drop capabilities. 55+ pre-built websites are included to get you off to a fast start.

Great theme! As my first WordPress theme, it offers many options and continues to improve! — nwilger

Click here to find out more about this best-seller.

4. TheGem: Creative, Multi-Purpose, High-Performance WordPress Theme

TheGem

Ask many web designers about TheGem and they'll tell you it features some of the most beautiful designs for WordPress. What really gets them excited, however, are the tools that come with the package.

Those same designers will tell you that TheGem is the ultimate WordPress toolbox. To name just a few of the goodies, you'll find:

- plenty of pre-built, one-click installable websites
- over 400 modern and trendy design templates
- a ready-to-go fashion store

Great theme and great service. — bepreoo

Your very own ultimate toolbox is just a click or two away.

5. Uncode: Creative, Multiuse WordPress Theme

Uncode

Bloggers, freelancers, and creatives of all types, plus small businesses and agencies, will benefit from making this ThemeForest bestseller (60K+ sales) their theme of choice. This is doubly true if you need to create a portfolio, a magazine-style website, or any other type or style of page.

Features include:

- a powerful front-end editor
- adaptive image and advanced grid systems
- WooCommerce compatibility and single product design and display features.

The star of the show is Uncode’s showcase of user-created websites. They tell a story of what Uncode could do for you, plus they are a source of inspiration.

Nice code, good support, design possibilities are endless. — zoutmedia

Visit Uncode and browse its showcase of user-built websites.

6. Houzez: Highly Customizable Real Estate WordPress Theme

Houzez

There are some website types that a multi-purpose theme simply can't help you with, usually because unique and special features are required. For the real estate sector, as an example, using a theme like Houzez is a must. Houzez's unique functionalities include:

- advanced property searching
- flexible property listings formatting
- a property management system

In addition, this drag and drop theme can easily be customized to match a realtor’s business model.

I really love the function and the appearance of the theme. — stuffmartusa2

If you happen to have a realtor for a client, look no further.

The post 15 Top WordPress Themes to Use in 2020 appeared first on SitePoint.

How Four Programmers Got Their First Python Jobs

Jan 13, 2020

Description:

How Four Programmers Got Their First Python Jobs

No one really knows how to do a job before they do it. Most people land a coveted position through a strange alchemy of related experience, networking, and hard work. The real experience is the job itself. That’s when you get the opportunity to apply what you know to real-world problems and see it pay off.

The following four programmers earned their first Python jobs in different ways. Some had prior Python experience, some didn’t. Some knew what they were getting into, others found out later. Understanding how they landed their first Python job might help you land yours. Here’s how they did it.

Nathan Grieve

First Python job: Data Scientist

How Nathan Got the Job

While completing my Physics degree, I applied for a data science job with a small tech startup that primarily used Python (and SQL). The thing is, I didn’t have experience with Python at the time. When the interview came around, I answered the programming questions by using pseudocode to demonstrate I understood the concepts.

Pseudocode uses coding logic without using coding syntax. So by using the same logic that Python does, I could show an understanding of the concepts without being specific to any language.

For example, any computer scientist can understand the simple pseudocode below, but they may not understand the Python function unless they've worked with it before.

Python

loop_index = 0
while loop_index < 5:
    print(loop_index)
    loop_index += 1

Pseudocode

Set loop index to 0
Loop while loop index is less than 5
    print loop index
    Increase loop index by 1
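For comparison, here's the same loop in JavaScript (an illustrative sketch of ours, not part of Nathan's interview); the logic carries straight over, and only the syntax changes:

// the same logic as the pseudocode above, expressed in JavaScript
let loopIndex = 0;
while (loopIndex < 5) {
  console.log(loopIndex);
  loopIndex += 1;
}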

Pseudocode is more readable to humans, too. It's not actually much different from code; it just avoids using language-specific syntax. And using it worked! They gave me the job. But of course, before I arrived, I had to actually learn the language.

Nathan's Advice

My advice for those wanting to enter the field is to tackle real-world problems as soon as you can. At Project Hatch, a company I cofounded that analyzes startups and provides them with analytics to grow their businesses, we do hire people who are self-taught, but there's a huge skill gap between those who only do Codecademy-style courses and those who actually apply their knowledge. I would say keep working through Codewars challenges until you're at a point where you don't have to repeatedly look up what arguments you should be using and what order they should be used in.

If you’re looking for real-world problems to solve, go on Kaggle, which has a huge number of data sets to play with, and practice pulling useful information out of them. For example, if you’re looking at a data set for food recipes, align the data set with local food prices to find all of the recipes that create meals for under $5. When you’re ready for a real challenge, try Kaggle competitions. You'll find problems to solve and companies willing to pay. These challenges will be incredibly difficult to begin with, but you'll learn a lot discussing solutions with other computer scientists on the forum.

Bill Price

First Python job: Cyber Security Architect

How Bill Got the Job

I had supported Python developers for a number of years as a NASA network administrator and security engineer, so I was aware of the power and flexibility of the language before a new opportunity presented itself.

In 2017, I was approached by a major financial institution to join a team charged with developing a new assessment program to identify monitoring gaps in a particular business process and its supporting applications. I believe they came to me because of my:

- network and security experience
- lack of experience in the financial sector, as they wanted a fresh set of technical eyes on their problem
- ability to tease out what the actual requirements are
- ability to approach a new project with an open mind and no preconceived notions.

Funnily enough, and unbeknownst to me, this turned out to be my first Python job.

Our team was expected to triage the gaps, identify possible mitigations, and report our findings to leadership. We began by mapping applications to each business process, but quickly realized that the data sets we needed to review (application and hardware inventories, Qualys vulnerability scans, daily BladeLogic reports, Splunk logs, etc.) were too large to import into Excel spreadsheets. Furthermore, we didn't have access to traditional UNIX text-processing resources or administrative access to our workstations, where we might have installed new data management tools. And we didn't have the budget to purchase new tools.

We did, however, have access to Python, a full set of Python libraries, and the ability to install Python using existing enterprise support software.

I didn’t know Python going in. I had to learn on the job, and good thing I did. Python was critical in our being able to parse hardware inventories based on applications used by the business process, isolate vulnerabilities associated with the appropriate hardware, and identify unauthorized services running on any device that supported one (or more) applications.

Bill’s Advice

My advice to aspiring Python developers is threefold.

First, familiarize yourself with the different libraries available in Python that might assist you in a potential job. Our team used mechanize, cookielib, urllib, urllib2, and csv extensively. If you're looking at a machine-learning project, pay attention to libraries like TensorFlow, NumPy, and Keras.

Next, be on the lookout for processes that need to be automated, or where existing automation can be improved. There's likely an opportunity for applying Python.

Lastly, have a good Python reference book to supplement all of the online resources that are available. I recommend T.J. O'Connor's Violent Python.

The post How Four Programmers Got Their First Python Jobs appeared first on SitePoint.

4 Reasons to Use Image Processing to Optimize Website Media

Jan 10, 2020

Description:

4 Reasons to Use Image Processing to Optimize Website Media

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Image optimization is a big deal when it comes to website performance. You might be wondering if you’re covering all the bases by simply keeping file size in check. In fact, there’s a lot to consider if you truly want to optimize your site’s images.

Fortunately, there are image processing tools and content delivery networks (CDNs) available that can handle all the complexities of image optimization. Ultimately, these services can save you time and resources, while also covering more than one aspect of optimization.

In this article, we’ll take a look at the impact image optimization can have on site performance. We’ll also go over some standard approaches to the problem, and explore some more advanced image processing options. Let’s get started!

Why Skimping on Image Optimization Can Be a Performance Killer

If you decide not to optimize your images, you’re essentially tying a very heavy weight to all of your media elements. All that extra weight can drag your site down a lot. Fortunately, optimizing your images trims away the unnecessary data your images might be carrying around.

If you’re not sure how your website is currently performing, you can use an online tool to get an overview.

Results of a website speed test

Once you have a better picture of what elements on your website are lagging or dragging you down, there are a number of ways you can tackle image optimization specifically, including:

- Choosing appropriate image formats. There are a number of image formats to choose from, and they each have their strengths and weaknesses. In general, it's best to stick with JPEGs for photographic images. For graphic design elements, on the other hand, PNGs are typically superior to GIFs. Additionally, new image formats such as Google's WebP have promising applications, which we'll discuss in more detail later on.
- Maximizing compression type. When it comes to compression, the goal is to get each image to its smallest “weight” without losing too much quality. There are two kinds of compression that can do that: “lossy” and “lossless”. A lossy image will look similar to the original, but with some decrease in quality, whereas a lossless image is nearly indistinguishable from the original but also heavier.
- Designing with the image size in mind. If you're working with images that need to display in a variety of sizes, it's best to provide all the sizes you'll need. If your site has to resize them on the fly, that can negatively impact speeds.
- Exploring delivery networks. CDNs can be a solution to more resource-heavy approaches for managing media files. A CDN can handle all of your image content, and respond to a variety of situations to deliver the best and most optimized files.

As with any technical solution, you’ll have to weigh the pros and cons of each approach. However, it’s also worth noting that these more traditional approaches aren’t the only options you have available to you.

4 Reasons to Use Image Processing for Optimizing Your Website’s Media

As we mentioned above, CDNs are one possible way to solve image performance conundrums on your website. One example of the services a CDN can provide is found in KeyCDN’s image processing.

This particular service is a real-time image processing and delivery option. This means it can detect how a user is viewing your site, and provide the optimal image type for that use case. Let’s look at four reasons this can be a very effective feature.

1. You Can Convert Your Images to Advanced Formats

We’ve already discussed how PNG and JPEG are the most common and recommended formats for graphic and photographic elements respectively. You might not know, however, that there’s a new file format available that might be beneficial when you’re looking to boost your site’s performance.

We’re talking about WebP, which is Google’s new, modern image file format.

The WebP logo. Source: Wikimedia Commons

The WebP format can work with both lossy and lossless compression, and supports transparency. Plus, the files themselves hold a lot of potential when it comes to optimization and performance.

This is because WebP lossless files are up to 26% smaller than PNGs of equivalent quality. In fact, KeyCDN did a study to compare just how much of an impact the WebP format can have. It found an overall 77% decrease in page size when converting from JPG to WebP.

Consequently, KeyCDN offers conversion to WebP. This feature uses lossless compression, and the most appropriate image can then be served up to each user based on browser specifications and compatibility.
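If you're handling format negotiation yourself rather than through a CDN, one common client-side check (a generic sketch of ours, not KeyCDN's implementation) detects WebP support before swapping in .webp sources:

// returns true in browsers that can encode WebP via canvas, a practical proxy for support
function supportsWebP() {
  const canvas = document.createElement('canvas');
  return canvas.toDataURL('image/webp').startsWith('data:image/webp');
}

// hypothetical usage: swap a JPEG/PNG source for an assumed .webp variant
const img = document.querySelector('img[data-webp]');
if (img && supportsWebP()) {
  img.src = img.src.replace(/\.(jpe?g|png)$/i, '.webp');
}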

In addition to conversion, there’s also a WebP Caching feature that offers a one-click solution for existing users. Without changing anything else, KeyCDN users can easily take advantage of WebP images via this option.

The post 4 Reasons to Use Image Processing to Optimize Website Media appeared first on SitePoint.

JavaScript’s New Private Class Fields, and How to Use Them

Jan 10, 2020

Description:

JavaScript's New Private Class Fields, and How to Use Them

ES6 introduced classes to JavaScript, but they can be too simplistic for complex applications. Class fields (also referred to as class properties) aim to deliver simpler constructors with private and static members. The proposal is currently at TC39 stage 3 (candidate) and is expected to be added to an upcoming ECMAScript edition. Private fields are currently supported in Node.js 12, Chrome 74, and Babel.

A quick recap of ES6 classes is useful before we look at how class fields are implemented.

ES6 Class Basics

JavaScript's object-oriented inheritance model can confuse developers coming from languages such as C++, C#, Java, and PHP. For this reason, ES6 introduced classes. They are primarily syntactical sugar but offer more familiar object-oriented programming concepts.

A class is an object template which defines how objects of that type behave. The following Animal class defines generic animals (classes are normally denoted with an initial capital to distinguish them from objects and other types):

class Animal {

  constructor(name = 'anonymous', legs = 4, noise = 'nothing') {
    this.type = 'animal';
    this.name = name;
    this.legs = legs;
    this.noise = noise;
  }

  speak() {
    console.log(`${this.name} says "${this.noise}"`);
  }

  walk() {
    console.log(`${this.name} walks on ${this.legs} legs`);
  }

}

Class declarations always execute in strict mode. There's no need to add 'use strict'.

The constructor method is run when an object of the Animal type is created. It typically sets initial properties and handles other initializations. speak() and walk() are instance methods which add further functionality.

An object can now be created from this class with the new keyword:

let rex = new Animal('Rex', 4, 'woof');
rex.speak(); // Rex says "woof"

rex.noise = 'growl';
rex.speak(); // Rex says "growl"

Getters and Setters

Setters are special methods used to define values only. Similarly, getters are special methods used to return a value only. For example:

class Animal {

  constructor(name = 'anonymous', legs = 4, noise = 'nothing') {
    this.type = 'animal';
    this.name = name;
    this.legs = legs;
    this.noise = noise;
  }

  speak() {
    console.log(`${this.name} says "${this.noise}"`);
  }

  walk() {
    console.log(`${this.name} walks on ${this.legs} legs`);
  }

  // setter
  set eats(food) {
    this.food = food;
  }

  // getter
  get dinner() {
    return `${this.name} eats ${this.food || 'nothing'} for dinner.`;
  }

}

let rex = new Animal('Rex', 4, 'woof');
rex.eats = 'anything';
console.log( rex.dinner ); // Rex eats anything for dinner.

Child or Sub-classes

It's often practical to use one class as the base for another. A Human class could inherit all the properties and methods from the Animal class using the extends keyword. Properties and methods can be added, removed, or changed as necessary so human object creation becomes easier and more readable:

class Human extends Animal {

  constructor(name) {
    // call the Animal constructor
    super(name, 2, 'nothing of interest');
    this.type = 'human';
  }

  // override Animal.speak
  speak(to) {
    super.speak();
    if (to) console.log(`to ${to}`);
  }

}

super refers to the parent class, so it’s usually the first call made in the constructor. In this example, the Human speak() method overrides that defined in Animal.

Object instances of Human can now be created:

let don = new Human('Don');

don.speak('anyone'); // Don says "nothing of interest" to anyone

don.eats = 'burgers';
console.log( don.dinner ); // Don eats burgers for dinner.
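The excerpt above recaps ES6 class basics; the private fields of the article's title use the proposal's # prefix. Here's a minimal sketch of our own (runnable in the Node.js 12 and Chrome 74 environments noted earlier):

class Counter {
  #count = 0; // private field: inaccessible outside the class body

  increment() {
    this.#count++;
  }

  get value() {
    return this.#count;
  }
}

const counter = new Counter();
counter.increment();
console.log(counter.value); // 1
// Accessing counter.#count out here is a SyntaxError: private fields
// can only be referenced inside the class that declares them.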

The post JavaScript’s New Private Class Fields, and How to Use Them appeared first on SitePoint.

The Top 10 SitePoint Guides & Tutorials of 2019

Jan 9, 2020

Description:

In 2019, we published hundreds of new guides, tutorials, and articles. Whether we showed you how to use new technologies and tools, or published career advice from people at the top of their game, our aim was always to help you level up as a web developer.

Though tech moves fast, all of those articles are still relevant at the start of 2020. To celebrate the year just concluded, we wanted to take a look at the 10 pieces our readers enjoyed and shared the most in 2019. Hopefully, there's something here that's useful to you going into this new year.

What Is Functional Programming?

As a programmer, you probably want to write elegant, maintainable, scalable, predictable code. The principles of functional programming, or FP, can significantly aid in these goals. Ali Spittel walks you through these principles, using JavaScript to demonstrate them.
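To give a flavor of those principles, here's a tiny illustrative sketch of ours (not from Ali's article) contrasting a pure function with an impure one:

// pure: the same input always produces the same output, with no side effects
const add = (a, b) => a + b;

// impure: the result depends on, and mutates, external state
let total = 0;
const addToTotal = n => (total += n);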

➤ Read What Is Functional Programming?

10 Must-have VS Code Extensions for JavaScript Developers

Visual Studio Code is undoubtedly the most popular lightweight code editor today. It does borrow heavily from other popular code editors, mostly Sublime Text and Atom. However, its success mainly comes from its ability to provide better performance and stability. In addition, it also provides much-needed features like IntelliSense, which were only available in full-sized IDEs like Eclipse or Visual Studio 2017.

The power of VS Code no doubt comes from the marketplace. Thanks to the wonderful open-source community, the editor is now capable of supporting almost every programming language, framework, and development technology. Support for a library or framework comes in various ways, which mainly includes snippets, syntax highlighting, Emmet and IntelliSense features for that specific technology.

➤ Read 10 Must-have VS Code Extensions for JavaScript Developers

Why the Highest Paid Developers "Fight" Their Co-workers

Most employees want to keep their jobs and their clients. They don’t have the leverage or control they want over their own careers. They need their job. In fact, most people are terrified of losing their jobs.

Research shows the fear of losing your job creates job dissatisfaction and a lack of commitment at work. This, in turn, affects job performance, negatively increasing the likelihood that you will lose your job. It’s a vicious cycle that seems to repeat itself over and over.

But there’s something worse than the fear of a job loss.

➤ Read Why the Highest Paid Developers "Fight" Their Co-workers

How to Tell If Vue.js Is the Right Framework for Your Next Project

Vue.js grew from a one-man project to a JavaScript framework everyone’s talking about. You’ve heard about it from your front-end colleagues and during conferences. You’ve probably read multiple comparisons between Vue, React, and Angular. And you’ve probably also noticed that Vue outranks React in terms of GitHub stars.

All that's made you wonder whether Vue.js is the right framework for your next project. Well, let's explore the possibilities and limitations of Vue to give you a high-level look at the framework and make your decision a little easier.

➤ Read How to Tell If Vue.js Is the Right Framework for Your Next Project

JavaScript Web Workers: A Beginner's Guide

Today’s mobile devices normally come with 8+ CPU cores, or 12+ GPU cores. Desktop and server CPUs have up to 16 cores, 32 threads, or more. In this environment, having a dominant programming or scripting environment that is single-threaded is a bottleneck.

JavaScript is single-threaded. This means that by design, JavaScript engines — originally browsers — have one main thread of execution, and, to put it simply, process or function B cannot be executed until process or function A is finished. A web page’s UI is unresponsive to any other JavaScript processing while it is occupied with executing something — this is known as DOM blocking.

The solution: web workers.
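In essence, a worker runs a script in a separate thread and communicates with the page via messages. A minimal sketch of the idea (hypothetical file names):

// main.js: hand a heavy job to a worker so the UI stays responsive
const worker = new Worker('worker.js');
worker.postMessage(5e7);
worker.onmessage = (event) => {
  console.log('Sum computed off the main thread:', event.data);
};

// worker.js: runs in its own thread, with no access to the DOM
onmessage = (event) => {
  let sum = 0;
  for (let i = 0; i < event.data; i++) sum += i;
  postMessage(sum);
};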

➤ Read JavaScript Web Workers: A Beginner's Guide

React vs Angular: An In-depth Comparison

Should I choose Angular or React? Each framework has a lot to offer and it’s not easy to choose between them. Whether you’re a newcomer trying to figure out where to start, a freelancer picking a framework for your next project, or an enterprise-grade architect planning a strategic vision for your company, you’re likely to benefit from having an educated view on this topic.

➤ Read React vs Angular: An In-depth Comparison

Fetching Data from a Third-party API with Vue.js and Axios

More often than not, when building your JavaScript application, you’ll want to fetch data from a remote source or consume an API. I recently looked into some publicly available APIs and found that there’s lots of cool stuff that can be done with data from these sources.

With Vue.js, you can literally build an app around one of these services and start serving content to users in minutes.

I'll demonstrate how to build a simple news app that will show the top news articles of the day and allow users to filter by their category of interest, fetching data from the New York Times API.
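The core fetching pattern looks something like this (a generic sketch with a placeholder URL, not the article's New York Times code):

import axios from 'axios';

// request JSON from a remote API and handle success and failure
axios.get('https://api.example.com/top-stories')
  .then(response => {
    console.log(response.data);
  })
  .catch(error => {
    console.error('Request failed:', error);
  });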

➤ Read Fetching Data from a Third-party API with Vue.js and Axios

How to Install Docker on Windows 10 Home

If you’ve ever tried to install Docker for Windows, you’ve probably come to realize that the installer won’t run on Windows 10 Home. Only Windows Pro, Enterprise or Education support Docker. Upgrading your Windows license is pricey, and also pointless since you can still run Linux Containers on Windows without relying on Hyper-V technology, a requirement for Docker for Windows.

In this tutorial, I'll show you how to quickly set up a Linux VM on Windows Home running Docker Engine with the help of Docker Machine.

➤ Read How to Install Docker on Windows 10 Home

How to Use Windows Subsystem for Linux 2 and Windows Terminal

In this article, you'll learn how you can use Windows Subsystem for Linux 2 to set up and run a local Linux shell interface in Windows without using a virtual machine. This is not like using terminals such as Git Bash or cmder that have a subset of UNIX tools added to $PATH. This is actually like running a full Linux kernel on Windows that can execute native Linux applications. That's pretty awesome, isn't it?

➤ Read How to Use Windows Subsystem for Linux 2 and Windows Terminal

How to Migrate to Gulp.js 4.0

Despite competition from webpack and Parcel, Gulp.js remains one of the most popular JavaScript task runners. Gulp.js is configured using code which makes it a versatile, general-purpose option. As well as the usual transpiling, bundling and live reloading, Gulp.js could analyze a database, render a static site, push a Git commit, and post a Slack message with a single command.
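The headline change in Gulp 4 is task composition: series() and parallel() replace the old task dependency arrays. A minimal sketch with assumed task names:

// gulpfile.js (Gulp 4): tasks are plain functions composed explicitly
const { series, parallel } = require('gulp');

function clean(done) {
  // remove previous build artifacts here
  done();
}

function scripts(done) {
  // bundle JavaScript here
  done();
}

function styles(done) {
  // compile CSS here
  done();
}

// run clean first, then scripts and styles concurrently
exports.default = series(clean, parallel(scripts, styles));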

➤ Read How to Migrate to Gulp.js 4.0

Happy New Year from SitePoint

We hope you all had a restful break and have come back recharged and ready to tackle your goals for this new year. We'll continue to collaborate with working developers to help you improve your skills this year, and we'll explore new areas that we hope you'll find both useful and exciting. And we'll continue our work on leveling SitePoint Premium up into a next-generation learning platform and comprehensive reference library. Happy New Year from SitePoint!

The post The Top 10 SitePoint Guides & Tutorials of 2019 appeared first on SitePoint.

How to Edit Source Files Directly in Chrome

Jan 7, 2020

Description:

How to Edit Source Files Directly in Chrome

A web developer's typical day involves creating HTML web pages with associated CSS and JavaScript in their favorite editor. The workflow:

1. Open the locally hosted page in a browser.
2. Swear.
3. Open DevTools to investigate the layout and functionality problems.
4. Tweak the HTML elements, CSS properties, and JavaScript code to fix the issues.
5. Copy those changes back into the editor and return to step #1.

While tools such as live reloading have made this process easier, many developers continue to tweak code in both DevTools and their editor.

However, it's possible to open and edit source files directly in Chrome. Any changes you make are saved to the file system and updated within the editor (presuming it refreshes when file changes occur).

Step 1: Launch Developer Tools

Open Chrome, load a page from your local file system/server and open Developer Tools from the More tools menu or press F12 or Ctrl/Cmd + Shift + I depending on your system. Navigate to the Sources tab to examine the file explorer:

Chrome DevTools Sources

You can open and edit CSS and JavaScript files in this view, but any changes will be lost as soon as you refresh the page.

Step 2: Associate a Folder with the Workspace

Click the Filesystem tab, then click + Add folder to workspace. You’ll be prompted to locate your work folder and Chrome will ask you to confirm that you Allow access. The explorer shows files on your system which can be opened with a single click:

Chrome DevTools file system

The post How to Edit Source Files Directly in Chrome appeared first on SitePoint.

How to Create Printer-friendly Pages with CSS

Jan 6, 2020

Description:

How to Create Printer-friendly Pages with CSS

In this article, we review the art of creating printer-friendly web pages with CSS.

"Who prints web pages?" I hear you cry! Relatively few pages will ever be reproduced on paper. But consider:

- printing travel or concert tickets
- reproducing route directions or timetables
- saving a copy for offline reading
- accessing information in an area with poor connectivity
- using data in dangerous or dirty conditions, for example a kitchen or factory
- outputting draft content for written annotations
- printing web receipts for bookkeeping purposes
- providing documents to those with disabilities who find it difficult to use a screen
- printing a page for your colleague who refuses to use this newfangled t'internet nonsense.

Unfortunately, printing pages can be a frustrating experience:

- text can be too small, too large, or too faint
- columns can be too narrow, too wide, or overflow page margins
- sections may be cropped or disappear entirely
- ink is wasted on unnecessary colored backgrounds and images
- link URLs can't be seen
- icons, menus, and advertisements are printed which could never be clicked!

Many developers advocate web accessibility, yet few remember to make the printed web accessible!

Converting responsive, continuous media to paged paper of any size and orientation can be challenging. However, CSS print control has been possible for many years, and a basic style sheet can be completed within hours. The following sections describe well-supported and practical options for making your pages printer-friendly.

Print Style Sheets

Print CSS can either be:

1. Applied in addition to screen styling. Taking your screen styles as a base, the printer styles override those defaults as necessary.
2. Applied as separate styles. The screen and print styles are entirely separate; both start from the browser's default styles.

The choice will depend on your site/app. Personally, I use screen styles as a print base most of the time. However, I have used separate style sheets for applications with radically different outputs — such as a conference session booking system which displayed a timetable grid on-screen but a chronological schedule on paper.

A print style sheet can be added to the HTML <head> after the standard style sheet:

<link rel="stylesheet" href="main.css" />
<link rel="stylesheet" media="print" href="print.css" />

The print.css styles will be applied in addition to screen styles when the page is printed.

To separate screen and print media, main.css should target the screen only:

<link rel="stylesheet" media="screen" href="main.css" />
<link rel="stylesheet" media="print" href="print.css" />

Alternatively, print styles can be included within an existing CSS file using @media rules. For example:

/* main.css */
body {
  margin: 2em;
  color: #fff;
  background-color: #000;
}

/* override styles when printing */
@media print {

  body {
    margin: 0;
    color: #000;
    background-color: #fff;
  }

}

Any number of @media print rules can be added, so this may be practical for keeping associated styles together. Screen and print rules can also be separated if necessary:

/* main.css */

/* on-screen styles */
@media screen {

  body {
    margin: 2em;
    color: #fff;
    background-color: #000;
  }

}

/* print styles */
@media print {

  body {
    margin: 0;
    color: #000;
    background-color: #fff;
  }

}

Testing Printer Output

It's not necessary to kill trees and use outrageously expensive ink every time you want to test your print layout! The following options replicate print styles on-screen.

Print Preview

The most reliable option is the print preview option in your browser. This shows how page breaks will be handled using your default paper size.

Alternatively, you may be able to save or preview the page by exporting to a PDF.

Developer Tools

The DevTools (F12 or Cmd/Ctrl + Shift + I) can emulate print styles, although page breaks won't be shown.

In Chrome, open the Developer Tools and select More Tools, then Rendering from the three-dot icon menu at the top right. Change the Emulate CSS Media to print at the bottom of that panel.

In Firefox, open the Developer Tools and click the Toggle print media simulation icon on the Inspector tab's style pane:

Firefox print preview mode

Hack Your Media Attribute

Presuming you're using a <link> tag to load printer CSS, you could temporarily change the media attribute to screen:

<link rel="stylesheet" href="main.css" />
<link rel="stylesheet" media="screen" href="print.css" />

Again, this won't reveal page breaks. Don't forget to restore the attribute to media="print" once you finish testing.

Remove Unnecessary Sections

Before doing anything else, remove and collapse redundant content with display: none;. Typical unnecessary sections on paper could include navigation menus, hero images, headers, footers, forms, sidebars, social media widgets, and advertising blocks (usually anything in an iframe):

/* print.css */
header, footer, aside, nav,
form, iframe, .menu,
.hero, .adslot {
  display: none;
}

It may be necessary to use display: none !important; if CSS or JavaScript functionality is showing elements according to particular UI states. Using !important isn't normally recommended, but we can justify its use in a basic set of printer styles which override screen defaults.

Linearize the Layout

It pains me to say this, but Flexbox and Grid rarely play nicely with printer layouts in any browser. If you encounter issues, consider using display: block; or similar on layout boxes and adjust dimensions as necessary. For example, set width: 100%; to span the full page width.

Printer Styling

Printer-friendly styling can now be applied. Recommendations:

- ensure you use dark text on a white background
- consider using a serif font, which may be easier to read
- adjust the text size to 12pt or higher
- modify paddings and margins where necessary; standard cm, mm, or in units may be more practical.

Further suggestions include …

The post How to Create Printer-friendly Pages with CSS appeared first on SitePoint.

How to Quickly and Easily Remove a Background in Photoshop

Dec 19, 2019

Description:

How to Quickly and Easily Remove a Background in Photoshop

This article on how to remove a background in Photoshop remains one of our most popular posts and was updated in 2019 for Adobe Photoshop 2020.

Photoshop offers many different techniques for removing a background from an image. For simple backgrounds, using the standard magic wand tool to select and delete the background may well be more than adequate. For more complicated backgrounds, you might use the Background Eraser tool.

The Background Eraser Tool

The Background Eraser tool samples the color at the center of the brush and then deletes pixels of a similar color as you "paint". The tool isn’t too difficult to get the hang of. Let me show you how it works.

Remove a Background, Step 1: Open your Image

Start by grabbing an image that you want to remove the background from. I'll be using the image below, as it features areas that range from easy removal through to more challenging spots. I snagged this one for free from Unsplash.

The example image: man standing against lattice background

Now let's open it in Photoshop.

The example image opened in Photoshop

Remove a Background, Step 2: Select Background Eraser

Select the Background Eraser tool from the Photoshop toolbox. It may be hidden beneath the Eraser tool. If it is, simply click and hold the Eraser tool to reveal it. Alternatively, you can press Shift + E to cycle through all the eraser tools to get to the Background Eraser. If you had the default Eraser tool selected, press Shift + E twice to select the Background Eraser Tool.

choosing the background eraser tool

Remove a Background, Step 3: Tune Your Tool Settings

On the tool options bar at the top of the screen, select a round, hard brush. The most appropriate brush size will vary depending on the image you're working on. Use the square bracket keys ([ and ]) to quickly scale your brush size.

selecting a brush

Alternatively, you can right-click your mouse anywhere on the artboard to change the size and hardness of your brush too.

alternative way to change brush size

Next, on the tool options bar, make sure Sampling is set to Continuous (it's the first of the three icons), the Limits to Find Edges, and the Tolerance to somewhere in the range of 20-25%.

sampling, limits and tolerance

Note: a lower tolerance means the eraser will pick up on fewer color variations, while a higher tolerance expands the range of colors your eraser will select.

Remove a Background, Step 4: Begin Erasing

Bring your brush over your background and begin to erase. You should see a brush-sized circle with small crosshairs in the center. The crosshairs show the "hotspot" and delete that color wherever it appears inside the brush area. It also performs smart color extraction at the edges of any foreground objects to remove “color halos” that might otherwise be visible if the foreground object is overlaid onto another background.

beginning the process

When erasing, zoom in on your work area and try to keep the crosshairs from overlapping the edge of your foreground. It's likely that you'll need to reduce the size of the brush in some places to ensure that you don't accidentally erase part of your foreground subject.

The post How to Quickly and Easily Remove a Background in Photoshop appeared first on SitePoint.

5 Signs It’s Time to Quit Your Job

Dec 16, 2019

Description:

"Jerry wouldn't let me go to the emergency room."

Jenny010137 recounted her story on Reddit. She had a major health crisis, but Jerry, her boss, wasn't buying it.

Jerry wouldn't let me go to the emergency room after the heavy vaginal bleeding I had been experiencing suddenly got way worse. I went over his head and got permission to go. I called my mom, told her to meet me in the ER. The ER nurse said he'd never seen so much blood. An ER nurse said this. It's determined I need a couple of blood transfusions and will be admitted.

Jenny's mom calls Jerry on her behalf.

My mom calls Jerry, who then proceeds to tell her that it's just stress, and I NEED TO GET BACK TO WORK. At this point, I couldn't even lift my own head up, but sure, I can take a bus across town and go back to work.

Doctors told Jenny they found a large growth that needed a biopsy.

They found a large growth that needed a biopsy. Jerry kept insisting that it couldn't be cancer because I'd be tired and losing weight. I had lost eight pounds in a week and went to bed the minute I got home. I was still recovering from the procedure when Jerry called me to let me know I was fired for taking too much time off. Five days later, I was diagnosed with cancer. Fuck you, Jerry. Fuck you.

Think about that for a second.

Jenny is losing blood rapidly. There's a good chance she's dying. Her boss can't be bothered to verify that she's okay. While she's in the hospital fighting for her life, he fires her for taking "too much time off."

This situation is obviously one to walk away from.

But it's not always so clear cut.

Sometimes you're in a situation where there are both positive and negative aspects of the job. With situations like these, the decision isn't always as obvious as we'd like it to be. Walk away from a promising position prematurely and you may burn bridges and destroy any goodwill you've built up.

What's the best way to know?

If you focus on the signs, you may be right, but too much uncertainty means you may handle things in a way that's less than ideal.

There's a better way.

Focus your attention on the right set of principles and you'll have the framework you need to decide when it's time to quit your job (or not). Let's take a look at these principles.

Principle #1: Your Job Violates Your Boundaries

Art Markman, professor of psychology at the University of Texas at Austin, shared a story relayed to him by a reader.

My mother suddenly passed away on a Friday evening. On the Sunday my boss showed up to my house with groceries and flowers and suggested that I go into the office on Monday for the quarterly meeting. After all, "this was a pivotal time" for the business.

I didn't go in the next day because of my overwhelming grief. I later found out that I was to receive an award on that Monday. Was this a career-limiting move, or is my boss not clear on boundaries?

This boss meant well, but his concern was self-serving and not at all in the best interests of his employee. What's worse, he may not have understood why it was a problem if his employee spoke to him about it later on.

This is why you need boundaries.

Boundaries act as gatekeepers in a variety of professional, emotional, social, and physical situations. Here's why you need boundaries and why they're so important:

- They protect you from abusive or toxic behavior (for example, managers or co-workers making inappropriate demands, verbal abuse, inappropriate conversation, or details that are immoral or infringe on your values).
- Boundaries define how others can or should communicate with you.
- Good boundaries protect you from sacrificing your autonomy, freedom of choice, family, identity, integrity or contacts.
- Great boundaries attract more of the people, projects and opportunities you want. When set up appropriately, these boundaries repel the items you don't want.

How do you set great boundaries?

It's a simple process. First, determine what you do and don't want. Next, figure out what your employer wants or doesn't want.

Sounds simple, right?

Figuring out what you want is really about asking the right question (see above). Figuring out what your employer wants is really about identifying criteria that are documented in some way. That's important, because it gives you the leverage you need to protect yourself (legally) against any inappropriate behavior.

But setting boundaries is risky.

Consider this common idea: Tell your boss No and you could get fired (or worse). If developers are smart, they'll avoid biting the hand that feeds them.

This rationale is trash.

If you set a boundary, it will be tested. Those around you — your manager, co-workers, other developers — will attempt to back you into a corner. You're going to have to find appropriate ways to rise to the challenge and enforce your boundaries.

Why go to the trouble? Because boundaries limit the damage from the other four principles discussed in this article. If you don't have strong boundaries, you'll face the problems discussed here. It doesn't matter if you're employed or you own your own business.

If you have poor boundaries, you won't be able to achieve your goals.

Principle #2: Your Job Goes Against Your Goals

Reddit user YellowRoses had goals until their boss torpedoed those goals.

(See the Reddit post “How do you deal with feeling disrespected by your boss?” on r/careerguidance.)

They were promised a promotion. They negotiated with their boss and earned a verbal agreement regarding their promotion, only for said promotion to be denied with an "Oh, that's not happening now." No explanation or attempts at justifying the rescinded promise.

What if your employer isn't aware of your goals? Still doesn't matter. If you have a specific goal in mind, you're responsible for that goal. Not your co-workers, employer, or family members. Are you pushing for the director's position that's opened up? Prefer to stay in your current role but receive the same pay as managers? It's on you.

This seems obvious, until you realize most people wait to be chosen. They wait for someone to approve of their audition, accept them, recruit them, promote them, extend a helping hand, etc. Which goes nowhere fast.

To be clear, it's generally a good idea to discuss your goals with your employer, provided that you're in a good place to do so. If your employer laughs at you, mocks your goal, or decides they're unwilling to help you meet said goals, it's on you to make it happen.

The post 5 Signs It’s Time to Quit Your Job appeared first on SitePoint.

The Evolution of JavaScript Tooling: A Modern Developer’s Guide

Dec 12, 2019

Description:

The Evolution of JavaScript Tooling: A Modern Developer’s Guide

This article was created in partnership with Sencha. Thank you for supporting the partners who make SitePoint possible.

JavaScript application source code has traditionally been hard to understand, due to code being spread across JavaScript, HTML, and CSS files, as well as events and data flowing through a number of non-intuitive paths. Like all software, the JavaScript development environment includes bundlers, package managers, version control systems, and test tools. Each of these comes with its own learning curve.

Inconsistencies and incompatibilities between browsers have historically required various tweaks and special cases to be sprinkled around the code, and very often fixing a bug in one browser breaks something on another browser. As a result, development teams struggle to create and maintain high quality, large-scale applications while the demand for what they do soars, especially at the enterprise-application level where business impact has replaced “How many lines of code have you laid down?”

To deal with this complexity, the open-source community as well as commercial companies have created various frameworks and libraries, but these frameworks and libraries have become ever more complicated as they add more and more features in an attempt to make it easier for the developer. Still, frameworks and libraries offer significant advantages to developers and can also organize and even reduce complexity.

This guide discusses some of the more popular frameworks and libraries that have been created to ease the burden of writing complex user interface (UI) code and how enterprise applications, especially data-intensive apps, can benefit from using these frameworks and UI components to deliver applications faster, with better quality, and yet stay within any development shop’s budget.

Complexity of Modern Web Development

Andrew S. Tanenbaum, the inventor of Minix (a precursor to Linux often used to bring up new computer chips and systems), once said, “The nice thing about standards is that you have so many to choose from.” Browsers followed a number of standards, but not all of them, and many just went their own way.

That’s where the trouble started — the so-called “Browser Wars.” How each browser displayed the data from these websites could be quite different. Browser incompatibilities still exist today, and one could say they are a little worse because the Web has gone mobile.

Developing in today’s world means being as compatible as possible with as many of the popular web browsers as possible, including mobile and tablet.

What about mobile?

Learning Java for Android can be difficult if the developer hasn't been brought up with Java. For Apple iOS, Objective-C is a mashup of the C programming language and Smalltalk, which is different but not entirely alien to C++ developers. (After all, object-oriented concepts are similar.) But given the arrival of Apple's Swift and a new paradigm, “protocol-oriented programming,” Objective-C has a questionable future.

In contrast, the JavaScript world, through techniques such as React Native or Progressive Web Apps, allows for development of cross-platform apps that look like native apps and are performant. From a business perspective, an enterprise can gain a number of advantages by only using one tool set to build sophisticated web and mobile apps.

Constant change causes consternation

The JavaScript world is particularly rich in how much functionality and how many packages are available. The number is staggering. The number of key technologies that help developers create applications faster is also large, but the rate of change in this field causes what's called “JavaScript churn,” or just churn. For example, when Angular moved from version 1 to 2 (and again from 2 to 4, skipping version 3), the incompatibilities required serious porting time. Until we embrace emerging Web Components standards, not everything will interoperate with everything else.

One thing that can be said is that investing in old technologies not backed by standards can be career-limiting, thus the importance of ECMA and ECMAScript standards as well as adherence to more or less common design patterns (most programming is still, even to this day, maintenance of existing code rather than fresh new starts and architectures). Using commonly used design patterns like Model-View-Controller (MVC), Model-View-Viewmodel (MVVM), and Flux means that your code can be modified and maintained more easily than if you invent an entirely new paradigm.

Having large ecosystems and using popular, robust, well-supported tools is one strategy proven year after year to yield positive results for the company and the developer’s career, and having industry-common or industry-standard libraries means that you can find teammates to help with the development and testing. Modern development methodologies practically demand the use of frameworks, reusable libraries, and well-designed APIs and components.

Popularity of Modern Frameworks and Libraries

Stack Overflow, an incredibly popular developers website used for questions and answers (#57 according to Alexa as of January 2019), tracks a great deal of data on the popularity of various technologies and has become a go-to source for developers. Their most recent survey continued to show the incredible popularity of both JavaScript and JavaScript libraries and frameworks:

NPM Downloads of Popular Front-end Libraries. (Source)

According to Stack Overflow, based on the type of tags assigned to questions, the top eight most discussed topics on the site are JavaScript, Java, C#, PHP, Android, Python, jQuery and HTML — not C, C++, or more exotic languages like Ocaml or Haskell. If you’re building websites, you’re very likely going to want to use technologies that are popular because the number of open-source and commercial/supported products provides you with the ability to code and test more quickly, resulting in faster time to market.

What this means to developers is that the JavaScript world continues to lead all others in the number of developers, and while older technologies like jQuery are still popular, clearly React and Angular are important and continue growing. The newcomer, Vue, is also becoming more and more popular.

Selecting Angular, React, or Vue

Angular versus React versus Vue: there are so many open-source tools. Add to that libraries like Backbone.js and a hundred others. How can developers update their knowledge of so many? Which one should they choose? To some extent, this decision is like choosing a text editor: it's a personal choice, it's fiercely defended, and in the end each might actually work for you.

If your main concern is popularity so you don’t get boxed into learning a complicated, rich programming environment only to see support wither away, then React is clearly “winning” as the long-term trend line shows. But popularity is only one attribute in a long shopping list of important decision factors.

Long-term trend lines of various popular frameworks and libraries. (Source)

The post The Evolution of JavaScript Tooling: A Modern Developer’s Guide appeared first on SitePoint.

Understanding and Using rem Units in CSS

Dec 12, 2019

Description:

CSS units have been the subject of several articles here on SitePoint (such as A Look at Length Units in CSS, The New CSS3 Relative Font Sizing Units, and The Power of em Units in CSS). In this article, we increase the count by having an in-depth look at rem units, which have excellent browser support and a polyfill if you need support for old IE.

This article was updated in December, 2019 to reflect the current state of rem unit sizing with CSS. For more on CSS font and text properties, read our book, CSS Master, 2nd Edition.

What Are rem Units?

You might have encountered the term “R.E.M.” before while listening to the radio or your music player. Unlike their musical counterparts, named for the “Rapid Eye Movement” during deep sleep, in CSS rem stands for “root em”. They won’t make you lose your religion nor believe in a man on the moon. What they can do is help you achieve a harmonious and balanced design.

According to the W3C spec the definition for one rem unit is:

Equal to the computed value of font-size on the root element. When specified on the font-size property of the root element, the rem units refer to the property’s initial value.

This means that 1rem equals the font size of the html element (which for most browsers has a default value of 16px).

Rem Units vs. Em Units

The main problem with em units is that they are relative to the font size of their own element. As such they can cascade and cause unexpected results. Let’s consider the following example, where we want lists to have a font size of 12px, in the case where the root font size is the default 16px:

[code language="css"]
html {
font-size: 100%;
}

ul {
font-size: 0.75em;
}
[/code]

If we have a list nested inside another list, the font size of the inner list will be 75% of the size of its parent (in this case 9px). We can still overcome this problem by using something along these lines:

[code language="css"]
ul ul {
font-size: 1em;
}
[/code]

This does the trick. However, we still have to pay a lot of attention to situations where nesting gets even deeper.

With rem units, things are simpler:

[code language="css"]
html {
font-size: 100%;
}

ul {
font-size: 0.75rem;
}
[/code]

As all the sizes are referenced from the root font size, there is no more need to cover the nesting cases in separate declarations.

Font Sizing with Rem Units

One of the pioneers of using rem units for font sizing is Jonathan Snook, with his Font sizing with REM article back in May 2011. Like many other CSS developers, he had to face the problems that em units bring in complex layouts.

At that time, older versions of IE still had large market shares and they were unable to zoom text that was sized with pixels. However, as we saw earlier, it is very easy to lose track of nesting and get unexpected results with em units.

The main issue with using rem for font sizing is that the values are somewhat difficult to use. Let’s see an example of some common font sizes expressed in rem units, assuming, of course, that the base size is 16px:

10px = 0.625rem
12px = 0.75rem
14px = 0.875rem
16px = 1rem (base)
18px = 1.125rem
20px = 1.25rem
24px = 1.5rem
30px = 1.875rem
32px = 2rem

As we can see, these values are not very convenient for making calculations. For this reason, Snook used a trick called “62.5%”. It was not a new discovery, by any means, as it was already used with em units:

[code language="css"]
body { font-size:62.5%; } /* =10px */
h1 { font-size: 2.4em; } /* =24px */
p { font-size: 1.4em; } /* =14px */
li { font-size: 1.4em; } /* =14px? */
[/code]

As rem units are relative to the root element, Snook’s variant of the solution becomes:

[code language="css"]
html { font-size: 62.5%; } /* =10px */
body { font-size: 1.4rem; } /* =14px */
h1 { font-size: 2.4rem; } /* =24px */
[/code]

One also had to take into account the other browsers that didn’t support rem. Thus the code from above would have actually been written this way:

[code language="css"]
html {
font-size: 62.5%;
}

body {
font-size: 14px;
font-size: 1.4rem;
}

h1 {
font-size: 24px;
font-size: 2.4rem;
}
[/code]

While this solution seems to be close to the status of a “golden rule”, there are people who advise against using it blindly. Harry Roberts has written his own take on the use of rem units. In his opinion, while the 62.5% solution makes calculation easier (as the font sizes in px are 10 times their rem values), it ends up forcing developers to explicitly rewrite all the font sizes in their website.
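As an aside (this isn’t from the original article), if you’d rather script these conversions than memorize them, a few lines of JavaScript will do it. The default base of 16px is an assumption matching the common browser default:

[code language="js"]
// Convert a pixel value to a rem string, assuming a 16px root font size
const pxToRem = (px, base = 16) => `${px / base}rem`;

console.log(pxToRem(24));     // "1.5rem"
console.log(pxToRem(14));     // "0.875rem"
console.log(pxToRem(14, 10)); // "1.4rem" (with the 62.5% trick applied)
[/code]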

The post Understanding and Using rem Units in CSS appeared first on SitePoint.

How We Can Solve the Cryptocurrency Energy Usage Problem

Dec 10, 2019

Description:

Cryptocurrencies and Energy Usage: Problems and Solutions

Bitcoin is still the most important cryptocurrency people know about, and it serves as the entry point of the crypto space. However, every innovative project has to pay its price. For Bitcoin, it is its high carbon footprint created by mining.

Bitcoin mining works by solving cryptographic puzzles, a process also referred to as Proof of Work (PoW). The miner that’s first to find the solution receives a Bitcoin reward. However, this race towards finding the solution comes with high energy usage, as it’s a resource-intensive process requiring a lot of electricity.

Currently, Bitcoin mining uses 58.93 TWh per year. An online tool by the University of Cambridge showed that Bitcoin uses as much energy as the whole of Switzerland. More important is the carbon footprint of Bitcoin: generating the electricity that powers the Bitcoin network produces 22 megatons of CO2 per year, comparable to the footprint of a city like Kansas City (US).

This article will cover the following topics:

how the amount of energy consumed by each blockchain project differs depending on the implemented consensus algorithm
possible solutions for the high energy usage of Bitcoin
the effect of the Bitcoin network using a lot of excess and green energy.

To get started, let’s discuss whether Bitcoin’s energy usage really is a problem.

Are We Thinking the Wrong Way about Bitcoin’s Energy Usage?

Let’s take a moment to think about where the energy for Bitcoin mining comes from. It’s worth questioning whether the electricity the Bitcoin nodes use actually harms the environment.

Many countries have an excess of electricity, especially when it comes to green energy solutions. The energy coming from green solutions like wind farms or solar plants is often hard to store or sell when the supply outweighs demand. This is true for many countries, especially China, which is responsible for 70 percent of the world’s Bitcoin mining.

As Bitcoin mining requires a lot of energy, node operators look for countries with cheap electricity prices. Reuters reported that “wasted [Chinese] wind power amounted to around 12 percent of total generation in 2017”. This means that node operators often end up in countries with an excess of energy, where Bitcoin mining plays an important role in balancing the energy market. Besides that, without Bitcoin mining, this excess electricity would simply be wasted.

Is it safe to say that Bitcoin does not contribute to environmental CO2 production? No, it certainly does contribute. However, the energy usage and CO2 pollution Bitcoin is actually responsible for are much lower than we tend to think.

Think about making a credit card payment. Every time you pull out your credit card to make a transaction, you also contribute to environmental pollution. You just don’t see the gigantic server farms, up to 100,000 square feet in size, that store and process all your transactions, not to mention other things like offices, payment terminals, or bank vaults.

It’s easy to attack Bitcoin for its energy usage, so it’s important to know that there’s also an enormous hidden energy cost behind the VISA network. On the other hand, the Bitcoin network only processes around 100 million transactions per year, whereas the financial industry reaches up to 500 billion transactions per year.

The post How We Can Solve the Cryptocurrency Energy Usage Problem appeared first on SitePoint.

It’s Time to Start Making Your Web Apps Reactive

Dec 10, 2019

Description:

It's Time to Start Making Your Web Apps Reactive

This article was created in partnership with Manning Publications. Thank you for supporting the partners who make SitePoint possible.

You’ve heard of the principle of “survival of the fittest”, and you know that it’s especially true in web development. Your users expect split-second performance and bug-free interfaces — and if you can’t deliver them, you can be sure they’ll go straight to a competitor who can. But when it comes to survival, it’s important to remember the full principle of evolution: the best way to thrive is to be adaptable to change.

That’s where reactive programming comes in. Reactive applications are designed to adapt to their environments from the start. Right from the beginning, you’re building something made to react to load, react to failure, and react to your users. Whatever production throws at your application, reactive programming means it can handle it.

How does reactive programming achieve this? It embeds sound programming principles into your application right from the very beginning.

Reactive Applications Are Message-driven

In reactive programming, data is pushed, not pulled. Rather than making requests for data that may or may not be available, recipients await the arrival of messages, which are sent only when data is ready. The designs of sender and recipient aren’t affected by how you propagate your messages, so you can design your system in isolation without needing to worry about how messages are transmitted. This also means that data recipients only consume resources when they’re active, rather than bogging down your application with requests for unavailable data.
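To make this concrete, here’s a minimal sketch (not from the original article) of the push model using Node’s built-in EventEmitter. The channel and event name are made up for illustration; the point is that the consumer never polls, it simply reacts when a message arrives:

const EventEmitter = require('events');

const channel = new EventEmitter();

// The recipient consumes resources only when a message actually arrives
channel.on('data-ready', (payload) => {
  console.log('Received:', payload);
});

// The sender pushes data when (and only when) it's available
setTimeout(() => channel.emit('data-ready', { user: 'jim' }), 1000);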

Reactive Applications Are Elastic

Reactive applications are designed to elastically scale up or scale down, based on the amount of workload they have to deal with. A reactive system can both increase or decrease the resources it gives to its inputs, working without bottlenecks or contention points to more easily shard components and then distribute resources among them. Not only does this save you money on unused computing power, but even more importantly, it means that your application can easily service spikes in user activity.

Reactive Applications Are Responsive

Reactive applications must react to their users, and to their users' behavior. It’s essential that the system responds in a timely manner, not only for improved user experience, but so that problems can be quickly identified and (hopefully!) solved. With rapid response times and a consistent quality of service, you’ll find that your application has simpler error handling as well as much greater user confidence.

Reactive Applications Are Resilient

Reactive applications need to respond, adapt, and be flexible in the face of failure. Because a system can fail at any time, reactive applications are designed to boost resiliency through distribution. If there's a single point of failure, it’s just that — singular. The rest of your reactive application keeps running, because it’s been built to work without relying on any one part.

Further Resources

Reactive programming can be challenging to master. Fortunately, there are lots of resources to help you out. Some of the best are the books and videos of Manning Publications, publishers of the highest-quality tech books and videos you can buy today.

Exploring Modern Web Development is a 100% free guide to the most common tools for reactive programming. With this well-rounded sampler, you’ll have a firm foundation for developing awesome web apps with all the modern reactive features and functions today’s users expect.

SitePoint users can get 40% off top Manning reactive programming and web development books and videos with the coupon code NLSITEPOINT40. Check out popular bestsellers here.

The post It’s Time to Start Making Your Web Apps Reactive appeared first on SitePoint.

The Real Future of Remote Work is Asynchronous

Dec 5, 2019

Description:

I’ve been working remotely for over a decade – well before the days of tools like Slack or Zoom. In some ways, it was easier back then: you worked from wherever you were and had the space to manage your workload however you wanted. If you desired to go hardcore creative mode at night, sleep in, then leisurely read fiction over brunch, you could.

Now, in the age of the “green dot” or “presence prison,” as Jason Fried calls it, working remotely can be more suffocating than in-person work. The freedom that we worked hard to create — escaping the 9-to-5 — has now turned into constant monitoring, with the expectation that we are on, accessible, productive, and communicative 24/7.

I see this in job listings for remote roles. Companies frequently champion remote work, proudly advertising their flexible cultures, only to then require that candidates be based within 60 minutes of the Pacific Time Zone, that the hours are set, and that standup is at 8:30am daily. One of the benefits of remote work is that it brings the world closer together and creates a level playing field for the world’s best talent. Whether you were in Bengaluru or Berlin, you could still work with a VC-backed, cash-rich startup in San Francisco earning a solid hourly rate. If remote slowly turns into a way of working in real time with frequent face time, we will see less of this.

And let’s not forget trust: the crux of remote culture. Companies create tools that automatically record your screen at intervals to show management or clients that you’re delivering. I founded a freelance marketplace called CloudPeeps, and not recording your screen, as Upwork does, is one way we attract a different caliber of indie professional.

You can have more freedom in an office. From my beige cubicle at one of my first roles, I witnessed a colleague plan a wedding over the course of many months, including numerous calls to vendors and 20 tabs open for research. Most of the team was none the wiser – this wouldn’t be the case with remote today.

At the heart of this friction is the demand for real-time, synchronous communication. If we champion asynchronous as the heart of remote, what does the future of remote look like?

The post The Real Future of Remote Work is Asynchronous appeared first on SitePoint.

7 Ways Developers Can Contribute to Climate Action

Dec 4, 2019

Description:

7 Ways Developers Can Contribute to Climate Action

Whether you’ve just started out as a software engineer or you’ve been at it for decades, you too can play a role in helping to positively impact climate.

When people first consider this, they tend to think about the impact writing efficient code will have. Of course, you should always write efficient, elegant code. But unless the code you’re creating is going to be used by millions of people, it may not be where you can have the biggest impact from a climate perspective. (Code being used by millions or billions of people is probably highly optimized anyway!)

In this article, we'll look at seven other ways you can help.

Choose Where You Spend Your Career

Being an engineer means you have one of the most sought-after, transferable occupations on the planet. In virtually any city in the world, you'll be in demand and probably well paid, so you have plenty of options. Choosing to work in a place that's at the intersection of your cares and your code is one of the easiest ways you can have an impact. Engineering is also one of the few careers where the job can be done remotely, and there's a growing list of companies focused on hiring people to work remotely.

Find Time to Contribute to Open-source Projects

Open source enables us all to benefit from a collective effort and shared knowledge, so the benefits are already clear. But what you may not be aware of is the mass of open-source projects specifically targeted at helping the environment. Open source also powers some of the biggest sites on the Internet, so you may also find your code being used at that billions-of-people scale mentioned earlier. While it's easy to find projects you can work on via a quick Google search, this article highlights a few.

Apply Your Skills to Non-profits

A lot of the work being done to combat or deal with the impacts of climate change is being done by the non-profit sector, and the non-profit sector perennially lacks capital and talent. When people think of volunteering, they tend to think of painting a shed or handing out food at a shelter, but you can potentially create a bigger and more lasting impact by applying your skills and experience.

I worked with a non-profit to help design, set up and configure Salesforce's service (free for non-profits) so they could run more efficiently and at a higher scale. Hour for hour, this was the best way I could help them have a bigger impact.

Influence the Way the Product is Designed

With the rise of agile, squads (pioneered by Spotify) and cross-functional teams generally, the dynamic within the team has changed. Engineers now have a seat at the table to drive what the software does, how it works and even the end-customer problems it solves. This means that, as an engineer, you can either walk into the room and be told what is being built, or you can stand up and help drive that outcome by considering the climate impact of a design decision. A great example might be setting the default shipping option to a lower-impact one in an ecommerce site, or Google Maps defaulting to walking directions rather than driving.

The post 7 Ways Developers Can Contribute to Climate Action appeared first on SitePoint.

How to Divert Traffic Using IP2Location in a Next.js Website

Dec 4, 2019

Description:

How to Divert Traffic Using IP2Location in a Next.js Website

This article was created in partnership with IP2Location. Thank you for supporting the partners who make SitePoint possible.

In a world where online commerce has become the norm, we need to build websites that are faster, more user-friendly and more secure than ever. In this article, you’ll learn how to set up a Node.js-powered website that’s capable of directing traffic to relevant landing pages based on a visitor's country. You'll also learn how to block anonymous traffic (e.g. Tor) in order to eliminate risks coming from such networks.

In order to implement these features, we'll be using the IP2Proxy web service provided by IP2Location, a Geo IP solutions provider. The web service is a REST API that accepts an IP address and responds with geolocation data in JSON format.

ip2location website

Here are some of the fields that we'll receive:

countryName
cityName
isProxy
proxyType
etc.

We'll use Next.js to build a website containing the following landing pages:

Home Page: API fetching and redirection will trigger from this page
Landing Page: supported countries will see the product page in their local currency
Unavailable Page: other countries will see this page with an option to join a waiting list
Abuse Page: visitors using Tor networks will be taken to this page

Now that you're aware of the project plan, let's see what you need to get started.

Prerequisites

On your machine, I would highly recommend the following:

Latest LTS version of Node.js (v12)
Yarn

An older version of Node.js will do, but the most recent LTS (long-term support) version contains performance and debugging improvements in the area of async code, which we'll be dealing with. Yarn isn't necessary, but you'll benefit from its faster performance if you use it.

I’m also going to assume you have a good foundation in:

React
React Hooks

As mentioned earlier, we'll be using Next.js to build our website. If you're new to it, you can follow their official interactive tutorial to quickly get up to speed.

IP2Location + Next.js Project Walkthrough

Project Setup

To set up the project, simply launch the terminal and navigate to your workspace. Execute the following command:

npx create-next-app

Feel free to give your app any name. I've called mine next-ip2location-example. After installation is complete, navigate to the project's root and execute yarn dev. This will launch the Node.js dev server. If you open your browser and navigate to localhost:3000, you should see a page with the header “Welcome to Next.js”. This should confirm that we have a working app that runs without errors. Stop the app and install the following dependencies:

yarn add next-compose-plugins dotenv-load next-env @zeit/next-css bulma isomorphic-unfetch

We'll be using the Bulma CSS framework to add out-of-the-box styling for our site. Since we'll be connecting to an API service, we'll set up an .env file to store our API key. Do note that this file should not be stored in a repository. Next, create the file next.config.js at the root of the project and add the following code:

const withPlugins = require('next-compose-plugins')
const css = require('@zeit/next-css')
const nextEnv = require('next-env')
const dotenvLoad = require('dotenv-load')

dotenvLoad()

module.exports = withPlugins([nextEnv(), [css]])

The above configuration allows our application to read the .env file and load values. Do note that the keys will need to have the prefix NEXT_SERVER_ in order to be loaded in the server environment. Visit the next-env package page for more information. We'll set the API key in the next section. The configuration also gives our Next.js app the capability to pre-process CSS code via the @zeit/next-css package, which will allow us to use the Bulma CSS framework in our application. Do note that we'll need to import Bulma's CSS into our Next.js application. I'll soon show you where to do this.

Obtaining an API Key for the IP2Proxy Web Service

As mentioned earlier, we'll need to convert a visitor's IP address into information we can use to redirect or block traffic. Simply head to the following link and sign up for a free trial key:

IP2Proxy Detection Web Service

ip2proxy trial key packages

Once you sign up, you'll receive the free API key via email. Create an .env file and place it at the root of your project folder. Copy your API key to the file as follows:

NEXT_SERVER_IP2PROXY_API=<place API key here>

This free key will give you 1,000 free credits. At a minimum, we'll need the following fields for our application to function:

countryName
proxyType

If you look at the pricing section on the IP2Proxy page, you'll note that the PX2 package will give us the required response. This means each query will cost us two credits. Below is a sample of how the URL should be constructed:

http://api.ip2proxy.com/?ip=8.8.8.8&key=demo&package=PX2

You can also submit the URL query without the IP; the service will then use the IP address of the machine that sent the request. Alternatively, the PX8 package (the top-most package of the IP2Proxy Detection Web Service) returns all the available fields, such as isp and domain:

http://api.ip2proxy.com/?key=demo&package=PX8
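As a rough illustration (this isn't code from the tutorial; the real fetch logic comes later), here's how a query might look from Node using the isomorphic-unfetch package we installed earlier. The lookupIp helper is hypothetical, and the key is read from the NEXT_SERVER_IP2PROXY_API variable we'll set up below:

const fetch = require('isomorphic-unfetch')

// Hypothetical helper: query the IP2Proxy web service for a given IP
async function lookupIp(ip) {
  const key = process.env.NEXT_SERVER_IP2PROXY_API
  const res = await fetch(
    `http://api.ip2proxy.com/?ip=${ip}&key=${key}&package=PX2`
  )
  return res.json() // e.g. { countryName: '...', proxyType: '...' }
}

lookupIp('8.8.8.8').then(console.log)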

In the next section, we'll build a simple state management system for storing the proxy data which will be shared among all site pages.

Building Context API in Next.js

Create the file context/proxy-context and insert the following code:

import React, { useState, useEffect, useRef, createContext } from 'react'

export const ProxyContext = createContext()

export const ProxyContextProvider = (props) => {
  const initialState = {
    ipAddress: '0.0.0.0',
    countryName: 'Nowhere',
    isProxy: false,
    proxyType: ''
  }

  // Declare shareable proxy state
  const [proxy, setProxy] = useState(initialState)
  const prev = useRef()

  // Read and write proxy state to local storage
  useEffect(() => {
    if (proxy.countryName == 'Nowhere') {
      const localState = JSON.parse(localStorage.getItem('ip2proxy'))
      if (localState) {
        console.info('reading local storage')
        prev.current = localState.ipAddress
        setProxy(localState)
      }
    } else if (prev.current !== proxy.ipAddress) {
      console.info('writing local storage')
      localStorage.setItem('ip2proxy', JSON.stringify(proxy))
    }
  }, [proxy])

  // Share the proxy state (and its setter) with all child components
  return (
    <ProxyContext.Provider value={[proxy, setProxy]}>
      {props.children}
    </ProxyContext.Provider>
  )
}

Basically, we’re declaring a sharable state called proxy that will store data retrieved from the IP2Proxy web service. The API fetch query will be implemented in pages/index.js. The information will be used to redirect visitors to the relevant pages. If the visitor tries to refresh the page, the saved state will be lost. To prevent this from happening, we're going to use the useEffect() hook to persist state in the browser's local storage. When a user refreshes a particular landing page, the proxy state will be retrieved from the local storage, so there's no need to perform the query again. Here's a quick sneak peek of Chrome's local storage in action:

chrome local storage

Tip: In case you run into problems further down this tutorial, clearing local storage can help resolve some issues.

Displaying Proxy Information

Create the file components/proxy-view.js and add the following code:

import React, { useContext } from 'react'
import { ProxyContext } from '../context/proxy-context'

const style = { padding: 12 }

const ProxyView = () => {
  const [proxy] = useContext(ProxyContext)
  const { ipAddress, countryName, isProxy, proxyType } = proxy

  return (
    <div className="box center" style={style}>
      <div className="content">
        <ul>
          <li>IP Address : {ipAddress}</li>
          <li>Country : {countryName}</li>
          {/* Convert the boolean to a string, since JSX won't render `false` */}
          <li>Proxy : {String(isProxy)}</li>
          <li>Proxy Type : {proxyType}</li>
        </ul>
      </div>
    </div>
  )
}

export default ProxyView

This is simply a display component that we'll place at the end of each page. We're only creating this to confirm that our fetch logic and application's state is working as expected. You should note that the line const [proxy] = useContext(ProxyContext) won't run until we've declared our Context Provider at the root of our application. Let's do that now in the next section.

Implementing Context API Provider in Next.js App

Create the file pages/_app.js and add the following code:

import React from 'react'
import App from 'next/app'
import 'bulma/css/bulma.css'
import { ProxyContextProvider } from '../context/proxy-context'

export default class MyApp extends App {
  render() {
    const { Component, pageProps } = this.props
    return (
      <ProxyContextProvider>
        <Component {...pageProps} />
      </ProxyContextProvider>
    )
  }
}

The _app.js file is the root component of our Next.js application where we can share global state with the rest of the site pages and child components. Note that this is also where we're importing CSS for the Bulma framework we installed earlier. With that set up, let's now build a layout that we'll use for all our site pages.

The post How to Divert Traffic Using IP2Location in a Next.js Website appeared first on SitePoint.

10 Zsh Tips & Tricks: Configuration, Customization & Usage

Dec 3, 2019

Description:

As web developers, the command line is becoming an ever more important part of our workflow. We use it to install packages from npm, to test API endpoints, to push commits to GitHub, and lots more besides.

My shell of choice is zsh. It's a highly customizable Unix shell that packs some very powerful features, such as killer tab completion, clever history, remote file expansion, and much more.

In this article I'll show you how to install zsh, then offer ten tips and tricks to make you more productive when working with it.

This is a beginner-level guide that can be followed by anybody (even Windows users, thanks to Windows Subsystem for Linux). However, in light of Apple's announcement that zsh is now the standard shell on macOS Catalina, Mac users might find it especially helpful.

Let's get started.

Installation

I don't want to offer in-depth installation instructions for each operating system, just some general guidelines instead. If you get stuck installing zsh, there's plenty of help available online.

At the time of writing the current zsh version is 5.7.1.

macOS

Most versions of macOS ship with zsh pre-installed. You can check whether this is the case (and, if so, which version you're running) using the command zsh --version. If the version is 4.3.9 or higher, you should be good to go (we'll need at least this version to install Oh My Zsh later on). If not, you can follow this guide to install a more recent version of zsh using Homebrew.

Once installed, you can set zsh as the default shell using: chsh -s $(which zsh). After issuing this command, you'll need to log out, then log back in again for the changes to take effect.

If at any point you decide you don't like zsh, you can revert to Bash using: chsh -s $(which bash).

Linux

On Ubuntu-based distros, you can install zsh using: sudo apt-get install zsh. Once the installation completes, you can check the version using zsh --version, then make zsh your default shell using chsh -s $(which zsh). You'll need to log out, then log back in for the changes to take effect.

As with macOS, you can revert back to Bash using: chsh -s $(which bash).

If you are running a non-Ubuntu based distro, then check out the instructions for other distros.

Windows

Unfortunately, this is where things start to get a little complicated. Zsh is a Unix shell and for it to work on Windows, you'll need to activate Windows Subsystem for Linux (WSL), an environment in Windows 10 for running Linux binaries.

There are various tutorials online explaining how to get up and running with zsh in Windows 10. I found these two to be up-to-date and easy to follow:

How to Install and Use the Linux Bash Shell on Windows 10 - follow this one first to install WSL
How to Use Zsh (or Another Shell) in Windows 10 - follow this one second to install zsh

Note that it is also possible to get zsh running with Cygwin. Here are instructions for doing that.

First Run

When you first open zsh, you'll be greeted by the following menu.

The post 10 Zsh Tips & Tricks: Configuration, Customization & Usage appeared first on SitePoint.

Building a Habit Tracker with Prisma 2, Chakra UI, and React

Dec 2, 2019

Description:

Building a Habit Tracker with Prisma, Chakra UI, and React

In June 2019, Prisma 2 Preview was released. Prisma 1 changed the way we interact with databases. We could access databases through plain JavaScript methods and objects without having to write the query in the database language itself. Prisma 1 acted as an abstraction in front of the database so it was easier to make CRUD (create, read, update and delete) applications.

Prisma 1 architecture looked like this:

Prisma 1 architecture

Notice that there’s an additional Prisma server required for the back end to access the database. The latest version doesn’t require an additional server. It's called the Prisma Framework (formerly known as Prisma 2), which is a complete rewrite of Prisma. The original Prisma was written in Scala, so it had to be run through the JVM and needed an additional server to run. It also had memory issues.

The Prisma Framework is written in Rust so the memory footprint is low. Also, the additional server required while using Prisma 1 is now bundled with the back end, so you can use it just like a library.

The Prisma Framework consists of three standalone tools:

Photon: a type-safe and auto-generated database client ("ORM replacement")
Lift: a declarative migration system with custom workflows
Studio: a database IDE that provides an Admin UI to support various database workflows.

Prisma 2 architecture

Photon is a type-safe database client that replaces traditional ORMs, and Lift allows us to create data models declaratively and perform database migrations. Studio allows us to perform database operations through a beautiful Admin UI.

Why use Prisma?

Prisma removes the complexity of writing database queries and simplifies database access in the application. By using Prisma, you can change the underlying database without having to change each and every query. It just works. Currently, it only supports MySQL, SQLite and PostgreSQL.

Prisma provides type-safe database access provided by an auto-generated Prisma client. It has a simple and powerful API for working with relational data and transactions. It allows visual data management with Prisma Studio.

Providing end-to-end type safety means developers can have confidence in their code, thanks to static analysis and compile-time error checks. The developer experience improves drastically when data types are clearly defined. Type definitions are the foundation for IDE features — like intelligent auto-completion or jump-to-definition.

Prisma unifies access to multiple databases at once (coming soon) and therefore drastically reduces complexity in cross-database workflows (coming soon).

It provides automatic database migrations (optional) through Lift, based on a declarative datamodel expressed using GraphQL's schema definition language (SDL).

Prerequisites

For this tutorial, you need a basic knowledge of React. You also need to understand React Hooks.

Since this tutorial is primarily focused on Prisma, it’s assumed that you already have a working knowledge of React and its basic concepts.

If you don’t have a working knowledge of the above content, don't worry. There are tons of tutorials available that will prepare you for following this post.

Throughout the course of this tutorial, we’ll be using yarn. If you don’t have yarn already installed, install it from here.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node v12.11.1
npm v6.11.3
npx v6.11.3
yarn v1.19.1
prisma2 v2.0.0-preview016.2
react v16.11.0

Folder Structure

Our folder structure will be as follows:

streaks-app/
  client/
  server/

The client/ folder will be bootstrapped from create-react-app while the server/ folder will be bootstrapped from prisma2 CLI.

So you just need to create a root folder called streaks-app/ and the subfolders will be generated while scaffolding it with the respective CLIs. Go ahead and create the streaks-app/ folder and cd into it as follows:

$ mkdir streaks-app && cd $_

The Back End (Server Side)

Bootstrap a new Prisma 2 project

You can bootstrap a new Prisma 2 project by using the npx command as follows:

$ npx prisma2 init server

Alternatively, you can install the prisma2 CLI globally and run the init command. Then do the following:

$ yarn global add prisma2 // or npm install --global prisma2
$ prisma2 init server

Run the interactive prisma2 init flow & select boilerplate

Select the following in the interactive prompts:

Select Starter Kit
Select JavaScript
Select GraphQL API
Select SQLite

Once it has finished, the init command will have created an initial project setup in the server/ folder.

Now open the schema.prisma file and replace it with the following:

generator photon {
  provider = "photonjs"
}

datasource db {
  provider = "sqlite"
  url      = "file:dev.db"
}

model Habit {
  id     String @default(cuid()) @id
  name   String @unique
  streak Int
}

schema.prisma contains the data model as well as the configuration options.

Here, we specify that we want to connect to the SQLite datasource called dev.db, as well as target code generators like the photonjs generator.

Then we define the data model Habit, which consists of id, name and streak.

id is a primary key of type String with a default value of cuid().

name is of type String, but with a constraint that it must be unique.

streak is of type Int.

The seed.js file should look like this:

const { Photon } = require('@generated/photon')
const photon = new Photon()

async function main() {
  const workout = await photon.habits.create({
    data: {
      name: 'Workout',
      streak: 49,
    },
  })
  const running = await photon.habits.create({
    data: {
      name: 'Running',
      streak: 245,
    },
  })
  const cycling = await photon.habits.create({
    data: {
      name: 'Cycling',
      streak: 77,
    },
  })
  const meditation = await photon.habits.create({
    data: {
      name: 'Meditation',
      streak: 60,
    },
  })
  console.log({
    workout,
    running,
    cycling,
    meditation,
  })
}

main()
  .catch(e => console.error(e))
  .finally(async () => {
    await photon.disconnect()
  })

This file creates a number of new habits and adds them to the SQLite database.

Now go inside the src/index.js file and remove its contents. We'll start adding content from scratch.

First go ahead and import the necessary packages and declare some constants:

const { GraphQLServer } = require('graphql-yoga')
const {
  makeSchema,
  objectType,
  queryType,
  mutationType,
  idArg,
  stringArg,
} = require('nexus')
const { Photon } = require('@generated/photon')
const { nexusPrismaPlugin } = require('nexus-prisma')

Now let’s declare our Habit model just below it:

const Habit = objectType({
  name: 'Habit',
  definition(t) {
    t.model.id()
    t.model.name()
    t.model.streak()
  },
})

We make use of objectType from the nexus package to declare Habit.

The name parameter should be the same as defined in the schema.prisma file.

The definition function lets you expose a particular set of fields wherever Habit is referenced. Here, we expose the id, name and streak fields.

If we expose only the id and name fields, only those two will get exposed wherever Habit is referenced.

Below that, paste the Query constant:

const Query = queryType({
  definition(t) {
    t.crud.habit()
    t.crud.habits()
    // t.list.field('habits', {
    //   type: 'Habit',
    //   resolve: (_, _args, ctx) => {
    //     return ctx.photon.habits.findMany()
    //   },
    // })
  },
})

We make use of queryType from the nexus package to declare Query.

The Photon generator generates an API that exposes CRUD functions on the Habit model. This is what allows us to expose the t.crud.habit() and t.crud.habits() methods.

t.crud.habit() allows us to query any individual habit by its id or by its name. t.crud.habits() simply returns all the habits.

Alternatively, t.crud.habits() can also be written as:

t.list.field('habits', {
  type: 'Habit',
  resolve: (_, _args, ctx) => {
    return ctx.photon.habits.findMany()
  },
})

Both the above code and t.crud.habits() will give the same results.

In the above code, we make a field named habits. The return type is Habit. We then call ctx.photon.habits.findMany() to get all the habits from our SQLite database.

Note that the name of the habits property is auto-generated using the pluralize package. It's therefore recommended practice to name our models singular — that is, Habit and not Habits.

We use the findMany method on habits, which returns a list of objects. We get all the habits back because we’ve specified no condition inside findMany. You can learn more about how to add conditions inside findMany here.
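As an aside, here's what a filtered variant might look like. This is just a sketch assuming the preview-era Photon filter API; the longHabits field name and the gte operator usage are illustrative, not part of this app:

// Hypothetical field: only return habits with a streak of 100 days or more
t.list.field('longHabits', {
  type: 'Habit',
  resolve: (_, _args, ctx) => {
    return ctx.photon.habits.findMany({
      where: { streak: { gte: 100 } },
    })
  },
})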

Below Query, paste Mutation as follows:

const Mutation = mutationType({
  definition(t) {
    t.crud.createOneHabit({ alias: 'createHabit' })
    t.crud.deleteOneHabit({ alias: 'deleteHabit' })
    t.field('incrementStreak', {
      type: 'Habit',
      args: {
        name: stringArg(),
      },
      resolve: async (_, { name }, ctx) => {
        const habit = await ctx.photon.habits.findOne({
          where: {
            name,
          },
        })
        return ctx.photon.habits.update({
          data: {
            streak: habit.streak + 1,
          },
          where: {
            name,
          },
        })
      },
    })
  },
})

Mutation uses mutationType from the nexus package.

The CRUD API here exposes createOneHabit and deleteOneHabit.

createOneHabit, as the name suggests, creates a habit whereas deleteOneHabit deletes a habit.

createOneHabit is aliased as createHabit, so while calling the mutation we call createHabit rather than calling createOneHabit.

Similarly, we call deleteHabit instead of deleteOneHabit.

Finally, we create a field named incrementStreak, which increments the streak of a habit. The return type is Habit. It takes a name argument of type String, as specified in the args field. This argument is received in the resolve function as the second argument. We find the habit by calling ctx.photon.habits.findOne(), passing the name parameter in the where clause. We need this to get the current streak. Then, finally, we update the habit by incrementing the streak by 1.

Below Mutation, paste the following:

const photon = new Photon()

new GraphQLServer({
  schema: makeSchema({
    types: [Query, Mutation, Habit],
    plugins: [nexusPrismaPlugin()],
  }),
  context: { photon },
}).start(() =>
  console.log(
    `🚀 Server ready at: http://localhost:4000\n⭐️ See sample queries: http://pris.ly/e/js/graphql#5-using-the-graphql-api`,
  ),
)

module.exports = { Habit }

We use the makeSchema method from the nexus package to combine our model Habit, and add Query and Mutation to the types array. We also add nexusPrismaPlugin to our plugins array. Finally, we start our server at localhost:4000. Port 4000 is the default port for graphql-yoga. You can change the port as suggested here.

Let's start the server now. But first, we need to make sure our latest schema changes are written to the node_modules/@generated/photon directory. This happens when you run prisma2 generate.

If you haven't installed prisma2 globally, you'll have to replace prisma2 generate with ./node_modules/.bin/prisma2 generate. Then we need to migrate our database to create tables.
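For reference, under the preview-era Lift workflow used at the time of writing, those steps looked roughly like this (treat these commands as a sketch; they changed between preview releases, and later Prisma versions replaced Lift entirely):

$ prisma2 generate
$ prisma2 lift save --name 'init'
$ prisma2 lift up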

The post Building a Habit Tracker with Prisma 2, Chakra UI, and React appeared first on SitePoint.

Black Friday 2019 for Designers and Developers

Nov 29, 2019

Description:

Black Friday deals for designers and developers 2019

This article was created in partnership with Mekanism. Thank you for supporting the partners who make SitePoint possible.

Black Friday is one of the best opportunities of the year to get all kinds of new stuff, including digital web tools and services. Some companies are offering huge discounts to heavily increase their sales, while others already have excellent offers for their customers and partners.

In this article, you’ll find free and premium web tools and services, and also some of the best Black Friday WordPress deals. We’ve included website builders, UI kits, admin themes, WordPress themes, effective logo and brand identity creators, and much more. There’s a web tool or service for everyone in this showcase of 38 excellent solutions.

Let’s start.

1. Free and Premium Bootstrap 4 Admin Themes and UI Kits

Dashboardpack

DashboardPack is one of the main suppliers of free and premium Bootstrap 4 admin themes and UI kits, being used by tens of thousands of people with great success. Here you’ll find free and premium themes, made with great attention to detail — HTML5 themes, React themes, Angular themes, and Vue themes.

On the DashboardPack website there’s a dedicated section of Freebies. Here there are four gorgeous dashboard themes (HTML, Angular, Vue, and React) that you can see as a live demo and use for free.
Between November 29 and December 3, you get a 50% discount on all templates and all license types (Personal, Developer, and Lifetime). Use this coupon code: MADBF50.

2. Total Theme

Total Theme

Total Theme is a super powerful and complete WordPress theme that is flexible, easy to use and customize. It has brilliant designs included, and other cool stuff.

With over 38k happy users, Total Theme is a popular WordPress theme. It comes loaded with over 80 builder modules, over 40 premade demos that can be installed with 1-click, 500 styling options, and a friendly and lightning-fast interface.

The premade demos cover niches like Business, One Page, Portfolio, Personal, Creative, Shop, Blog, Photography, and more. Total Theme will help you achieve pretty much any goal — from scratch using the included Visual Page Builder, or by editing a demo design.

A limited-time 50% off Total Theme offer is valid from November 26 2019 (12pm AEDT) through December 3 2019 (8pm AEDT). Discount already applied.

3. Tailor Brands

Tailor Brands

Imagine if your dream business idea had a name, a face, and branded documents that made it official. With Tailor Brands’ online logo maker and design tools, you can instantly turn that dream idea into a living, breathing company! Design a logo in 30 seconds, customize it to your liking, and put it on everything — from professional business cards to online presentations.

Tailor Brand’s mission is to be the biggest branding agency powered by AI. It’s a huge goal but it is achievable, and they already have a top position on this ladder.

Designing a logo with Tailor Brands is super simple, and you don’t need any special skills or previous experience to get a top logo design. You write the logo name you like, add a tagline (optional step), indicate which industry your logo is for, choose whether you want an icon-, name- or initial-based logo, pick between the left and right example designs you’re shown, and the powerful AI will present you with plenty of logo designs to choose from. It’s super simple and straightforward.

Go ahead and design a logo with Tailor Brands.

4. Freelance Taxes

Bonsai Freelance Taxes

Bonsai is the integrated suite of products used by the world’s best creative freelancers.

With the latest addition of freelance taxes to the product lineup, Bonsai is more prepared than ever to help with everything your freelance business needs.

Be prepared for tax season and spend just seconds getting an overview of what you owe in annual or quarterly taxes.

Bonsai’s freelance tax software looks at your expenses, automatically categorizes them, and highlights which are deductible and to what percentage.

All Bonsai products are deeply integrated with each other to ensure it can fit every work style. Other features you should know about include contracts, proposals, time-tracking, and invoicing.

Start your free trial of Bonsai today and be ready for your freelance taxes ahead of time!

5. Codester

Codester

Codester is a huge marketplace where web designers and developers can find thousands of premium scripts, codes, app templates, themes (of all kinds), plugins, graphics, and much more. Always check the Flash Sale section where hugely discounted items are being sold.

6. Mobile App Testing

TestingBot

With over eight years of experience, this app and browser testing service is powerful and easy to use, and provides a large number of features tailored to help you improve your product. Use TestingBot for automated web and app testing, live web and app testing, visual testing, and much more.

Start a free, 14-day trial, no credit card required.

7. FunctionFox

FunctionFox

The leading choice for creative professionals, FunctionFox gives you simple yet powerful time-tracking and project-management tools that allow you to keep multiple projects on track, forecast workloads, reduce communication breakdowns and stay on top of deadlines through project scheduling, task-based assignments, internal communication tools and comprehensive reporting. Don't let deadlines and due dates slip past!

Try a free demo today at FunctionFox.

8. Taskade: Simple Tasks, Notes, Chat

Taskade

Taskade is a unified workspace where you can chat, write, and get work done with your team. Edit projects in real time. Chat and video conference on the same page. Keep track of tasks across multiple teams and workspaces. Plan, manage, and visualize projects. And much more.

With Taskade, you can build your own workspace templates. You can start from a blank page or you can choose between a Weekly Planner, Meeting Agenda, Project Board, Mindmap, and more (you'll find lots of templates to start with). Everything you need can be fully configured to be a perfect fit.

9. Live Chat Software

Live Chat Software

AppyPie is a professional and super-easy-to-use live chat solution that will help you reach out to your clients and offer them real-time responses and support through your website and mobile app, using the platform’s live chat software.

This is a brilliant way to quickly increase conversions, make more sales (you can answer questions from people that want to buy), and increase the level of happiness of your customers. (Whatever problem they may have, they know that you're there to help fast.)

Request an invite to test the platform.

10. Mobirise Website Builder

Mobirise

Mobirise is arguably the best website builder in 2019, which you can use to create fast, responsive, and Google-friendly websites in minutes, with zero coding, and only drag-and-drop.

This brilliant builder is loaded with over 2,000 awesome website templates to start with, with eCommerce and Shopping Cart, sliders, galleries, forms, popups, icons, and much more.

During this period there’s a 94% discount, so take advantage of it.

11. Newsletter Templates

Newsletter Templates

MailMunch is a powerful drag-and-drop builder that's loaded with tons of beautiful, pre-designed newsletter templates, with advanced features like Template Blocks and a Media Library to make the workflow even smoother, and a lot more. There's no coding required to use MailMunch.

Start boosting your conversions with MailMunch.

12. Astra Theme: Elementor Templates

Astra

Elementor is the most powerful website builder on the market, used by millions of people with great success. To stand out from the crowd, you can supercharge Elementor with 100+ free and premium templates by using this bundle.

Free to use.

13. Schema Pro

Schema Pro

Creating schema markup is no longer a chore! With a simple click-and-select interface, you can set up markup in minutes. All the markup configurations you set are automatically applied to the selected pages and posts.

Get Schema Pro and outperform your competitors in search engines.

14. Rank Math SEO

Rank Math SEO

Rank Math is the most powerful and easy-to-use WordPress SEO plugin on the market, making your website rank higher in search engines in no time. After a quick installation and setup, Rank Math SEO does the whole job with no supervision.

The post Black Friday 2019 for Designers and Developers appeared first on SitePoint.

Delay, Sleep, Pause, & Wait in JavaScript

Nov 28, 2019

Description:

Timing Issues in JavaScript: Implementing a Sleep Function

Many programming languages have a sleep function that will delay a program's execution for a given number of seconds. This functionality is absent from JavaScript, however, owing to its asynchronous nature. In this article, we'll look briefly at why this might be, then how we can implement a sleep function ourselves.

Understanding JavaScript's Execution Model

Before we get going, it's important to make sure we understand JavaScript's execution model correctly.

Consider the following Ruby code:

require 'net/http'
require 'json'

url = 'https://api.github.com/users/jameshibbard'
uri = URI(url)
response = JSON.parse(Net::HTTP.get(uri))
puts response['public_repos']
puts "Hello!"

As one might expect, this code makes a request to the GitHub API to fetch my user data. It then parses the response, outputs the number of public repos attributed to my GitHub account and finally prints "Hello!" to the screen. Execution goes from top to bottom.

Contrast that with the equivalent JavaScript version:

fetch('https://api.github.com/users/jameshibbard')
  .then(res => res.json())
  .then(json => console.log(json.public_repos));
console.log("Hello!");

If you run this code, it will output "Hello!" to the screen, then the number of public repos attributed to my GitHub account.

This is because fetching data from an API is an asynchronous operation in JavaScript. The JavaScript interpreter will encounter the fetch command and dispatch the request. It will not, however, wait for the request to complete. Rather, it will continue on its way, output "Hello!" to the console, then when the request returns a couple of hundred milliseconds later, it will output the number of repos.

If any of this is news to you, you should watch this excellent conference talk: What the heck is the event loop anyway?.

You Might Not Actually Need a Sleep Function

Now that we have a better understanding of JavaScript's execution model, let's have a look at how JavaScript handles delays and asynchronous operations.

Create a Simple Delay Using setTimeout

The standard way of creating a delay in JavaScript is to use its setTimeout method. For example:

console.log("Hello"); setTimeout(() => { console.log("World!"); }, 2000);

This would log "Hello" to the console, then after two seconds "World!" And in many cases, this is enough: do something, wait, then do something else. Sorted!

However, please be aware that setTimeout is an asynchronous method. Try altering the previous code like so:

console.log("Hello"); setTimeout(() => { console.log("World!"); }, 2000); console.log("Goodbye!");

It will log:

Hello
Goodbye!
World!

Waiting for Things with setTimeout

It's also possible to use setTimeout (or its cousin setInterval) to keep JavaScript waiting until a condition is met. For example, here's how you might use setTimeout to wait for a certain element to appear on a web page:

function pollDOM () {
  const el = document.querySelector('my-element');

  // querySelector returns the element or null, so test the element itself
  if (el) {
    // Do something with el
  } else {
    setTimeout(pollDOM, 300); // try again in 300 milliseconds
  }
}

pollDOM();

This assumes the element will turn up at some point. If you're not sure that's the case, you'll need to look at canceling the timer (using clearTimeout or clearInterval).

If you'd like to find out more about JavaScript's setTimeout method, please consult our tutorial which has plenty of examples to get you going.
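For completeness, here's a minimal sketch of the standard promise-based approach to building a sleep function yourself:

// A promise that resolves after ms milliseconds
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function run() {
  console.log("Hello");
  await sleep(2000); // pauses this function, not the whole program
  console.log("World!");
}

run();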

The post Delay, Sleep, Pause, & Wait in JavaScript appeared first on SitePoint.

Understanding module.exports and exports in Node.js

Nov 27, 2019

Description:

Working with Modules in Node.js

In programming, modules are self-contained units of functionality that can be shared and reused across projects. They make our lives as developers easier, as we can use them to augment our applications with functionality that we haven't had to write ourselves. They also allow us to organize and decouple our code, leading to applications that are easier to understand, debug and maintain.

In this article, I'll examine how to work with modules in Node.js, focusing on how to export and consume them.

Different Module Formats

As JavaScript originally had no concept of modules, a variety of competing formats have emerged over time. Here's a list of the main ones to be aware of:

The Asynchronous Module Definition (AMD) format is used in browsers and uses a define function to define modules.
The CommonJS (CJS) format is used in Node.js and uses require and module.exports to define dependencies and modules. The npm ecosystem is built upon this format.
The ES Module (ESM) format. As of ES6 (ES2015), JavaScript supports a native module format. It uses an export keyword to export a module's public API and an import keyword to import it.
The System.register format was designed to support ES6 modules within ES5.
The Universal Module Definition (UMD) format can be used both in the browser and in Node.js. It's useful when a module needs to be imported by a number of different module loaders.

Please be aware that this article deals solely with the CommonJS format, the standard in Node.js. If you'd like to read into any of the other formats, I recommend this article, by SitePoint author Jurgen Van de Moere.

Requiring a Module

Node.js comes with a set of built-in modules that we can use in our code without having to install them. To do this, we need to require the module using the require function and assign the result to a variable. This can then be used to invoke any methods the module exposes.

For example, to list out the contents of a directory, you can use the file system module and its readdir method:

const fs = require('fs');
const folderPath = '/home/jim/Desktop/';

fs.readdir(folderPath, (err, files) => {
  files.forEach(file => {
    console.log(file);
  });
});

Note that in CommonJS, modules are loaded synchronously and processed in the order they occur.

Creating and Exporting a Module

Now let's look at how to create our own module and export it for use elsewhere in our program. Start off by creating a user.js file and adding the following:

const getName = () => {
  return 'Jim';
};

exports.getName = getName;

Now create an index.js file in the same folder and add this:

const user = require('./user');
console.log(`User: ${user.getName()}`);

Run the program using node index.js and you should see the following output to the terminal:

User: Jim

So what has gone on here? Well, if you look at the user.js file, you'll notice that we're defining a getName function, then using the exports object to make it available for import elsewhere. Then in the index.js file, we're importing this function and executing it. Also notice that in the require statement, the module name is prefixed with ./, as it's a local file. Also note that there's no need to add the file extension.

Exporting Multiple Methods and Values

We can export multiple methods and values in the same way:

const getName = () => {
  return 'Jim';
};

const getLocation = () => {
  return 'Munich';
};

const dateOfBirth = '12.01.1982';

exports.getName = getName;
exports.getLocation = getLocation;
exports.dob = dateOfBirth;

And in index.js:

const user = require('./user');

console.log(
  `${user.getName()} lives in ${user.getLocation()} and was born on ${user.dob}.`
);

The code above produces this:

Jim lives in Munich and was born on 12.01.1982.

Notice how the name we give the exported dateOfBirth variable can be anything we fancy (dob in this case). It doesn't have to be the same as the original variable name.

Variations in Syntax

I should also mention that it's possible to export methods and values as you go, not just at the end of the file.

For example:

exports.getName = () => {
  return 'Jim';
};

exports.getLocation = () => {
  return 'Munich';
};

exports.dob = '12.01.1982';

And thanks to destructuring assignment, we can cherry-pick what we want to import:

const { getName, dob } = require('./user');

console.log(
  `${getName()} was born on ${dob}.`
);

As you might expect, this logs:

Jim was born on 12.01.1982.
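The examples above use the exports shorthand throughout. As this article's title suggests, the same result can be achieved with module.exports, which is especially handy when exporting a single object or function. Here's a minimal sketch of the equivalent user.js:

// user.js: assign the whole public API in one go
module.exports = {
  getName: () => 'Jim',
  getLocation: () => 'Munich',
  dob: '12.01.1982',
};

Bear in mind that assigning to module.exports replaces the exported object wholesale, whereas exports.getName = ... merely adds a property to it.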

The post Understanding module.exports and exports in Node.js appeared first on SitePoint.

Remote Work: Tips, Tricks and Best Practices for Success

Nov 25, 2019

Description:

Remote Work: Tips, Tricks and Best Practices for Success

There are lots of advantages to working away from the office, both for developers and for the companies that employ them. Think about avoiding the daily commute, the cost of office space, the cost of living in or traveling to the city for rural or international workers, the inconvenience of office work for differently abled people or those with unusual family or life responsibilities, and the inflexibility of trying to keep traditional 9–5 hours as more and more of our workforce adapts to the gig economy by taking on second jobs or part-time side hustles.

Remote work can help address many of these difficulties while improving team transparency and putting the focus of work back on the reasons you were hired for your job in the first place. It also opens up a world of possibilities for companies, including broader recruitment opportunities, improved worker transparency, lower infrastructure costs, and more scalable business models based on actual worker productivity.

But working from home or from a co-working space can also present new challenges, and learning how to recognize them and overcome them can make the difference between a productive, happy work experience and endless hours of misery, loneliness, and frustration.

Think I’m being overdramatic? Let me explain.

I’ve had the experience of being the remote worker who didn’t think he needed to pay attention to interpersonal office dynamics, or keep track of his time and accomplishments. I’ve worked long into the evening because I didn’t notice when the work day ended. I’ve struggled with inefficient tools that might have worked fine in an office environment, but proved woefully inadequate when it came to remote collaboration.

So I’ve learned to cope with these issues myself, and for years I’ve been coaching engineering teams by working on-site, remotely, and in various combinations of the two. Depending on your situation, there are a number of useful tools, tricks, and fundamental practices that can make your remote working experience so much better than it is today — for yourself, your team, your manager, and your company.

Remote Self-management

For better or for worse, most of us are used to having a manager decide what our working hours are, where we’re going to sit, what equipment we’re going to use, and whom we’re going to collaborate with. That’s a luxury that comes with the convenience of working together in a shared space, where management can supervise and coordinate our efforts. It may not always feel luxurious, but you may well find yourself missing the support of an attentive manager when you start working from home and realize you have to make these decisions for yourself.

Set a Schedule and Stick to It!

The first tip I offer for anyone starting out a remote role is to establish the hours you’re going to work, and stick to those hours.

It’s not as easy as it sounds. When you’re working from home, you won’t have all of the little cues that come with office life to tell you when to pause for lunch, when to take a break, and when to stop working for the day. Working from a co-working space or a coffee shop can help, but it’s not the same as having your colleagues around you to exert that not-so-subtle social pressure. What’s more, if you start to feel anxious about whether people at the office know how hard you’re working, you may find yourself wanting to compensate by putting in a few extra hours.

Some people find that it's easier to compartmentalize remote work by using a co-working space, simulating the effect of going out to work and then coming back at the end of the day. If you're working from home, your professional and personal lives can start to blend. You’re going to find yourself washing the dishes, feeding the cat, answering the telephone, and attending to all the other chores that crop up in your living space. And you know what? That’s just fine! … as long as it doesn't start to interfere with your productivity on the job.

Decide up front on your morning and afternoon work hours and respect them. Write them down somewhere you can't fail to see them, so you can't pretend you don't know what they are. The same advice applies to teams working together in an office or people using co-working spaces, but it's even more critical if you're working from home.

Let Everyone Know When and Where You'll Be Working

Building on the theme of scheduling, a remote worker needs to let anyone who works with them know how to get in touch, and may need to encourage that kind of contact regularly. Remote workers can feel isolated or even excluded — left out of important decisions because people at the office simply forgot about them. It's up to the person who’s working off site to make their existence known throughout the work day, and to advocate for visibility.

This can be easier said than done. One of the advantages of remote work is the ability to focus without interruption for extended periods. Sometimes just the knowledge that the bubble of isolation can be broken is enough to foster distraction and make it harder to concentrate. This can make the experience draining and unproductive, and negate most of the advantages.

It's not a bad idea to start off just using email to stay in touch with the team for typical group communications. And as a personal productivity tip, try to establish set times during the day to check that email — perhaps three or so over the course of a day. Checking your email constantly can establish a pattern of behavior that puts your attention at the mercy of anyone who wants to reach out to you for anything at any time. Email is asynchronous by nature, so use that to your advantage when you're working from home.

Apart from direct communication, it's good to get your team using a messaging tool such as Slack or HipChat. These services can run in the background on every team member's computer, or even on their mobile devices, providing a shared space for inter-team, intra-team, and cross-functional messaging. There are secure ways for companies to make services like these available for sensitive internal communications, and they can work both on site and off site, establishing virtual shared message boards to keep teams aligned.

The post Remote Work: Tips, Tricks and Best Practices for Success appeared first on SitePoint.

Create a Toggle Switch in React as a Reusable Component

Nov 21, 2019

Description:

Implementing a Toggle Switch in React JS as a Reusable Component

In this article, we're going to create an iOS-inspired toggle switch using React components. By the end, we'll have built a simple demo React App that uses our custom toggle switch component.

We could use third-party libraries for this, but building from scratch allows us to better understand how our code is working and allows us to customize our component completely.

Forms provide a major means for enabling user interactions. The checkbox is traditionally used for collecting binary data — such as yes or no, true or false, enable or disable, on or off, etc. Although some modern interface designs steer away from form fields when creating toggle switches, I'll stick with them here due to their greater accessibility.

Here's a screenshot of the component we'll be building:

The final result

Getting Started

We can start with a basic HTML checkbox input form element with its necessary properties set:

<input type="checkbox" name="name" id="id" />

To build around it, we might need an enclosing <div> with a class, a <label> and the <input /> control itself. Adding everything, we might get something like this:

<div class="toggle-switch">
  <input type="checkbox" class="toggle-switch-checkbox" name="toggleSwitch" id="toggleSwitch" />
  <label class="toggle-switch-label" for="toggleSwitch">
    Toggle Me!
  </label>
</div>

In time, we can get rid of the label text and use the <label> tag to check or uncheck the checkbox input control. Inside the <label>, let's add two <span>s that help us construct the switch holder and the toggling switch itself:

<div class="toggle-switch">
  <input type="checkbox" class="toggle-switch-checkbox" name="toggleSwitch" id="toggleSwitch" />
  <label class="toggle-switch-label" for="toggleSwitch">
    <span class="toggle-switch-inner"></span>
    <span class="toggle-switch-switch"></span>
  </label>
</div>

Converting to a React Component

Now that we know what needs to go into the HTML, all we need to do is convert that HTML into a React component. Let's start with a basic component here. We'll make this a class component first, and then convert it to use hooks, as it's easier for new developers to follow state than useState:

import React, { Component } from "react";

class ToggleSwitch extends Component {
  render() {
    return (
      <div className="toggle-switch">
        <input
          type="checkbox"
          className="toggle-switch-checkbox"
          name="toggleSwitch"
          id="toggleSwitch"
        />
        <label className="toggle-switch-label" htmlFor="toggleSwitch">
          <span className="toggle-switch-inner" />
          <span className="toggle-switch-switch" />
        </label>
      </div>
    );
  }
}

export default ToggleSwitch;

At this point, it's not possible to have multiple toggle switch sliders on the same view or same page due to the repetition of ids. We could leverage React's way of componentization here, but in this instance, we'll be using props to dynamically populate the values:

import React, { Component } from "react";

class ToggleSwitch extends Component {
  render() {
    return (
      <div className="toggle-switch">
        <input
          type="checkbox"
          className="toggle-switch-checkbox"
          name={this.props.Name}
          id={this.props.Name}
        />
        <label className="toggle-switch-label" htmlFor={this.props.Name}>
          <span className="toggle-switch-inner" />
          <span className="toggle-switch-switch" />
        </label>
      </div>
    );
  }
}

export default ToggleSwitch;

The this.props.Name will populate the values of id, name and for (note that it's htmlFor in React JS) dynamically, so that you can pass different values to the component and have multiple instances of it on the same page. Also, the <span> tag doesn't have an ending </span> tag; instead, it's closed in the starting tag like <span />, and this is completely fine.
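For example, a parent component (the Settings component below is hypothetical, shown purely for illustration) could render two independent switches simply by passing different Name props:

import React from "react";
import ToggleSwitch from "./ToggleSwitch";

// Each switch gets a unique Name, so the generated ids and labels don't collide
const Settings = () => (
  <div>
    <ToggleSwitch Name="newsletter" />
    <ToggleSwitch Name="darkMode" />
  </div>
);

export default Settings;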

The post Create a Toggle Switch in React as a Reusable Component appeared first on SitePoint.

Compile-time Immutability in TypeScript

Nov 21, 2019

Description:

Compile-time Immutability in TypeScript

TypeScript allows us to decorate specification-compliant ECMAScript with type information that we can analyze and output as plain JavaScript using a dedicated compiler. In large-scale projects, this sort of static analysis can catch potential bugs ahead of resorting to lengthy debugging sessions, let alone deploying to production. However, reference types in TypeScript are still mutable, which can lead to unintended side effects in our software.

In this article, we'll look at possible constructs where prohibiting references from being mutated can be beneficial.

Primitives vs Reference Types

JavaScript defines two overarching groups of data types:

Primitives: low-level values that are immutable (e.g. strings, numbers, booleans etc.)
References: collections of properties, representing identifiable heap memory, that are mutable (e.g. objects, arrays, Map etc.)

Say we declare a constant, to which we assign a string:

const message = 'hello';

Given that strings are primitives and are thus immutable, we’re unable to directly modify this value. It can only be used to produce new values:

console.log(message.replace('h', 'sm')); // 'smello'
console.log(message); // 'hello'

Despite invoking replace() upon message, we aren't modifying its memory. We're merely creating a new string, leaving the original contents of message intact.

Mutating the indices of message is a no-op by default, but will throw a TypeError in strict mode:

'use strict';

const message = 'hello';
message[0] = 'j'; // TypeError: 0 is read-only

Note that if the declaration of message were to use the let keyword, we would be able to replace the value to which it resolves:

let message = 'hello';
message = 'goodbye';

It's important to highlight that this is not mutation. Instead, we're replacing one immutable value with another.

Mutable References

Let's contrast the behavior of primitives with references. Let's declare an object with a couple of properties:

const me = {
  name: 'James',
  age: 29,
};

Given that JavaScript objects are mutable, we can change its existing properties and add new ones:

me.name = 'Rob';
me.isTall = true;

console.log(me); // Object { name: "Rob", age: 29, isTall: true };

Unlike primitives, objects can be directly mutated without being replaced by a new reference. We can prove this by sharing a single object across two declarations:

const me = {
  name: 'James',
  age: 29,
};

const rob = me;
rob.name = 'Rob';

console.log(me); // { name: 'Rob', age: 29 }

JavaScript arrays, which inherit from Object.prototype, are also mutable:

const names = ['James', 'Sarah', 'Rob'];
names[2] = 'Layla';

console.log(names); // Array(3) [ 'James', 'Sarah', 'Layla' ]

What's the Issue with Mutable References?

Say we have a mutable array of the first five Fibonacci numbers:

const fibonacci = [1, 2, 3, 5, 8];

log2(fibonacci); // replaces each item, n, with Math.log2(n)
appendFibonacci(fibonacci, 5, 5); // appends the next five Fibonacci numbers to the input array

This code may seem innocuous on the surface, but since log2 mutates the array it receives, our fibonacci array will no longer exclusively represent Fibonacci numbers as the name would otherwise suggest. Instead, fibonacci would become [0, 1, 1.584962500721156, 2.321928094887362, 3, 13, 21, 34, 55, 89]. One could therefore argue that the names of these declarations are semantically inaccurate, making the flow of the program harder to follow.

Pseudo-immutable Objects in JavaScript

Although JavaScript objects are mutable, we can take advantage of particular constructs to clone references rather than mutate them, namely spread syntax. Note that spread produces a shallow copy, so nested objects have to be spread in turn to get an effectively deep clone:

const me = {
  name: 'James',
  age: 29,
  address: {
    house: '123',
    street: 'Fake Street',
    town: 'Fakesville',
    country: 'United States',
    zip: 12345,
  },
};

const rob = {
  ...me,
  name: 'Rob',
  address: {
    ...me.address,
    house: '125',
  },
};

console.log(me.name); // 'James'
console.log(rob.name); // 'Rob'
console.log(me === rob); // false

The spread syntax is also compatible with arrays:

const names = ['James', 'Sarah', 'Rob'];
const newNames = [...names.slice(0, 2), 'Layla'];

console.log(names); // Array(3) [ 'James', 'Sarah', 'Rob' ]
console.log(newNames); // Array(3) [ 'James', 'Sarah', 'Layla' ]
console.log(names === newNames); // false

Thinking immutably when dealing with reference types can make the behavior of our code clearer. Revisiting the prior mutable Fibonacci example, we could avoid such mutation by copying fibonacci into a new array:

const fibonacci = [1, 2, 3, 5, 8];
const log2Fibonacci = [...fibonacci];

log2(log2Fibonacci);
appendFibonacci(fibonacci, 5, 5);

Rather than placing the burden of creating copies on the consumer, it would be preferable for log2 and appendFibonacci to treat their inputs as read-only, creating new outputs based upon them:

const PHI = 1.618033988749895;

const log2 = (arr: number[]) => arr.map(n => Math.log2(n));
const fib = (n: number) => (PHI ** n - (-PHI) ** -n) / Math.sqrt(5);
const createFibSequence = (start = 0, length = 5) =>
  new Array(length).fill(0).map((_, i) => fib(start + i + 2));

const fibonacci = [1, 2, 3, 5, 8];
const log2Fibonacci = log2(fibonacci);
const extendedFibSequence = [...fibonacci, ...createFibSequence(5, 5)];

By writing our functions to return new references in favor of mutating their inputs, the array identified by the fibonacci declaration remains unchanged, and its name remains a valid source of context. Ultimately, this code is more deterministic.
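The example above still types its inputs as plain number[], which the compiler allows callers to mutate. Since the article's subject is compile-time immutability, here's a minimal sketch of enforcing the read-only contract with TypeScript's readonly modifier, assuming a reasonably recent TypeScript version:

// A readonly parameter makes any mutation of the array a compile-time error
const log2 = (arr: readonly number[]) => arr.map(n => Math.log2(n));

const fibonacci: readonly number[] = [1, 2, 3, 5, 8];
const log2Fibonacci = log2(fibonacci); // fine: map returns a new array

fibonacci.push(13); // Error: Property 'push' does not exist on type 'readonly number[]'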

The post Compile-time Immutability in TypeScript appeared first on SitePoint.

Getting Started with Puppeteer

Nov 14, 2019

Description:

Getting Started with Puppeteer

Browser developer tools provide an amazing array of options for delving under the hood of websites and web apps. These capabilities can be further enhanced and automated by third-party tools. In this article, we'll look at Puppeteer, a Node-based library for use with Chrome/Chromium.

The Puppeteer website describes Puppeteer as

a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.

Puppeteer is made by the team behind Google Chrome, so you can be pretty sure it will be well maintained. It lets us perform common actions on the Chromium browser, programmatically through JavaScript, via a simple and easy-to-use API.

With Puppeteer, you can:

scrape websites
generate screenshots of websites, including SVG and Canvas
create PDFs of websites
crawl an SPA (single-page application)
access web pages and extract information using the standard DOM API
generate pre-rendered content — that is, server-side rendering
automate form submission
automate performance analysis
automate UI testing, like Cypress
test Chrome extensions

Puppeteer doesn't do anything that Selenium, PhantomJS (which is now deprecated), and the like can't, but it provides a simple, easy-to-use API and a great abstraction, so we don't have to worry about the nitty-gritty details when dealing with it.

It's also actively maintained, so we get all the new features of ECMAScript as soon as Chromium supports them.

Prerequisites

For this tutorial, you need a basic knowledge of JavaScript, ES6+ and Node.js.

You must also have installed the latest version of Node.js.

We’ll be using yarn throughout this tutorial. If you don’t have yarn already installed, install it from here.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node 12.12.0
yarn 1.19.1
puppeteer 2.0.0

Installation

To use Puppeteer in your project, run the following command in the terminal:

$ yarn add puppeteer

Note: when you install Puppeteer, it downloads a recent version of Chromium (~170MB macOS, ~282MB Linux, ~280MB Win) that is guaranteed to work with the API. To skip the download, see Environment variables.

If you don't need to download Chromium, then you can install puppeteer-core:

$ yarn add puppeteer-core

puppeteer-core is intended to be a lightweight version of Puppeteer for launching an existing browser installation or for connecting to a remote one. Be sure that the version of puppeteer-core you install is compatible with the browser you intend to connect to.

Note: puppeteer-core is only published from version 1.7.0.
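To illustrate the difference, here's a minimal sketch of pointing puppeteer-core at an existing browser installation via the executablePath launch option. The path shown is a macOS example and an assumption; adjust it for your system:

const puppeteer = require('puppeteer-core')

const main = async () => {
  // Launch an existing Chrome install instead of the bundled Chromium
  // (this path is a macOS assumption; use your own browser's location)
  const browser = await puppeteer.launch({
    executablePath: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
  })
  const page = await browser.newPage()
  await page.goto('https://example.com')

  await browser.close()
}

main()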

Usage

Puppeteer requires at least Node v6.4.0, but we're going to use async/await, which is only supported in Node v7.6.0 or greater, so make sure to update your Node.js to the latest version to get all the goodies.

Let's dive into some practical examples using Puppeteer. In this tutorial, we'll be:

generating a screenshot of Unsplash using Puppeteer
creating a PDF of Hacker News using Puppeteer
signing in to Facebook using Puppeteer

1. Generate a Screenshot of Unsplash using Puppeteer

It's really easy to do this with Puppeteer. Go ahead and create a screenshot.js file in the root of your project. Then paste in the following code:

const puppeteer = require('puppeteer')

const main = async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto('https://unsplash.com')
  await page.screenshot({ path: 'unsplash.png' })

  await browser.close()
}

main()

Firstly, we require the puppeteer package. Then we call the launch method on it, which initializes the instance. This method is asynchronous, as it returns a Promise, so we await it to get the browser instance.

Then we call newPage on that instance, navigate to Unsplash, take a screenshot, and save it as unsplash.png.

Now go ahead and run the above code in the terminal by typing:

$ node screenshot

Unsplash - 800px x 600px resolution

Now after 5–10 seconds you'll see an unsplash.png file in your project that contains the screenshot of Unsplash. Notice that the viewport is set to 800px x 600px as Puppeteer sets this as the initial page size, which defines the screenshot size. The page size can be customized with Page.setViewport().

Let's change the viewport to be 1920px x 1080px. Insert the following code before the goto method:

await page.setViewport({
  width: 1920,
  height: 1080,
  deviceScaleFactor: 1,
})

Now go ahead and also change the filename from unsplash.png to unsplash2.png in the screenshot method like so:

await page.screenshot({ path: 'unsplash2.png' })

The whole screenshot.js file should now look like this:

const puppeteer = require('puppeteer')

const main = async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.setViewport({
    width: 1920,
    height: 1080,
    deviceScaleFactor: 1,
  })
  await page.goto('https://unsplash.com')
  await page.screenshot({ path: 'unsplash2.png' })

  await browser.close()
}

main()

Unsplash - 1920px x 1080px
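The Hacker News PDF step isn't included in this excerpt, but as a rough sketch of where the tutorial goes next, it follows the same pattern using Puppeteer's page.pdf() method (the hn.pdf file name is an assumption):

const puppeteer = require('puppeteer')

const main = async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()

  // wait for network activity to settle so the page is fully rendered
  await page.goto('https://news.ycombinator.com', { waitUntil: 'networkidle2' })
  await page.pdf({ path: 'hn.pdf', format: 'A4' })

  await browser.close()
}

main()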

The post Getting Started with Puppeteer appeared first on SitePoint.

Getting Started with the React Native Navigation Library

Nov 13, 2019

Description:

Getting Started with the React Native Navigation Library

One of the most important aspects of React Native app development is the navigation. It’s what allows users to get to the pages they’re looking for. That’s why it’s important to choose the best navigation library to suit your needs.

If your app has a lot of screens with relatively complex UI, it might be worth exploring React Native Navigation instead of React Navigation. This is because there will always be performance bottlenecks with React Navigation, since it works on the same JavaScript thread as the rest of the app and drives native views over the React Native bridge. The more complex your UI, the more data has to be passed over that bridge, which can potentially slow it down.

In this tutorial, we’ll be looking at the React Native Navigation library by Wix, an alternative navigation library for those who are looking for a smoother navigation performance for their React Native apps.

Prerequisites

Knowledge of React and React Native is required to follow this tutorial. Prior experience with a navigation library such as React Navigation is optional.

App Overview

In order to demonstrate how to use the library, we’ll be creating a simple app that uses it. The app will have five screens in total:

Initialization: this serves as the initial screen for the app. If the user is logged in, it will automatically navigate to the home screen. If not, the user is navigated to the login screen.
Login: this allows the user to log in so they can view the home, gallery, and feed. To simplify things, the login will just be mocked; no actual authentication code will be involved. From this screen, the user can also go to the forgot-password screen.
ForgotPassword: a filler screen, which asks for the user’s email address. This will simply be used to demonstrate stack navigation.
Home: the initial screen that the user will see when they log in. From here, they can also navigate to either the gallery or feed screens via a bottom tab navigation.
Gallery: a filler screen which shows a photo gallery UI.
Feed: a filler screen which shows a news feed UI.

Here’s what the app will look like:

React Native Navigation demo gif

You can find the source code of the sample app on this GitHub repo.

Bootstrapping the App

Let’s start by generating a new React Native project:

react-native init RNNavigation --version react-native@0.57.8

Note: we’re using a slightly older version of React Native, because React Native Navigation doesn’t work well with later versions. React Native Navigation hasn’t really kept up with the changes in the core of React Native since version 0.58. The only version known to work flawlessly with React Native is the one we’re going to use. If you check the issues on their repo, you’ll see various issues with versions 0.58 and 0.59. There might be workarounds for those two versions, but the safest bet is still version 0.57.

As for React Native version 0.60, the core team has made a lot of changes. One of them is the migration to AndroidX, which aims to make it clearer which packages are bundled with the Android operating system. This essentially means that if a native module uses any of the old packages that got migrated to the new androidx.* package hierarchy, it will break. There are tools such as jetifier, which allows for migration to AndroidX. But this doesn’t ensure React Native Navigation will work.

Next, install the dependencies of the app:

react-native-navigation — the navigation library that we’re going to use.
@react-native-community/async-storage — for saving data to the app’s local storage.
react-native-vector-icons — for showing icons for the bottom tab navigation.

yarn add react-native-navigation @react-native-community/async-storage react-native-vector-icons

In the next few sections, we’ll be setting up the packages we just installed.

Setting up React Native Navigation

First, we’ll set up the React Native Navigation library. The instructions that we’ll be covering here are also in the official documentation. Unfortunately, it’s not written in a very friendly way for beginners, so we’ll be covering it in more detail.

Note: the demo project includes Android and iOS folders as well. You can use those as a reference if you encounter any issues with setting things up.

Since the name of the library is very long, I’ll simply refer to it as RNN from now on.

Android Setup

In this section, we’ll take a look at how you can set up RNN for Android. Before you proceed, it’s important to update all the SDK packages to the latest versions. You can do that via the Android SDK Manager.

settings.gradle

Add the following to your android/settings.gradle file:

include ':react-native-navigation'
project(':react-native-navigation').projectDir = new File(rootProject.projectDir, '../node_modules/react-native-navigation/lib/android/app/')

Gradle Wrapper Properties

In your android/gradle/wrapper/gradle-wrapper.properties, update Gradle’s distributionUrl to use version 4.4 if it’s not already using it:

distributionUrl=https\://services.gradle.org/distributions/gradle-4.4-all.zip

build.gradle

Next, in your android/build.gradle file, add mavenLocal() and mavenCentral() under buildscript -> repositories:

buildscript {
    repositories {
        google()
        jcenter()
        // add these:
        mavenLocal()
        mavenCentral()
    }
}

Next, update the classpath under buildscript -> dependencies to point to the Android Gradle plugin version that we need:

buildscript {
    repositories {
        ...
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.0.1'
    }
}

Under allprojects -> repositories, add mavenCentral() and JitPack. This allows us to pull the data from React Native Navigation’s JitPack repository:

allprojects {
    repositories {
        mavenLocal()
        google()
        jcenter()
        mavenCentral() // add this
        maven { url 'https://jitpack.io' } // add this
    }
}

Next, add the global config for setting the build tools and SDK versions for Android:

allprojects {
    ...
}

ext {
    buildToolsVersion = "27.0.3"
    minSdkVersion = 19
    compileSdkVersion = 26
    targetSdkVersion = 26
    supportLibVersion = "26.1.0"
}

Lastly, we’d still want to keep the default react-native run-android command when compiling the app, so we have to set Gradle to ignore other flavors of React Native Navigation except the one we’re currently using (reactNative57_5). Ignoring them ensures that we only compile the specific version we’re depending on:

ext {
    ...
}

subprojects { subproject ->
    afterEvaluate {
        if ((subproject.plugins.hasPlugin('android') || subproject.plugins.hasPlugin('android-library'))) {
            android {
                variantFilter { variant ->
                    def names = variant.flavors*.name
                    if (names.contains("reactNative51") ||
                        names.contains("reactNative55") ||
                        names.contains("reactNative56") ||
                        names.contains("reactNative57")) {
                        setIgnore(true)
                    }
                }
            }
        }
    }
}

Note: there are four other flavors of RNN that currently exist. These are the ones we’re ignoring above:

reactNative51
reactNative55
reactNative56
reactNative57

android/app/build.gradle

In your android/app/build.gradle file, under android -> compileOptions, make sure that the source and target compatibility versions are 1.8:

android {
    defaultConfig {
        ...
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

Then, in your dependencies, include react-native-navigation as a dependency:

dependencies {
    implementation fileTree(dir: "libs", include: ["*.jar"])
    implementation "com.android.support:appcompat-v7:${rootProject.ext.supportLibVersion}"
    implementation "com.facebook.react:react-native:+"
    implementation project(':react-native-navigation') // add this
}

Lastly, under android -> defaultConfig, set the missingDimensionStrategy to reactNative57_5. This is the version of RNN that’s compatible with React Native 0.57.8:

defaultConfig {
    applicationId "com.rnnavigation"
    minSdkVersion rootProject.ext.minSdkVersion
    targetSdkVersion rootProject.ext.targetSdkVersion
    missingDimensionStrategy "RNN.reactNativeVersion", "reactNative57_5" // add this
    versionCode 1
    versionName "1.0"
    ndk {
        abiFilters "armeabi-v7a", "x86"
    }
}

The post Getting Started with the React Native Navigation Library appeared first on SitePoint.

How TypeScript Makes You a Better JavaScript Developer

Nov 12, 2019

Description:

TypeScript

What do Airbnb, Google, Lyft and Asana have in common? They've all migrated several codebases to TypeScript.

Whether it's eating healthier, exercising, or sleeping more, we humans love self-improvement. The same applies to our careers. If someone shared tips for improving as a programmer, your ears would perk up.

In this article, the goal is to be that someone. We know TypeScript will make you a better JavaScript developer for several reasons. You'll feel confident when writing code. Fewer errors will appear in your production code. It will be easier to refactor code. You'll write fewer tests (yay!). And overall, you'll have a better coding experience in your editor.

What Even Is TypeScript?

TypeScript is a compiled language. You write TypeScript and it compiles to JavaScript. Essentially, you're writing JavaScript, but with a type system. JavaScript developers should have a seamless transition because the languages are the same, except for a few quirks.

Here's a basic example of a function in both JavaScript and TypeScript:

// JavaScript
function helloFromSitePoint(name) {
  return `Hello, ${name} from SitePoint!`
}

// TypeScript
function helloFromSitePoint(name: string) {
  return `Hello, ${name} from SitePoint!`
}

Notice how the two are almost identical. The difference is the type annotation on the "name" parameter in TypeScript. This tells the compiler, "Hey, make sure when someone calls this function, they only pass in a string." We won't go into much depth, but this example should illustrate the bare minimum of TypeScript.
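To see that annotation pay off, here's a hypothetical call the TypeScript compiler would reject before the code ever runs:

helloFromSitePoint('World') // fine: "Hello, World from SitePoint!"

// Compile-time error: Argument of type 'number' is not assignable
// to parameter of type 'string'.
helloFromSitePoint(42)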

How Will TypeScript Make Me Better?

TypeScript will improve your skills as a JavaScript developer by:

giving you more confidence
catching errors before they hit production
making it easier to refactor code
saving you time from writing tests
providing you with a better coding experience

Let's explore each of these a bit deeper.

The post How TypeScript Makes You a Better JavaScript Developer appeared first on SitePoint.

Face Detection and Recognition with Keras

Nov 7, 2019

Description:

Face Detection and Recognition with Keras

If you're a regular user of Google Photos, you may have noticed how the application automatically extracts and groups faces of people from the photos that you back up to the cloud.

Face Recognition in the Google Photos web application

A photo application such as Google's achieves this through the detection of faces of humans (and pets too!) in your photos and by then grouping similar faces together. Detection and then classification of faces in images is a common task in deep learning with neural networks.

In the first step of this tutorial, we'll use a pre-trained MTCNN model in Keras to detect faces in images. Once we've extracted the faces from an image, we'll compute a similarity score between these faces to find if they belong to the same person.

Prerequisites

Before you start with detecting and recognizing faces, you need to set up your development environment. First, you need to "read" images through Python before doing any processing on them. We'll use the plotting library matplotlib to read and manipulate images. Install the latest version through the installer pip:

pip3 install matplotlib

To use any implementation of a CNN algorithm, you need to install keras. Download and install the latest version using the command below:

pip3 install keras

The algorithm that we'll use for face detection is MTCNN (Multi-task Cascaded Convolutional Networks), based on the paper Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks (Zhang et al., 2016). An implementation of the MTCNN algorithm for TensorFlow in Python 3.4 is available as a package. Run the following command to install the package through pip:

pip3 install mtcnn

To compare faces after extracting them from images, we'll use the VGGFace2 algorithm developed by the Visual Geometry Group at the University of Oxford. A TensorFlow-based Keras implementation of the VGG algorithm is available as a package for you to install:

pip3 install keras_vggface

While you may feel the need to build and train your own model, you'd need a huge training dataset and vast processing power. Since this tutorial focuses on the utility of these models, it uses existing, trained models by experts in the field.

Now that you've successfully installed the prerequisites, let's jump right into the tutorial!

Step 1: Face Detection with the MTCNN Model

The objectives in this step are as follows:

retrieve images hosted externally to a local server
read images through matplotlib's imread() function
detect and explore faces through the MTCNN algorithm
extract faces from an image

1.1 Store External Images

You may often be doing an analysis from images hosted on external servers. For this example, we'll use two images of Lee Iacocca, the father of the Mustang, hosted on the BBC and The Detroit News sites.

To temporarily store the images locally for our analysis, we'll retrieve each from its URL and write it to a local file. Let's define a function store_image for this purpose:

import urllib.request

def store_image(url, local_file_name):
    with urllib.request.urlopen(url) as resource:
        with open(local_file_name, 'wb') as f:
            f.write(resource.read())

You can now simply call the function with the URL and the local file in which you'd like to store the image:

store_image('https://ichef.bbci.co.uk/news/320/cpsprodpb/5944/production/_107725822_55fd57ad-c509-4335-a7d2-bcc86e32be72.jpg',
            'iacocca_1.jpg')
store_image('https://www.gannett-cdn.com/presto/2019/07/03/PDTN/205798e7-9555-4245-99e1-fd300c50ce85-AP_080910055617.jpg?width=540&height=&fit=bounds&auto=webp',
            'iacocca_2.jpg')

After successfully retrieving the images, let's detect faces in them.

1.2 Detect Faces in an Image

For this purpose, we'll make two imports — matplotlib for reading images, and mtcnn for detecting faces within the images:

from matplotlib import pyplot as plt
from mtcnn.mtcnn import MTCNN

Use the imread() function to read an image:

image = plt.imread('iacocca_1.jpg')

Next, initialize an MTCNN() object into the detector variable and use the .detect_faces() method to detect the faces in an image. Let's see what it returns:

detector = MTCNN()

faces = detector.detect_faces(image)
for face in faces:
    print(face)

For every face, a Python dictionary is returned, which contains three keys. The box key contains the boundary of the face within the image. It has four values: x- and y- coordinates of the top left vertex, width, and height of the rectangle containing the face. The other keys are confidence and keypoints. The keypoints key contains a dictionary containing the features of a face that were detected, along with their coordinates:

{'box': [160, 40, 35, 44], 'confidence': 0.9999798536300659, 'keypoints': {'left_eye': (172, 57), 'right_eye': (188, 57), 'nose': (182, 64), 'mouth_left': (173, 73), 'mouth_right': (187, 73)}}

1.3 Highlight Faces in an Image

Now that we've successfully detected a face, let's draw a rectangle over it to highlight the face within the image to verify if the detection was correct.

To draw a rectangle, import the Rectangle object from matplotlib.patches:

from matplotlib.patches import Rectangle

Let's define a function highlight_faces to first display the image and then draw rectangles over faces that were detected. First, read the image through imread() and plot it through imshow(). For each face that was detected, draw a rectangle using the Rectangle() class.

Finally, display the image and the rectangles using the .show() method. If you're using Jupyter notebooks, you may use the %matplotlib inline magic command to show plots inline:

def highlight_faces(image_path, faces):
    # display image
    image = plt.imread(image_path)
    plt.imshow(image)

    ax = plt.gca()

    # for each face, draw a rectangle based on coordinates
    for face in faces:
        x, y, width, height = face['box']
        face_border = Rectangle((x, y), width, height,
                                fill=False, color='red')
        ax.add_patch(face_border)
    plt.show()

Let's now display the image and the detected face using the highlight_faces() function:

highlight_faces('iacocca_1.jpg', faces)

Detected face in an image of Lee Iacocca. Source: BBC

Let's display the second image and the face(s) detected in it:

image = plt.imread('iacocca_2.jpg')
faces = detector.detect_faces(image)

highlight_faces('iacocca_2.jpg', faces)

The Detroit News

In these two images, you can see that the MTCNN algorithm correctly detects faces. Let's now extract this face from the image to perform further analysis on it.

1.4 Extract Face for Further Analysis

At this point, you know the coordinates of the faces from the detector. Extracting the faces is a fairly easy task using list indices. However, the VGGFace2 algorithm that we use needs the faces to be resized to 224 x 224 pixels. We'll use the PIL library to resize the images.

The function extract_face_from_image() extracts all faces from an image:

from numpy import asarray
from PIL import Image

def extract_face_from_image(image_path, required_size=(224, 224)):
    # load image and detect faces
    image = plt.imread(image_path)
    detector = MTCNN()
    faces = detector.detect_faces(image)

    face_images = []

    for face in faces:
        # extract the bounding box from the requested face
        x1, y1, width, height = face['box']
        x2, y2 = x1 + width, y1 + height

        # extract the face
        face_boundary = image[y1:y2, x1:x2]

        # resize pixels to the model size
        face_image = Image.fromarray(face_boundary)
        face_image = face_image.resize(required_size)
        face_array = asarray(face_image)
        face_images.append(face_array)

    return face_images

extracted_face = extract_face_from_image('iacocca_1.jpg')

# Display the first face from the extracted faces
plt.imshow(extracted_face[0])
plt.show()

Here is how the extracted face looks from the first image.

Extracted and resized face from first image
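The similarity-scoring step comes after this excerpt, but to sketch where these 224 x 224 faces are headed: the keras_vggface package installed earlier can turn each face into a feature vector, and the cosine distance between two vectors can act as the similarity score. This is a minimal sketch, not the article's exact code, and it assumes scipy is also installed:

from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input
from scipy.spatial.distance import cosine
from numpy import asarray

def get_face_embeddings(face_images):
    # prepare the extracted faces for the VGGFace2 model
    samples = asarray(face_images, 'float32')
    samples = preprocess_input(samples, version=2)

    # resnet50 backbone with average pooling yields one vector per face
    model = VGGFace(model='resnet50', include_top=False,
                    input_shape=(224, 224, 3), pooling='avg')
    return model.predict(samples)

# a small cosine distance between two embeddings suggests the same person
faces_1 = extract_face_from_image('iacocca_1.jpg')
faces_2 = extract_face_from_image('iacocca_2.jpg')
score = cosine(get_face_embeddings(faces_1)[0],
               get_face_embeddings(faces_2)[0])
print(score)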

The post Face Detection and Recognition with Keras appeared first on SitePoint.

React Native End-to-end Testing and Automation with Detox

Nov 6, 2019

Description:

Introducing Detox, a React Native End-to-end Testing and Automation Framework

Detox is an end-to-end testing and automation framework that runs on a device or a simulator, just like an actual end user.

Software development demands fast responses to user and/or market needs. This fast development cycle can result (sooner or later) in parts of a project being broken, especially as the project grows large. Developers get overwhelmed with all the technical complexities of the project, and even the business people start to find it hard to keep track of all the scenarios the product caters for.

In this scenario, there’s a need for software to keep on top of the project and allow us to deploy with confidence. But why end-to-end testing? Aren’t unit testing and integration testing enough? And why bother with the complexity that comes with end-to-end testing?

First of all, the complexity issue has been tackled by most of the end-to-end frameworks, to the extent that some tools (whether free, paid or limited) allow us to record the test as a user, then replay it and generate the necessary code. Of course, that doesn’t cover the full range of scenarios that you’d be able to address programmatically, but it’s still a very handy feature.


End-to-end Integration and Unit Testing

End-to-end testing versus integration testing versus unit testing: I always find the word “versus” drives people to take sides, as if it were a war between good and evil, instead of learning from each other and understanding the why instead of the how. The examples are countless: Angular versus React, React versus Angular versus Vue, and even React versus Angular versus Vue versus Svelte. Each camp trash-talks the other.

jQuery made me a better developer by taking advantage of the facade pattern $('') to tame the wild DOM beast and keep my mind on the task at hand. Angular made me a better developer by taking advantage of componentizing the reusable parts into directives that can be composed (v1). React made me a better developer by taking advantage of functional programming, immutability, identity reference comparison, and the level of composability that I don’t find in other frameworks. Vue made me a better developer by taking advantage of reactive programming and the push model. I could go on and on, but I’m just trying to demonstrate the point that we need to concentrate more on the why: why this tool was created in the first place, what problems it solves, and whether there are other ways of solving the same problems.

As You Go Up, You Gain More Confidence

end-to-end testing graph that demonstrates the benefit of end-to-end testing and the confidence it brings

As you move further along the spectrum of simulating the user journey, you have to do more work to simulate the user’s interaction with the product. But on the other hand, you get the most confidence because you’re testing the real product the user interacts with. So you catch all the issues, whether it’s a styling issue that could make a whole section or a whole interaction process invisible or non-interactive, a content issue, a UI issue, an API issue, a server issue, or a database issue. You get all of this covered, which gives you the most confidence.

Why Detox?

We discussed the benefit of end-to-end testing to begin with and its value in providing the most confidence when deploying new features or fixing issues. But why Detox in particular? At the time of writing, it’s the most popular library for end-to-end testing in React Native and the one that has the most active community. On top of that, it’s the one React Native recommends in its documentation.

The Detox testing philosophy is “gray-box testing”: testing where the framework knows about the internals of the product it’s testing. In other words, it knows it’s running in React Native, knows how to start up the application as a child of the Detox process, and knows how to reload it if needed after each test. So each test result is independent of the others.

Prerequisites

macOS High Sierra 10.13 or above
Xcode 10.1 or above

Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Node 8.3.0 or above:

brew update && brew install node

Apple Simulator Utilities:

brew tap wix/brew
brew install applesimutils

Detox CLI 10.0.7 or above:

npm install -g detox-cli

See the Result in Action

First, let’s clone a very interesting open-source React Native project for the sake of learning, then add Detox to it:

git clone https://github.com/ahmedam55/movie-swiper-detox-testing.git
cd movie-swiper-detox-testing
npm install
react-native run-ios

Create an account on The Movie DB website to be able to test all the application scenarios. Then add your username and password to the .env file, replacing usernamePlaceholder and passwordPlaceholder respectively:

isTesting=true
username=usernamePlaceholder
password=passwordPlaceholder

After that, you can now run the tests:

detox test

Note that I had to fork this repo from the original one as there were a lot of breaking changes between detox-cli, detox, and the project libraries. Use the following steps as a basis for what to do:

Migrate it completely to the latest React Native project.
Update all the libraries to fix issues faced by Detox when testing.
Toggle animations and infinite timers if the environment is testing.
Add the test suite package.

Setup for New Projects

Add Detox to Our Dependencies

Go to your project’s root directory and add Detox:

npm install detox --save-dev

Configure Detox

Open the package.json file and add the following right after the project name config. Be sure to replace movieSwiper in the iOS config with the name of your app. Here we’re telling Detox where to find the binary app and the command to build it. (This is optional. We can always execute react-native run-ios instead.) Also choose which type of simulator: ios.simulator, ios.none, android.emulator, or android.attached. And choose which device to test on:

{ "name": "movie-swiper-detox-testing", // add these: "detox": { "configurations": { "ios.sim.debug": { "binaryPath": "ios/build/movieSwiper/Build/Products/Debug-iphonesimulator/movieSwiper.app", "build": "xcodebuild -project ios/movieSwiper.xcodeproj -scheme movieSwiper -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build", "type": "ios.simulator", "name": "iPhone 7 Plus" } } } }

Here’s a breakdown of what the config above does:

Execute react-native run-ios to create the binary app.
Search for the binary app at the root of the project: find . -name "*.app".
Put the result in the build directory.

Before firing up the test suite, make sure the device name you specified is available (for example, iPhone 7). You can do that from the terminal by executing the following:

xcrun simctl list

Here’s what it looks like:

device-list

Now that we’ve added Detox to our project and told it which simulator to start the application with, we need a test runner to manage the assertions and the reporting, whether it’s on the terminal or otherwise.

Detox supports both Jest and Mocha. We’ll go with Jest, as it has a bigger community and a bigger feature set. In addition to that, it supports parallel test execution, which could be handy to speed up the end-to-end tests as they grow in number.

Adding Jest to Dev Dependencies

Execute the following to install Jest:

npm install jest jest-cli --save-dev
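The excerpt ends before we write an actual test, but to give a flavor of what comes next, a first Detox test file typically looks something like this. This is a minimal sketch; the 'welcome' testID is a hypothetical ID you’d set on a component in the app:

describe('Movie Swiper', () => {
  beforeEach(async () => {
    // reload React Native between tests so each result is independent
    await device.reloadReactNative();
  });

  it('should show the welcome screen', async () => {
    // 'welcome' is a hypothetical testID on a component in the app
    await expect(element(by.id('welcome'))).toBeVisible();
  });
});

You’d then run the suite with detox test, as shown earlier.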

The post React Native End-to-end Testing and Automation with Detox appeared first on SitePoint.

How to Build Your First Amazon Alexa Skill

Nov 5, 2019

Description:

How to Build Your First Amazon Alexa Skill

Out of the box, Alexa supports a number of built-in skills, such as adding items to your shopping list or requesting a song. However, developers can build new custom skills by using the Alexa Skill Kit (ASK).

The ASK, a collection of APIs and tools, handles the hard work related to voice interfaces, including speech recognition, text-to-speech encoding, and natural language processing. ASK helps developers build skills quickly and easily.

In short, the sole reason that Alexa can understand a user’s voice commands is that it has skills defined. Every Alexa skill is a piece of software designed to understand voice commands. Also, each Alexa skill has its own logic defined that creates an appropriate response for the voice command. To give you an idea of some existing Alexa skills, they include:

ordering pizza at Domino's Pizza
calling for an Uber
telling you your horoscope

As mentioned, we can develop our own custom skills fitted to our needs with the Alexa Skill Kit, the collection of APIs and tools described above. The kit should get any developer started quickly with building their own custom skill.

In this article, you’ll learn how to create a basic "get a fact" Alexa skill. In short, we can ask Alexa to present us with a random cat fact. The complete code for completing our task can be found on GitHub. Before we get started, let's make sure we understand the Alexa skill terminology.

Mastering Alexa Skill Terminology

First, let's learn how a user can interact with a custom skill. This will be important for understanding the different concepts related to skills.

In order to activate a particular skill, the user has to call Alexa and ask to open it. For example: "Alexa, open cat fact". By doing this, we're calling the invocation name of the skill. Basically, the invocation name can be seen as the name of the application.

Now that we've started the right skill, we have access to the voice intents/commands the skill understands. As we want to keep things simple, we define a "Get Cat Fact" intent. We need to provide sample sentences that trigger this intent; an intent can be triggered by many example sentences, also called utterances. For example, a user might say "Give a fact". Therefore, we define the following example sentences:

"Tell a fact" "Give a cat fact" "Give a fact"

It's even possible to combine the invocation name with an intent like this: "Alexa, ask Cat Fact to give a fact".
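Behind the scenes, the Alexa developer console stores all of this as an interaction model in JSON. Here's a rough, illustrative sketch of what our intent and utterances could look like there; the intent name GetCatFactIntent is an assumption for illustration:

{
  "interactionModel": {
    "languageModel": {
      "invocationName": "custom cat fact",
      "intents": [
        {
          "name": "GetCatFactIntent",
          "samples": [
            "tell a fact",
            "give a cat fact",
            "give a fact"
          ]
        }
      ]
    }
  }
}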

Now that we know the difference between an invocation name and intent, let's move on to creating your first Alexa skill.

Creating an Amazon Developer Account

To get started, we need an Amazon Developer Account. If you have one, you can skip this section.

Signing up for an Amazon Developer account is a three-step process. Amazon asks for some personal information, acceptance of the terms of service, and a payment method. The advantage of signing up for an Amazon Developer account is that you get access to a plethora of other Amazon services. Once the signup has been completed successfully, you'll see the Amazon Developer dashboard.

Log yourself in to the dashboard and click on the Developer Console button in the top-right corner.

Open Developer Console

Next up, we want to open the Alexa Skills Kit.

Open Alexa Skills Kit

If you were unable to open the Alexa Skills Kit, use this link.

In the following section, we'll create our actual skill.

Creating Our First Custom Alexa Skill

Okay, we're set to create our first custom Alexa skill. Click the blue button Create Skill to open up the menu for creating a new skill.

Create Skill Button

Firstly, it will prompt us for the name of our skill. As you already know, we want random cat facts, so we'll call the skill custom cat fact (we can't use cat fact, as that's a built-in skill for Alexa devices). Next, it prompts us to pick a model for our skill. We can choose between some predefined models or go for a custom model that gives us full flexibility. As we don't want to be dealing with code we don't need, we'll go for the Custom option.

Note: If you choose a predefined skill, you get a list of interaction models and example sentences (utterances). However, even the custom skill is equipped with the most basic intents like Cancel, Help, NavigateHome, and Stop.

Pick Skill name

Next, we need to pick a way to host our skill. Again, we don't want to overcomplicate things, so we'll pick the Alexa-Hosted (Node.js) option. This means we don't have to run a back end ourselves, which would take some effort to make "Alexa compliant": responses have to be formatted according to the Amazon Alexa standards for a device to understand them. The Alexa-hosted option will:

host skills in your account up to the AWS Free Tier limits and get you started with a Node.js template. You will gain access to an AWS Lambda endpoint, 5 GB of media storage with 15 GB of monthly data transfer, and a table for session persistence.

Pick host method

Okay, now that all settings are in place, you can click the Create Skill button in the top-right corner of the screen. This button will generate the actual skill in our Amazon Developer account.

Modifying Your First Alexa Skill

Now if you navigate to the Alexa Developer Console, you'll find your skill listed there. Click the edit button to start modifying the skill.

Modify Alexa Skill

Next, Amazon will display the build tab for the Cat Fact skill. On the left-hand side, you'll find a list of intents that are defined for the skill. As mentioned before, by default the Alexa Skills Kit generates a Cancel, Stop, Help, and NavigateHome intent. The first three are helpful for a user who wants to quit the skill or doesn't know how to use it. The last one, NavigateHome, is only used for complex skills that involve multiple steps.

Interaction model

Step 1: Verify Invocation Name

First of all, let's verify if the invocation name for the skill is correct. The name should say "custom cat fact".

In case you change the name, make sure to hit the Save Model button on top of the page.

Invocation name

The post How to Build Your First Amazon Alexa Skill appeared first on SitePoint.

How to Build a Web App with GraphQL and React

Nov 1, 2019

Description:

In this tutorial, we'll learn to build a web application with React and GraphQL. We'll consume the API available from graphql-pokemon, served from this link, which lets you get information about Pokémon.

GraphQL is a query language for APIs and a runtime for fulfilling those queries created by Facebook. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

In this tutorial, we'll only build the front end of a GraphQL application, making use of Apollo to fetch data from a ready-made GraphQL API hosted on the web.

Let's get started with the prerequisites!

Prerequisites

There are a few prerequisites for this tutorial:

recent versions of Node.js and npm installed on your system
knowledge of JavaScript/ES6
familiarity with React

If you don't have Node and npm installed on your development machine, you can simply download the binaries for your system from the official website. You can also use NVM, a POSIX-compliant bash script to manage multiple active Node.js versions.

Installing create-react-app

Let's install the create-react-app tool that allows you to quickly initialize and work with React projects.

Open a new terminal and run the following command:

npm install -g create-react-app

Note: You may need to use sudo before your command in Linux and macOS or use a command prompt with administrator rights if you get EACCESS errors when installing the package globally on your machine. You can also simply fix your npm permissions.

At the time of writing, this installs create-react-app v3.1.1.

Creating a React Project

Now we're ready to create our React project.

Go back to your terminal and run the following command:

create-react-app react-pokemon

Next, navigate into your project's folder and start the local development server:

cd react-pokemon npm start

Go to http://localhost:3000 in your web browser to see your app up and running.

This is a screenshot of the app at this point:

The current state of our app

Installing Apollo Client

Apollo Client is a complete data management solution that's commonly used with React, but can be used with any other library or framework.

Apollo provides intelligent caching that enables it to be a single source of truth for the local and remote data in your application.

You'll need to install the following packages in your React project to work with Apollo:

graphql: the JavaScript reference implementation for GraphQL
apollo-client: a fully-featured caching GraphQL client with integrations for React, Angular, and more
apollo-cache-inmemory: the recommended cache implementation for Apollo Client 2.0
apollo-link-http: the most common Apollo Link, a system of modular components for GraphQL networking
react-apollo: this package allows you to fetch data from your GraphQL server and use it in building complex and reactive UIs using the React framework
graphql-tag: this package provides helpful utilities for parsing GraphQL queries, such as the gql tag

Open a new terminal and navigate to your project's folder, then run the following commands:

npm install graphql --save
npm install apollo-client --save
npm install apollo-cache-inmemory --save
npm install apollo-link-http --save
npm install react-apollo --save
npm install graphql-tag --save

Now that we've installed the necessary packages, we need to create an instance of ApolloClient.

Open the src/index.js file and add the following code:

import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';

const cache = new InMemoryCache();
const link = new HttpLink({
  uri: 'https://graphql-pokemon.now.sh/'
});
const client = new ApolloClient({
  cache,
  link
});

We first create an instance of InMemoryCache, then an instance of HttpLink and we pass in our GraphQL API URI. Next, we create an instance of ApolloClient and we provide the cache and link instances.
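Before wiring the client into React, you can sanity-check it directly in the browser console. This is a minimal sketch of my own (not from the article), assuming the API exposes a pokemons(first: n) field as the graphql-pokemon docs describe:

import gql from 'graphql-tag';

client
  .query({
    query: gql`
      {
        pokemons(first: 3) {
          name
        }
      }
    `
  })
  .then(result => console.log(result.data))
  .catch(error => console.error(error));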

Connect the Apollo Client to React Components

After creating the instance of ApolloClient, we need to connect it to our React component(s).

We'll use the new Apollo hooks, which allow us to easily bind GraphQL operations to our UI.

We can connect Apollo Client to our React app by simply wrapping the root App component with the ApolloProvider component — which is exported from the @apollo/react-hooks package — and passing the client instance via the client prop.

The ApolloProvider component is similar to React's Context provider. It wraps your React app and places the client in the context, which enables you to access it from anywhere in your app.

Now let's import the ApolloProvider component in our src/index.js file and wrap the App component as follows:
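The excerpt ends before showing that code. A minimal sketch of the wrapping, assuming the default create-react-app src/index.js and the client instance created above, might look like this:

import React from 'react';
import ReactDOM from 'react-dom';
import { ApolloProvider } from '@apollo/react-hooks';
import App from './App';

// ... ApolloClient setup from above ...

ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);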

The post How to Build a Web App with GraphQL and React appeared first on SitePoint.

How to Build Your First Discord Bot with Node.js

Oct 30, 2019

Description:

Nowadays, bots are used for automating various tasks. Since the release of Amazon's Alexa devices, the hype surrounding automation bots has only grown. Besides Alexa, other communication tools like Discord and Telegram offer APIs to develop custom bots.

This article will focus solely on creating your first bot with the exposed Discord API. Perhaps the best-known Discord bot is the Music Bot, which lets you type a song name, after which the bot adds a new user to your channel to play the requested song. It's a commonly used bot among younger people on gaming or streaming servers.

Let’s get started with creating a custom Discord bot.

Prerequisites

Node.js v10 or higher installed (basic knowledge)
a Discord account and Discord client
basic knowledge of using a terminal

Step 1: Set Up a Test Server

First of all, we need a test server on which we can later test our Discord bot. We can create a new server by clicking the plus icon in the left bottom corner.

click create server

A pop-up will be displayed that asks you if you want to join a server or create a new one. Of course, we want to create a new server.

select create server

Next, we need to input the name for our server. To keep things simple, I've named the server discord_playground. If you want, you can change the server location depending on where you're located to get a better ping.

server name

If everything went well, you should see your newly created server.

new server

Step 2: Generating Auth Token

When we want to control our bot via code, we need to register the bot first under our Discord account.

To register the bot, go to the Discord Developers Portal and log in with your account.

After logging in, you should be able to see the dashboard. Let's create a new application by clicking the New Application button.

developer dashboard

Next, you'll see a pop-up that asks you to input a name for your application. Let's call our bot my-greeter-bot. By clicking the Create button, Discord will create an API application.

create application

When the application has been created, you'll see the overview of the newly created my-greeter-bot application. You'll see information like a client ID and client secret. This secret will be used later as the authorization token.

overview greeter bot

Now, click on the Bot menu option in the Settings menu. Discord will build our my-greeter-bot application and add a bot user to it.

add bot

When the bot has been built, you get an overview of your custom bot. Take a look at the Token section. Copy this authorization token and write it down somewhere, as we'll need it later to connect to our bot user.

bot tab overview
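The excerpt ends here, but to show where that token is headed, here's a minimal connection sketch using the discord.js package. This is my own illustrative sketch, not code from the article, and the token string is a placeholder:

const Discord = require('discord.js');

const client = new Discord.Client();

// Fires once the bot has logged in successfully
client.on('ready', () => {
  console.log(`Logged in as ${client.user.tag}`);
});

// Replace the placeholder with the token copied from the Bot tab
client.login('YOUR-AUTH-TOKEN');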

The post How to Build Your First Discord Bot with Node.js appeared first on SitePoint.

What Is Functional Programming?

Oct 29, 2019

Description:

As a programmer, you probably want to write elegant, maintainable, scalable, predictable code. The principles of functional programming, or FP, can significantly aid in these goals.

Functional programming is a paradigm, or style, that values immutability, first-class functions, referential transparency, and pure functions. If none of those words makes sense to you, don't worry! We're going to break down all this terminology in this article.

Functional programming evolved from lambda calculus, a mathematical system built around function abstraction and generalization. As a result, a lot of functional programming languages look very mathematical. Good news, though: you don't need to use a functional programming language to bring functional programming principles to your code. In this post, we'll use JavaScript, which has a lot of features that make it amenable to functional programming while not being tied to that paradigm.
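To make two of those terms concrete before moving on, here's a small JavaScript sketch of my own (not from the article) showing a pure function and an immutable update:

// A pure function: its output depends only on its inputs,
// and calling it causes no side effects
const add = (a, b) => a + b;

// Immutability: return a new array instead of mutating the input
const append = (list, item) => [...list, item];

const original = [1, 2];
const updated = append(original, 3);

console.log(add(2, 3)); // 5, every time
console.log(original);  // [1, 2] (unchanged)
console.log(updated);   // [1, 2, 3]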

The Core Principles of Functional Programming

Now that we've discussed what functional programming is, let's talk about the core principles behind FP.

The post What Is Functional Programming? appeared first on SitePoint.

6 Tools for Debugging React Native

Oct 28, 2019

Description:

Debugging is an essential part of software development. It’s through debugging that we know what’s wrong and what’s right, what works and what doesn’t. Debugging provides the opportunity to assess our code and fix problems before they’re pushed to production.

In the React Native world, debugging may be done in different ways and with different tools, since React Native is composed of different environments (iOS and Android), which means there’s an assortment of problems and a variety of tools needed for debugging.

Thanks to the large number of contributors to the React Native ecosystem, many debugging tools are available. In this brief guide, we’ll explore the most commonly used of them, starting with the Developer Menu.

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. — Brian W. Kernighan

The Developer Menu

the developer menu

The in-app developer menu is your first gate for debugging React Native. It has many options that we can use to do different things. Let's break down each option.

Reload: reloads the app
Debug JS Remotely: opens a channel to a JavaScript debugger
Enable Live Reload: makes the app reload automatically whenever you save a file
Enable Hot Reloading: watches for changes in an edited file and injects them without a full reload
Toggle Inspector: toggles an inspector interface, which allows us to inspect any UI element on the screen and its properties, and presents an interface with other tabs, such as networking (which shows us the HTTP calls) and performance.

The post 6 Tools for Debugging React Native appeared first on SitePoint.

Why Your Agency Should Offer Managed Cloud Hosting to Clients

Oct 28, 2019

Description:

This article was created in partnership with Cloudways. Thank you for supporting the partners who make SitePoint possible.

When it comes to end-to-end services, digital agencies offer an impressive range. From requirement analysis to post-deployment maintenance, these agencies do everything to make sure that their clients are able to fully leverage their projects for maximum business efficiency.

In this backdrop, many agencies (particularly those that deal with web-based projects) also offer hosting as part of their services to their customers. While small and up-and-coming digital agencies might not have hosting on their service brochure, mid-tier and top-shelf agencies see hosting as an integral service offering to their clients.

Setting up Hosting for Customers

For a web-based project, web hosting is an essential component that determines the success (or failure) of the project. Since the agency has developed the project, many clients trust agency-managed hosting for it.

High-performance applications (online stores and CRM in particular) demand a hosting solution that’s able to keep pace with the high request volume and a large number of concurrent connections. Clients with these projects can’t compromise on the post-deployment performance of the applications. As such, agencies prefer an in-house hosting setup that caters to the specific requirements of the projects.

Agencies Benefit From In-house Hosting

Before going into what benefits agencies get from an in-house hosting setup, it’s important to understand the major requirements of high-performance projects. Without going too much into the details, in-house hosting solutions are set up to make sure that custom-built projects continue to perform on the following parameters:

the number of visitors per hour/day/month
the number of simultaneous visitors
the maximum number of connections allowed
the number of simultaneous requests/orders
the size and complexity of the product catalog (number of products, product categories, attributes)
the content requirements and traffic on content assets such as blogs
the volume of search queries on the site
the size of, and number of connections to, the database

With in-house hosting solutions, agencies (and their clients) get a whole range of benefits such as those outlined below.

Custom Hardware and Software

Hardware requirements for custom, high-performance projects generally include three components: CPU, RAM and Disk Space. Since each project has custom requirements that are often not available in off-the-shelf hosting solutions available in the market, agencies opt for setting up in-house hardware platforms for their customers.

Custom hardware setups usually cost more than the conventional, commercially available hosting hardware architecture. The cost of setting up and maintaining the hosting architecture is usually the responsibility of the dev agency, which usually bills the client for these services.

Another related (and in my opinion, more important) requirement of these projects is a custom environment that comprises an OS layer and a facilitation layer made of servers and caches. A custom environment allows agencies to build their projects without worrying about conflicts with the OS and server software required to execute the codebase. Thanks to in-house hosting, digital agencies can completely customize the OS and server layer to the project specifications.

End-to-End Management of Project Hosting

Project requirements change and clients often revise their requirements and scope. These changes also impact the hosting requirements and specifications. Since the hosting process is being managed in-house, the agency can take proactive actions to improve hosting setup specifications and ensure continued performance for the application.

Passive Income Stream

In almost all cases, agency-managed hosting solutions are built and maintained on the client’s dollars. The agency proposes hosting setup specifications and sets it up once the client pays for it. Once the setup is active, the client pays for the maintenance and upkeep of the hosting solution. This is a passive income channel that is often an important supplement to agency revenues.

Challenges of Agency-managed Hosting

Despite the benefits, managing an in-house hosting setup can prove to be a drag on the agency operations. In particular, agency-managed hosting causes the following challenges for the business processes.

Hosting Architecture Requires Continuous Attention

Since this is an in-house managed hosting solution, it's obvious that the agency is responsible for keeping both the hardware and software layers operational. While the hardware layer (the physical server machines and networking equipment) has a lower failure rate, it's important to note that the software components of the hosting solution require detailed attention and upkeep.

Both hardware and software vendors regularly release patches that fix bugs and enhance product functionality. In many cases, these patches are mission-critical and essential for the continued performance of the project’s hosting. In in-house managed hosting, this is the responsibility of a dedicated team that performs no other function.

The Constant Need for Security

Web servers are the prime target of cybercriminals because of the wealth of information and user data on them. The problem with server security is that it's a full-time function that requires specialists on the team. The same goes for clients’ applications (CMSs such as WordPress are especially vulnerable) that could potentially open up security loopholes in the server and application security. Not many agencies can afford a dedicated infosec expert on the payroll. Thus, there's always the danger that clients’ applications can get hacked because the agency-managed hosting is unable to maintain the required security standards.

Sysadmins Prove to be an Overhead

Sysadmins are among the highest-paid professions in the ICT industry, and rightly so! They manage entire data centers and handle all aspects of hosting servers from provisioning to maintenance. The problem with sysadmins is the high recruitment and operational costs of these professionals. Thus, hiring a sysadmin to manage in-house hosting is a serious decision that's out of the budget of many dev agencies.

Deviation from the Core Business

Digital agencies are in the business of building applications and custom projects that create value for their clients. An in-house hosting solution requires competence that lies outside the normal scope of the dev agencies. In addition, managing hosting solutions require expenses that eat away into profits without generating enough revenue to justify their inclusion in business offerings.

Shared Hosting is a False Start

The good news is that many agencies are aware of the issues with in-house, agency-managed hosting and have come to realize that this is not the ideal solution for managing clients’ hosting focused expectations.

However, since clients' requirements continue to grow and the need for hosting solutions for custom-developed apps is on the rise, a number of agencies have turned to shared hosting as an alternative to agency-managed, in-house hosting solutions.

When opting for shared hosting solutions, agencies try to reduce the cost of hosting solutions while providing a comparable hosting solution to the clients.

Before going into the description of why shared hosting solutions are in fact counterproductive for dev agencies, it's important to understand how shared hosting solutions work.

Shared Hosting in a Nutshell

As the name implies, shared hosting is a solution where several websites/applications are hosted on a single physical server. This means that the physical resources (CPU, RAM, disk space and, in some cases, bandwidth) are shared among the websites hosted on the server.

While this is not a bad solution per se, it's not the right fit for high-performance applications. These applications have minimum server resource requirements that often exceed the quota allocated by the shared hosting server.

Many digital agencies try to integrate shared hosting solutions in their customer-focused services by eliminating sysadmins from the equation and asking the developers to manage the hosting servers for the clients.

The post Why Your Agency Should Offer Managed Cloud Hosting to Clients appeared first on SitePoint.

How to Build a Tic Tac Toe Game with Svelte

Oct 24, 2019

Description:

How to Build a Tic Tac Toe Game with Svelte

Svelte is a next generation way of building user interfaces.

While frameworks like React, Vue and Angular do the bulk of their work in the browser, Svelte takes a different approach: it does its work when you build the app, compiling your Svelte code to efficient vanilla JavaScript. So you get the best of both worlds. You write your code in Svelte, which makes it easy to read and reuse and gives you all the other benefits of using a framework, yet the result is a blazing-fast web app, because it compiles down to vanilla JavaScript without the runtime overhead of a JavaScript framework.

Svelte allows you to write less code. It also doesn’t use the concept of the Virtual DOM popularized by React. It instead surgically updates the DOM when the state of the app changes so the app starts fast and stays fast.

Prerequisites

For this tutorial, you need a basic knowledge of HTML, CSS and JavaScript.

You must also have installed the latest version of Node.js.

We’ll also be using npx, which comes installed by default with Node.js.

Throughout this tutorial we'll be using yarn. If you don't have yarn already installed, install it from the official yarn website.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node 12.10.0
npx 6.11.3
yarn 1.17.3

Getting Started with Svelte

In this tutorial, we’ll be building a Tic Tac Toe game in Svelte. By the end, you’ll be able to get up and running quickly with Svelte and get started in building your own apps in Svelte.

To get started, we must scaffold our app using degit. degit is more or less the same as git clone, but much quicker. You can learn more about it in its documentation.

Go ahead and make a new project by typing the following in the terminal:

$ npx degit sveltejs/template tic-tac-toe-svelte

npx lets you use the degit command without installing it globally.

Before npx, we would have to do the two following steps to achieve the same result:

$ npm install --global degit
$ degit sveltejs/template tic-tac-toe-svelte

Thanks to npx, we don’t bloat our global namespace, and we always use the latest version of degit.

degit clones the repo https://github.com/sveltejs/template into a tic-tac-toe-svelte folder.

Go ahead into the tic-tac-toe-svelte directory and install the dependencies by typing the following in the terminal:

$ cd tic-tac-toe-svelte
$ yarn

Now run the application by typing the following in the terminal:

$ yarn dev

Now open up the browser and go to http://localhost:5000 and you should see the following:

Svelte - Hello World

If you go into the src/ folder, you’ll see two files, App.svelte and main.js. main.js is the entry point of a Svelte app.

Open up the main.js and you should see the following:

import App from './App.svelte';

const app = new App({
  target: document.body,
  props: {
    name: 'world'
  }
});

export default app;

The above file imports App.svelte and instantiates it using a target element. It puts the component on the DOM's document.body. It also passes a name prop to the App component. This prop will be accessed in App.svelte.

Components in Svelte are written using .svelte files, which contain HTML, CSS and JavaScript. This will look familiar if you've worked with Vue.

Now open up App.svelte and you should see the following:

<script>
  export let name;
</script>

<style>
  h1 {
    color: purple;
  }
</style>

<h1>Hello {name}!</h1>

Firstly, we have the script tag, inside which we have a named export called name. This matches the prop passed in from main.js.

Then we have a style tag that lets us style all the elements in that particular file, which is scoped to that file only so there’s no issue of cascading.

Then, at the bottom, we have an h1 tag, inside which we have Hello {name}!. The name in curly brackets will be replaced by the actual value. This is called value interpolation. That’s why Hello world! is printed on the screen.

Basic Structure of a Svelte Component

All .svelte files will basically have the following structure:

<script>
  /* JavaScript logic */
</script>

<style>
  /* CSS styles */
</style>

<!-- HTML markup -->

The HTML markup will have some additional Svelte-specific syntax, but the rest is just plain HTML, CSS and JavaScript.

Making Tic Tac Toe in Svelte

Let’s get started with building our Tic Tac Toe game.

Replace main.js with the following:

import App from './App.svelte'

const app = new App({
  target: document.body,
})

export default app

We’ve basically removed the props property from App component instantiation.

Now replace App.svelte with the following:

<script>
  const title = "Tic Tac Toe";
</script>

<svelte:head>
  <title>{title}</title>
</svelte:head>

<h1>{title}</h1>

Here, we initialize a constant variable title with a string Tic Tac Toe.

Then, in the markup below, we use a special Svelte syntax, svelte:head, to set the title property in the head tag.

This is basically similar to doing this:

<head>
  <title>Tic Tac Toe</title>
</head>

But the advantage of using the svelte:head syntax is that the title can be changed at runtime.
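For instance, in a sketch of my own (not from the article), switching title to a let binding would let us update the tab title while the app runs:

<script>
  let title = "Tic Tac Toe";

  // Hypothetical: change the document title after two seconds
  setTimeout(() => {
    title = "Your move!";
  }, 2000);
</script>

<svelte:head>
  <title>{title}</title>
</svelte:head>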

We then use the same title property in our h1 tag. It should now look like this:

Svelte - Tic Tac Toe

Now create two other files in the src/ directory named Board.svelte and Square.svelte.

Open Square.svelte and paste in the following:

<script>
  export let value;
</script>

<style>
  .square {
    flex: 1 0 25%;
    width: 50px;
    height: 70px;
    background-color: whitesmoke;
    border: 2px solid black;
    margin: 5px;
    padding: 5px;
    font-size: 20px;
    text-align: center;
  }

  .square:hover {
    border: 2px solid red;
  }
</style>

<button class="square">{value}</button>

Basically, we’re creating a button and styling it.

Now open up Board.svelte and paste the following:

<script>
  import Square from "./Square.svelte";

  let squares = [null, null, null, null, null, null, null, null, null];
</script>

<style>
  .board {
    display: flex;
    flex-wrap: wrap;
    width: 300px;
  }
</style>

<div class="board">
  {#each squares as square, i}
    <Square value={i} />
  {/each}
</div>

Here we’ve imported the Square component. We’ve also initialized the squares array, which will contain our X and 0’s data which is currently null.

The post How to Build a Tic Tac Toe Game with Svelte appeared first on SitePoint.

How to Install Docker on Windows 10 Home

Oct 22, 2019

Description:

How to Install Docker on Windows 10 Home

If you've ever tried to install Docker for Windows, you've probably come to realize that the installer won't run on Windows 10 Home. Only Windows Pro, Enterprise or Education support Docker. Upgrading your Windows license is pricey, and also pointless, since you can still run Linux containers on Windows without relying on Hyper-V technology, a requirement for Docker for Windows.

If you plan on running Windows Containers, you'll need a specific version and build of Windows Server. Check out the Windows container version compatibility matrix for details.

99.999% of the time, you only need a Linux Container, since it supports software built using open-source and .NET technologies. In addition, Linux Containers can run on any distro and on popular CPU architectures, including x86_64, ARM and IBM.

In this tutorial, I'll show you how to quickly set up a Linux VM on Windows 10 Home running Docker Engine with the help of Docker Machine. Here's a list of software you'll need to build and run Docker containers:

Docker Machine: a CLI tool for installing Docker Engine on virtual hosts
Docker Engine: runs on top of the Linux kernel; used for building and running containers
Docker Client: a CLI tool for issuing commands to Docker Engine via REST API
Docker Compose: a tool for defining and running multi-container applications

I'll show how to perform the installation in the following environments:

on Windows using Git Bash
on Windows Subsystem for Linux 2 (running Ubuntu 18.04)

First, allow me to explain how the Docker installation will work on Windows.

How it Works

As you probably know, Docker requires a Linux kernel to run Linux containers. For this to work on Windows, you'll need to set up a Linux virtual machine to run as a guest in Windows 10 Home.

docker windows home

Setting up the Linux VM can be done manually. The easiest way is to use Docker Machine to do this work for you by running a single command. This Docker Linux VM can either run on your local system or on a remote server. The Docker client will use SSH to communicate with Docker Engine. Whenever you create and run images, the actual process happens within the VM, not on your host (Windows).

Let's dive into the next section to set up the environment needed to install Docker.

Initial Setup

You may or may not have the following applications installed on your system. I'll assume you don't. If you do, make sure to upgrade to the latest versions. I'm also assuming you're running the latest stable version of Windows. At the time of writing, I'm using Windows 10 Home version 1903. Let's start installing the following:

Install Git Bash for Windows. This will be our primary terminal for running Docker commands.

Install Chocolatey, a package manager for Windows. It will make the work of installing the rest of the programs easier.

Install VirtualBox and its extension pack. Alternatively, if you've finished installing Chocolatey, you can simply execute this command inside an elevated PowerShell terminal:

C:\ choco install virtualbox

If you'd like to try running Docker inside the WSL2 environment, you'll need to set up WSL2 first. You can follow this tutorial for step-by-step instructions.

Docker Engine Setup

Installing Docker Engine is quite simple. First we need to install Docker Machine.

Install Docker Machine by following the instructions on the Docker Machine docs page. Alternatively, you can execute this command inside an elevated PowerShell terminal:

C:\ choco install docker-machine

Using Git Bash terminal, use Docker Machine to install Docker Engine. This will download a Linux image containing the Docker Engine and have it run as a VM using VirtualBox. Simply execute the following command:

$ docker-machine create --driver virtualbox default

Next, we need to configure which ports are exposed when running Docker containers. Doing this will allow us to access our applications via localhost:<port>. Feel free to add as many as you want. To do this, you'll need to launch Oracle VM VirtualBox from your start menu. Select the default VM on the side menu. Next click on Settings > Network > Adapter 1 > Port Forwarding. You should find the ssh forwarding port already set up for you. You can add more like so:

docker vm ports

Next, we need to allow Docker to mount volumes located on your hard drive. By default, you can only mount from the C://Users/ directory. To add a different path, simply go to Oracle VM VirtualBox GUI. Select default VM and go to Settings > Shared Folders. Add a new one by clicking the plus symbol. Enter the fields like so. If there's an option called Permanent, enable it.

docker vm volumes

To get rid of the invalid settings error as seen in the above screenshot, simply increase Video Memory under the Display tab in the settings option. Video memory is not important in this case, as we'll run the VM in headless mode.

To start the Linux VM, simply execute this command in Git Bash. The Linux VM will launch. Give it some time for the boot process to complete. It shouldn't take more than a minute. You'll need to do this every time you boot your host OS:

$ docker-machine start default

Next, we need to set up our Docker environment variables. This is to allow the Docker client and Docker Compose to communicate with the Docker Engine running in the Linux VM, default. You can do this by executing the commands in Git Bash:

# Print out docker machine instance settings
$ docker-machine env default

# Set environment variables using Linux 'export' command
$ eval $(docker-machine env default --shell linux)

You'll need to set the environment variables every time you start a new Git Bash terminal. If you'd like to avoid this, you can copy eval output and save it in your .bashrc file. It should look something like this:

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="C:\Users\Michael Wanyoike\.docker\machine\machines\default"
export DOCKER_MACHINE_NAME="default"
export COMPOSE_CONVERT_WINDOWS_PATHS="true"

IMPORTANT: for DOCKER_CERT_PATH, you'll need to change the Linux file path to a Windows path format. Also note that the IP address assigned to the VM may change each time you start it, so you may need to update the value you saved.

In the next section, we'll install Docker Client and Docker Compose.

The post How to Install Docker on Windows 10 Home appeared first on SitePoint.

6 Top VPNs for Web Developers to Choose

Oct 21, 2019

Description:

This article was created by VPN Review. Thank you for supporting the partners who make SitePoint possible.

Do you need a VPN? You're probably familiar with them, but they work like this: you connect to a server, all your internet queries pass through it as an intermediary, and they are passed on from this server to the external net. You can use a VPN to, for example, bypass geographic content restrictions by connecting to a server in the appropriate country.

For web developers, these features are especially useful. First, when it's necessary to connect to sensitive development servers from outside networks, a VPN can make the connection safer by encrypting the traffic passing through your local network. This reduces the danger that the information you send to those servers will be intercepted by hackers. And while testing geolocation features, a VPN allows you to connect as a user from the appropriate location.

The downside is that the VPN needs access to your packets in order to relay them. You need to choose VPNs carefully, and it's especially worth being cautious with new services, and even free ones. Note that it's important to use your company's VPN for development, or a service that has been approved for use by your team.

To choose a service, carefully read VPN reviews on the web, look for expert opinions, and consider the product's reputation.

Let's analyze the most widespread VPN services:

The post 6 Top VPNs for Web Developers to Choose appeared first on SitePoint.

A Beginner’s Guide to Keras: Digit Recognition in 30 Minutes

Oct 18, 2019

Description:

A Beginner's Guide to Keras

Over the last decade, the use of artificial neural networks (ANNs) has increased considerably. People have used ANNs in medical diagnoses, to predict Bitcoin prices, and to create fake Obama videos! With all the buzz about deep learning and artificial neural networks, haven't you always wanted to create one for yourself? In this tutorial, we'll create a model to recognize handwritten digits.

We'll use the Keras library for training the model in this tutorial. Keras is a high-level library in Python that's a wrapper over TensorFlow, CNTK and Theano. By default, Keras uses a TensorFlow backend, and that's what we'll use to train our model.

Artificial Neural Networks

An artificial neural network is a mathematical model that converts a set of inputs to a set of outputs through a number of hidden layers. An ANN works with hidden layers, each of which is a transient form associated with a probability. In a typical neural network, each node of a layer takes all nodes of the previous layer as input. A model may have one or more hidden layers.

An ANN receives an input layer and transforms it through its hidden layers. The network is initialized by assigning random weights and biases to each node of the hidden layers. As the training data is fed into the model, it adjusts these weights and biases using the errors generated at each step. Hence, our model "learns" the pattern as it works through the training data.

Convolutional Neural Networks

In this tutorial, we're going to identify digits — which is a simple version of image classification. An image is essentially a collection of dots or pixels. A pixel can be identified through its component colors (RGB). Therefore, the input data of an image is essentially a 2D array of pixels, each representing a color.

If we were to train a regular neural network based on image data, we'd have to provide a long list of inputs, each of which would be connected to the next hidden layer. This makes the process difficult to scale up.

Convolutional Neural Network Architecture

In a convolutional neural network (CNN), the layers are arranged in a 3D array (X-axis coordinate, Y-axis coordinate and color). Consequently, a node of the hidden layer is only connected to a small region in the vicinity of the corresponding input layer, making the process far more efficient than in a traditional neural network. CNNs, therefore, are popular when it comes to working with images and videos.

Convolutional Neural Network Layers

The various types of layers in a CNN are as follows:

convolutional layers: these run the input through certain filters, which identify features in the image
pooling layers: these combine convolutional features, helping in feature reduction
flatten layers: these convert an N-dimensional layer to a 1D layer
classification layer: the final layer, which tells us the final result.

Let's now explore the data.

Explore MNIST Dataset

As you may have realized by now, we need labelled data to train any model. In this tutorial, we'll use the MNIST dataset of handwritten digits. This dataset is a part of the Keras package. It contains a training set of 60,000 examples and a test set of 10,000 examples. We'll train the model on the training set and validate the results on the test data. Further, we'll create an image of our own to test whether the model can correctly predict it.

First, let's import the MNIST dataset from Keras. The .load_data() method returns both the training and testing datasets:

from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

Let's try to visualize the digits in the dataset. If you're using Jupyter notebooks, use the following magic function to show inline Matplotlib plots:

%matplotlib inline

Next, import the pyplot module from matplotlib and use the .imshow() method to display the image:

import matplotlib.pyplot as plt

image_index = 35
print(y_train[image_index])
plt.imshow(x_train[image_index], cmap='Greys')
plt.show()

The label of the image is printed and then the image is displayed.

label printed and image displayed

Let's verify the sizes of the training and testing datasets:

print(x_train.shape)
print(x_test.shape)

Notice that each image has the dimensions 28 x 28:

(60000, 28, 28)
(10000, 28, 28)

Next, we may also wish to explore the dependent variable, stored in y_train. Let's print all labels until the digit that we visualized above:

print(y_train[:image_index + 1])

[5 0 4 1 9 2 1 3 1 4 3 5 3 6 1 7 2 8 6 9 4 0 9 1 1 2 4 3 2 7 3 8 6 9 0 5]

Cleaning Data

Now that we've seen the structure of the data, let's work on it further before creating the model.

To work with the Keras API, we need to reshape each image to the format of (M x N x 1). We'll use the .reshape() method to perform this action. Finally, normalize the image data by dividing each pixel value by 255 (since RGB value can range from 0 to 255):

# save input image dimensions
img_rows, img_cols = 28, 28

x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)

# cast to floats before normalizing; in-place division
# would fail on the original integer (uint8) arrays
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

x_train /= 255
x_test /= 255

Next, we need to convert the dependent variable in the form of integers to a binary class matrix. This can be achieved by the to_categorical() function:

from keras.utils import to_categorical

num_classes = 10

y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

We're now ready to create the model and train it!

The post A Beginner’s Guide to Keras: Digit Recognition in 30 Minutes appeared first on SitePoint.

8 Ways to Style React Components Compared

Oct 17, 2019

Description:

8 Ways to Style React Components Compared

I've been working with a couple of developers in my office on React projects who have varied levels of React experience. We've been solving some crazy problems, like handling the weird way Redux does state initialization, making an axios request payload work with PHP, and understanding what goes on in the background. This article arose out of a question about how to style React components.

The Various Styling Approaches

There are various ways to style React components. There's no single right method; it's a decision that should serve your particular use case, your personal preferences and, above all, the architectural goals of the way you work. For example, I make use of notifications in React using Noty, and the styling should be able to handle plugins too.

Some of my goals in answering the question included covering these:

Global namespacing
Dependencies
Reusability
Scalability
Dead-code elimination

There seem to be about eight different ways of styling React components widely used in the industry for production-level work:

Inline CSS
Normal CSS
CSS in JS
Styled Components
CSS Modules
Sass & SCSS
Less
Stylable

For each method, I'll look at the need for dependencies, the difficulty level, and whether or not the approach is really a good one or not.

Inline CSS

Dependencies: None
Difficulty: Easy
Approach: Worst

I don't think anyone needs an introduction to inline CSS. This is CSS styling applied directly to the element using HTML or JSX. You can include a JavaScript object for CSS in React components, with a few restrictions, such as writing every hyphenated property in camelCase. You can apply styles in two ways using JavaScript objects, as shown in the example.

Example

import React from "react";

const spanStyles = {
  color: "#fff",
  borderColor: "#00f"
};

const Button = props => (
  <button style={{ color: "#fff", borderColor: "#00f" }}>
    <span style={spanStyles}>Button Name</span>
  </button>
);

Regular CSS

Dependencies: None
Difficulty: Easy
Approach: Okay

Regular CSS is a common approach, arguably one step better than inline CSS. The styles can be imported to any number of pages and elements unlike inline CSS, which is applied directly to the particular element. Normal CSS has several advantages, such as decreasing the file size with a clean code structure.

You can maintain any number of style sheets, and it can be easier to change or customize styles when needed. But regular CSS might be a major problem if you're working on a bigger project with lots of people involved, especially without an agreed pattern to do styling in CSS.

Example

a:link {
  color: gray;
}
a:visited {
  color: green;
}
a:hover {
  color: rebeccapurple;
}
a:active {
  color: teal;
}

More Information

You can read more about regular CSS usage on the W3C's Learning CSS page. There are many playgrounds, such as JS Bin, JSFiddle, CodePen and Repl.it, where you can try it out live and get results in real time.

CSS in JS

Dependencies: jss, jss-preset-default, jss-cli
Difficulty: Easy
Approach: Decent

CSS in JS is an authoring tool for CSS which allows you to use JavaScript to describe styles in a declarative, conflict-free and reusable way. It can compile in the browser, on the server side or at build time in Node. It uses JavaScript as a language to describe styles in a declarative and maintainable way. It's a high performance JS-to-CSS compiler which works at runtime and server-side. When thinking in components, you no longer have to maintain a bunch of style sheets. CSS-in-JS abstracts the CSS model to the component level, rather than the document level (modularity).

Example

import React from "react";
import injectSheet from "react-jss";

// Create your styles. Remember, since React-JSS uses the default preset,
// most plugins are available without further configuration needed.
const styles = {
  myButton: {
    color: "green",
    margin: {
      // jss-expand gives more readable syntax
      top: 5, // jss-default-unit makes this 5px
      right: 0,
      bottom: 0,
      left: "1rem"
    },
    "& span": {
      // jss-nested applies this to a child span
      fontWeight: "bold" // jss-camel-case turns this into 'font-weight'
    }
  },
  myLabel: {
    fontStyle: "italic"
  }
};

const Button = ({ classes, children }) => (
  <button className={classes.myButton}>
    <span className={classes.myLabel}>{children}</span>
  </button>
);

// Finally, inject the stylesheet into the component.
const StyledButton = injectSheet(styles)(Button);

More Information

You can learn more about this approach in the JSS official documentation. There's also a way to try it out using their REPL (Read-eval-print Loop).

Styled Components

Dependencies: styled-components
Difficulty: Medium
Approach: Decent

Styled-components is an example of the above-mentioned CSS in JS. It basically gives us CSS with other properties we wish we had in CSS, like nesting. It also allows us to attach styles to a variable created in JavaScript, so you can create a React component along with its styles without having to create a separate CSS file. Styled-components lets us create custom reusable components, which are less of a hassle to maintain. Props can be used in styling the components in the same way they're passed to React components; props are used instead of CSS classes to set properties dynamically.

Example

import React from "react";
import styled, { css } from "styled-components";

const Button = styled.button`
  cursor: pointer;
  background: transparent;
  font-size: 16px;
  border-radius: 3px;
  color: palevioletred;
  margin: 0 1em;
  padding: 0.25em 1em;
  transition: 0.5s all ease-out;

  ${props =>
    props.primary &&
    css`
      background-color: white;
      color: green;
      border-color: green;
    `};
`;

export default Button;

Styled-components has a detailed documentation and the site also provides a live editor where you can try out the code. Get more information on styled components at styled-components: Basics.

CSS Modules

Dependencies: css-loader
Difficulty: Tough (uses loader configuration)
Approach: Better

If you've ever felt like the CSS global scope problem eats up most of your time when you have to find what a particular style does, or if you've had to write a CSS file without organizing it properly just to make the code work, or if getting rid of a CSS file makes you wonder whether you might break the whole app, I feel you. CSS Modules make sure that all the styles for a component live in one single place and apply only to that particular component. This certainly solves the global scope problem of CSS. The composition feature acts as a way to share styles between states; it's similar to a mixin in Sass, making it possible to combine multiple groups of styles.

Example

import React from "react";
import style from "./panel.css";

const Panel = () => (
  <div className={style.panelDefault}>
    <div className={style.panelBody}>A Basic Panel</div>
  </div>
);

export default Panel;

.panelDefault {
  border-color: #ddd;
}
.panelBody {
  padding: 15px;
}

Sass & SCSS

Dependencies: node-sass
Difficulty: Easy
Approach: Best

Sass claims that it's the most mature, stable, and powerful professional grade CSS extension language in the world. It's a CSS preprocessor, which adds special features such as variables, nested rules and mixins (sometimes referred to as “syntactic sugar”) into regular CSS. The aim is to make the coding process simpler and more efficient. Just like other programming languages, Sass allows the use of variables, nesting, partials, imports and functions, which add super powers to regular CSS.

Example

$font-stack: 'Open Sans', sans-serif;
$primary-color: #333;

body {
  font: 100% $font-stack;
  color: $primary-color;
}

Learn more about using and installing Sass with a variety of programming languages from their official documentation at Sass: Syntactically Awesome Style Sheets. If you want to try something out, there’s a service called SassMeister - The Sass Playground! where you can play around with different features of Sass and SCSS.

The post 8 Ways to Style React Components Compared appeared first on SitePoint.

Improving the Customer Journey with Flatfile’s Data Importer

Oct 17, 2019

Description:

This article was created in partnership with Flatfile.io. Thank you for supporting the partners who make SitePoint possible.

Close your eyes and imagine what it is like to import data into your application. Now open them.

Does it look something like this?

When creating a new product or improving an existing one, building useful features is, of course, priority number one. What many forget, though, is that innovation is wasted when the product experience isn't intuitive. When crafting the customer journey, a product manager must pay particular attention to the customer's first steps.

In many products, the data importing process is one of those early steps. Unfortunately, it is often a major pain point for the customer. This isn't all the PM's fault; we've come to expect data import to be a lousy software experience. So we keep sending customers to complex templates, long support articles, and cryptic error screens, often within the first few minutes of their journey.

Not anymore, though. Flatfile offers a simple solution: an intuitive, plug-and-play data importer.

What Is Flatfile?

Flatfile offers software companies an alternative to building their own data importers.

For users, that means no more jumping through hoops to upload their data. Now, they can use Flatfile's platform instead for a seamless, smooth, and supportive data importing experience. Flatfile is designed to support users of any technical skill level: firefighters, real estate agents, and data analysts all leverage the Flatfile Importer.

For PMs, that means no more worrying about handling the UX and engineering complexities of data import. Instead of planning a whole sprint, if not several, on building a custom solution, PMs can hand their engineering team Flatfile's documentation and get an elegant, crafted experience in a day. Put simply, Flatfile takes the pain out of building and maintaining a data importer, and lets product teams focus on innovative, differentiating features.

How Does Flatfile Work?

Flatfile lets users upload their spreadsheets with just a few clicks. They can also enter their data manually.

Once the data has been uploaded, Flatfile asks the user a few simple questions about how their spreadsheet maps to your product, ensuring that the data is aligned with the correct fields (e.g. first name, last name, email address, etc.).

The final step is data repair, where the user can review and update their import to correct any data errors. These errors appear based on validations you can pre-define, ensuring the tidiness of data before it ever hits your product database.

Once these steps are complete, the user is back in your application, and you have a clean, structured set of JSON data that's easy to pull into any database.

Meghann, Product Lead at Digsy.ai says: "When we were looking for solutions, we knew we could either build it ourselves or try to find something. Our product lead at the time heard of Flatfile. He presented it to the team, and ultimately we decided to implement Flatfile. We didn't see anything else on the market."

Why Should You Choose Flatfile?

When a user is importing data to your product, they want to use it and they want to see its value. Don't let them get hung up on spreadsheet templates and intimidating documentation. Flatfile takes most organizations less than a week to implement, and it gives users a simple, smooth, and delightful data import experience. Get started with a 30 day free trial and see how Flatfile.io can improve your customer journey.

Visit Flatfile.io.

The post Improving the Customer Journey with Flatfile’s Data Importer appeared first on SitePoint.

Build a JavaScript Command Line Interface (CLI) with Node.js

Oct 16, 2019

Description:

Build a Node CLI

As great as Node.js is for “traditional” web applications, its potential uses are far broader. Microservices, REST APIs, tooling, working with the Internet of Things and even desktop applications: it’s got your back.

Another area where Node.js is really useful is for building command-line applications — and that’s what we’re going to be doing in this article. We’re going to start by looking at a number of third-party packages designed to help work with the command line, then build a real-world example from scratch.

What we’re going to build is a tool for initializing a Git repository. Sure, it’ll run git init under the hood, but it’ll do more than just that. It will also create a remote repository on GitHub right from the command line, allow the user to interactively create a .gitignore file, and finally perform an initial commit and push.

As ever, the code accompanying this tutorial can be found on our GitHub repo.

Why Build a Command-line Tool with Node.js?

Before we dive in and start building, it’s worth looking at why we might choose Node.js to build a command-line application.

The most obvious advantage is that, if you’re reading this, you’re probably already familiar with it — and, indeed, with JavaScript.

Another key advantage, as we’ll see as we go along, is that the strong Node.js ecosystem means that among the hundreds of thousands of packages available for all manner of purposes, there are a number which are specifically designed to help build powerful command-line tools.

Finally, we can use npm to manage any dependencies, rather than have to worry about OS-specific package managers such as Aptitude, Yum or Homebrew.

Tip: that isn’t necessarily true, in that your command-line tool may have other external dependencies.

What We’re Going to Build: ginit

Ginit, our Node CLI in action

For this tutorial, We’re going to create a command-line utility which I’m calling ginit. It’s git init, but on steroids.

You’re probably wondering what on earth that means.

As you no doubt already know, git init initializes a git repository in the current folder. However, that’s usually only one of a number of repetitive steps involved in the process of hooking up a new or existing project to Git. For example, as part of a typical workflow, you may well:

initialise the local repository by running git init
create a remote repository, for example on GitHub or Bitbucket — typically by leaving the command line and firing up a web browser
add the remote
create a .gitignore file
add your project files
commit the initial set of files
push up to the remote repository.

There are often more steps involved, but we’ll stick to those for the purposes of our app. Nevertheless, these steps are pretty repetitive. Wouldn’t it be better if we could do all this from the command line, with no copy-pasting of Git URLs and such like?

So what ginit will do is create a Git repository in the current folder, create a remote repository — we’ll be using GitHub for this — and then add it as a remote. Then it will provide a simple interactive “wizard” for creating a .gitignore file, add the contents of the folder and push it up to the remote repository. It might not save you hours, but it’ll remove some of the initial friction when starting a new project.

With that in mind, let’s get started.

The Application Dependencies

One thing is for certain: in terms of appearance, the console will never have the sophistication of a graphical user interface. Nevertheless, that doesn’t mean it has to be plain, ugly, monochrome text. You might be surprised by just how much you can do visually, while at the same time keeping it functional. We’ll be looking at a couple of libraries for enhancing the display: chalk for colorizing the output and clui to add some additional visual components. Just for fun, we’ll use figlet to create a fancy ASCII-based banner and we’ll also use clear to clear the console.
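As a taste of what these add, here's a tiny sketch of my own (not from the article) combining chalk with a clui spinner; treat the exact message strings as illustrative:

const chalk = require('chalk');
const { Spinner } = require('clui');

const spinner = new Spinner('Working, please wait...');
spinner.start();

// Pretend to do two seconds of work, then report success in green
setTimeout(() => {
  spinner.stop();
  console.log(chalk.green('Done!'));
}, 2000);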

In terms of input and output, the low-level Readline Node.js module could be used to prompt the user and request input, and in simple cases is more than adequate. But we’re going to take advantage of a third-party package which adds a greater degree of sophistication — Inquirer. As well as providing a mechanism for asking questions, it also implements simple input controls: think radio buttons and checkboxes, but in the console.
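For instance, a minimal Inquirer prompt looks something like this (my own illustrative sketch, not code from the article):

const inquirer = require('inquirer');

inquirer
  .prompt([
    {
      type: 'input',
      name: 'username',
      message: 'Enter your GitHub username:'
    }
  ])
  .then(answers => {
    console.log(`Hello, ${answers.username}!`);
  });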

We’ll also be using minimist to parse command-line arguments.

Here’s a complete list of the packages we’ll use specifically for developing on the command line:

chalk — colorizes the output
clear — clears the terminal screen
clui — draws command-line tables, gauges and spinners
figlet — creates ASCII art from text
inquirer — creates interactive command-line user interface
minimist — parses argument options
configstore — easily loads and saves config without you having to think about where and how.

Additionally, we’ll also be using the following:

@octokit/rest — a GitHub REST API client for Node.js
lodash — a JavaScript utility library
simple-git — a tool for running Git commands in a Node.js application
touch — a tool for implementing the Unix touch command.

Getting Started

Although we’re going to create the application from scratch, don’t forget that you can also grab a copy of the code from the repository which accompanies this article.

Create a new directory for the project. You don’t have to call it ginit, of course:

mkdir ginit
cd ginit

Create a new package.json file:

npm init

Follow the simple wizard, for example:

name: (ginit)
version: (1.0.0)
description: "git init" on steroids
entry point: (index.js)
test command:
git repository:
keywords: Git CLI
author: [YOUR NAME]
license: (ISC)

Now install the dependencies:

npm install chalk clear clui figlet inquirer minimist configstore @octokit/rest lodash simple-git touch --save

Alternatively, simply copy-paste the following package.json file — modifying the author appropriately — or grab it from the repository which accompanies this article:

{ "name": "ginit", "version": "1.0.0", "description": "\"git init\" on steroids", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [ "Git", "CLI" ], "author": "Lukas White <hello@lukaswhite.com>", "license": "ISC", "bin": { "ginit": "./index.js" }, "dependencies": { "@octokit/rest": "^14.0.5", "chalk": "^2.3.0", "clear": "0.0.1", "clui": "^0.3.6", "configstore": "^3.1.1", "figlet": "^1.2.0", "inquirer": "^5.0.1", "lodash": "^4.17.4", "minimist": "^1.2.0", "simple-git": "^1.89.0", "touch": "^3.1.0" } }

Now create an index.js file in the same folder and require the following dependencies:

const chalk = require('chalk');
const clear = require('clear');
const figlet = require('figlet');

Adding Some Helper Methods

We’re going to create a lib folder where we’ll split our helper code into modules:

files.js — basic file management
inquirer.js — command-line user interaction
github.js — access token management
repo.js — Git repository management.

Let’s start with lib/files.js. Here, we need to:

get the current directory (to get a default repo name)
check whether a directory exists (to determine whether the current folder is already a Git repository by looking for a folder named .git).

This sounds straightforward, but there are a couple of gotchas to take into consideration.

Firstly, you might be tempted to use the fs module’s realpathSync method to get the current directory:

path.basename(path.dirname(fs.realpathSync(__filename)));

This will work when we’re calling the application from the same directory (e.g. using node index.js), but bear in mind that we’re going to be making our console application available globally. This means we’ll want the name of the directory we’re working in, not the directory where the application resides. For this purpose, it’s better to use process.cwd:

path.basename(process.cwd());

Secondly, the preferred method of checking whether a file or directory exists keeps changing. The current way is to use fs.stat or fs.statSync. These throw an error if there's no file, so we need to use a try … catch block.

Finally, it’s worth noting that when you’re writing a command-line application, using the synchronous version of these sorts of methods is just fine.

Putting that all together, let’s create a utility package in lib/files.js:

const fs = require('fs');
const path = require('path');

module.exports = {
  getCurrentDirectoryBase: () => {
    return path.basename(process.cwd());
  },

  directoryExists: (filePath) => {
    try {
      return fs.statSync(filePath).isDirectory();
    } catch (err) {
      return false;
    }
  }
};

Go back to index.js and ensure you require the new file:

const files = require('./lib/files');

With this in place, we can start developing the application.

Initializing the Node CLI

Now let’s implement the start-up phase of our console application.

In order to demonstrate some of the packages we’ve installed to enhance the console output, let’s clear the screen and then display a banner:

clear();

console.log(
  chalk.yellow(
    figlet.textSync('Ginit', { horizontalLayout: 'full' })
  )
);

The output from this is shown below.

The welcome banner on our Node CLI, created using Chalk and Figlet

Next up, let’s run a simple check to ensure that the current folder isn’t already a Git repository. That’s easy: we just check for the existence of a .git folder using the utility method we just created:

if (files.directoryExists('.git')) {
  console.log(chalk.red('Already a git repository!'));
  process.exit();
}

Tip: notice we’re using the chalk module to show a red-colored message.

The post Build a JavaScript Command Line Interface (CLI) with Node.js appeared first on SitePoint.

6 Popular Portfolio Builders for Designers

Oct 16, 2019

Description:

Popular Portfolio Builders for Designers

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Doing a great job of showcasing your work doesn’t have to be difficult. You don’t have to print out a batch of fancy brochures to distribute or carry around a folder containing a sheaf of papers. You’ll get far better results by using a website builder to create your own personal online portfolio for the whole world to see.

Building a portfolio website is relatively easy if you have the right tool for the job. If you can envision an awesome, engaging portfolio, you can build it, especially with any of the six portfolio website tools described in this article. Since all six are top-of-the-line tools, there’s no reason to settle for anything else.

Best of all, these website building tools are either free or offer free trials.

1. Portfoliobox

A screenshot of the Portfoliobox website

Portfoliobox was designed with photographers, artists, web designers, and other creative types in mind. Small business owners and entrepreneurs will find it attractive as well. This portfolio website builder is super easy to use, and since it’s not theme-based, it’s extremely flexible as well. Another plus is you don’t have to worry about coding.

As its name implies, this is a perfect tool for creating portfolio websites that range in style and looks from the epitome of professionalism to flat-out awesome. You can display your work or your products to the world, and by doing so you’ll hopefully earn a bushel of dollars, euros, or whatever, as well.

We suggest you try the free plan. In that way, you can get acquainted with Portfoliobox while having the tools at hand to create a medium-sized portfolio. If you have large galleries of images in mind, you may eventually want to upgrade to the pro plan. If you’re a student, opening a student account may be your best move.

Portfoliobox 4 is currently in the works and coming soon. Features include increased flexibility and functionality and a more intuitive interface. Portfoliobox has more than one million users.

2. Wix

Start creating your portfolio with Wix

Wix is a versatile and powerful website building tool you can use with great effect to promote your business or build an online shop. Where Wix really shines, however, is in the role of a portfolio website builder. Everything is drag and drop, supported by the necessary tools and features to customize any of the 500+ designer-made templates you choose to work with.

If you can visualize an online portfolio that’s truly stunning and a cut above the rest, you can build it — without coding. Rather than being restricted to trying to cleverly present a series of static images, you can use scroll effects, animations, video backgrounds and more to bring your portfolio to life and keep viewers engaged and encouraged to spread the word.

If you want total freedom to create your crowd-pleasing portfolio website, Wix is for you.

3. Weebly

Weebly screenshot

We said up front that you shouldn’t have to settle for less than the best, and that certainly applies to the Weebly portfolio website builder. What you design and build is limited only by your imagination, and if your technical expertise is somewhat challenged and you lack coding experience it doesn’t matter one bit. Everything you need is at your fingertips.

If “free” appeals to you, that’s just one more reason to go with Weebly. The website builder is free, hosting is free, and there’s even a mobile app you can use to manage your portfolio website and track its performance — from anywhere.

You can either purchase a domain from Weebly or use your own. If you need professional photos for your portfolio, Weebly can provide them at an affordable price.

4. Mobirise Website Builder

Mobirise screenshot

Since Mobirise is an offline website builder, you can download it and get started building an awesome portfolio website right away. No coding is necessary. Google AMP and Bootstrap 4 guarantee your website will be lightning-fast and 100% mobile friendly.

You’ll have plenty of trendy themes and templates to work with. Best of all, Mobirise is free for both personal and commercial uses — making it a very attractive option.

The post 6 Popular Portfolio Builders for Designers appeared first on SitePoint.

Go off Grid: Offline Reader for SitePoint Premium Now in Beta

Oct 15, 2019

Description:

We've done a massive amount of work on the SitePoint Premium experience this year, but users have been very clear about what they want to see next.

Our most requested feature is offline access to books in the SitePoint Premium library, and today, it's here.

We've been working on this for a long time and we're very excited to release what we think is a great way to read these books offline. But we hope you'll bear in mind that this is the first beta release of offline access, and we expect that there will be issues.

We're releasing this as an MVP to our Premium users so that we can iterate on it based on your feedback. This solution will allow you to read our content offline on any device, without having to download a specialized app.

You can now access this feature in the reader in SitePoint Premium. You will need to use a modern browser, as we're running service workers and IndexedDB to enable this feature.

Downloading a book is a two-stage process:

1. Click the download toggle as shown in the screenshot below, which will save the book to be accessed offline.
2. Save the page, either natively in the browser or via a bookmark, so you can access the book while offline.

Please try it out and give us your feedback. There's a dedicated thread for feedback over on the SitePoint Community, which you can access with your existing SitePoint Premium account.

Keep your eye on this feature. We're working to release a new version soon, which will make it easier to see which titles you have downloaded for offline access.

Head to the library and test the offline reader
Head over to our feedback thread

The post Go off Grid: Offline Reader for SitePoint Premium Now in Beta appeared first on SitePoint.

Getting Started with GraphQL and React Native

Oct 9, 2019

Description:

In 2012, Facebook engineer Nick Schrock started work on a small prototype to facilitate moving away from an old, unsupported partner API that powered the current Facebook News Feed. At the time, this was called “SuperGraph”. Fast forward to today, and SuperGraph has helped shape the open-source query language GraphQL, which has become quite the buzzword in recent times.

Facebook describes GraphQL as a “query language for APIs and a runtime for fulfilling those queries with your existing data”. Put simply, GraphQL is an alternative to REST that has been steadily gaining popularity since its release. Whereas with REST a developer would usually collate data from a series of endpoint requests, GraphQL allows the developer to send a single query to the server that describes the exact data requirement.

Prerequisites

For this tutorial, you’ll need a basic knowledge of React Native and some familiarity with the Expo environment. You’ll also need the Expo client installed on your mobile device or a compatible simulator installed on your computer. Instructions on how to do this can be found here.

Project Overview

In this tutorial, we’re going to demonstrate the power of GraphQL in a React Native setting by creating a simple coffee bean comparison app. So that you can focus on all of the great things GraphQL has to offer, I’ve put together the base template for the application using Expo.

A mockup of our coffee comparison app

To get started, you can clone this repo and navigate to the “getting-started” branch, which includes all of our basic views to start adding our GraphQL data to, as well as all of our initial dependencies, which at this stage are:

{ "expo": "^32.0.0", "react": "16.5.0", "react-native": "https://github.com/expo/react-native/archive/sdk-32.0.0.tar.gz", "react-navigation": "^3.6.1" }

To clone this branch, you’ll need to open up terminal and run this command:

git clone https://github.com/jamiemaison/graphql-coffee-comparison.git

To then navigate to the getting-started branch, you move into the newly cloned repo with cd graphql-coffee-comparison and run git checkout getting-started.

The next stage is to install our dependencies. To do this, make sure you’re on Node v11.10.1 and run npm install in the root directory of the project. This will add all of the dependencies listed above to your node_modules folder.

To start adding GraphQL to our React Native app, we’re going to need to install a few more dependencies that help us perform a few simple GraphQL functions. As is common with modern JavaScript development, you don’t need all of these dependencies to complete the data request, but they certainly help in giving the developer a better chance of structuring some clean, easy-to-read code. The dependencies you’ll need can be installed by running npm install --save apollo-boost react-apollo graphql-tag graphql.

Here’s an overview of what these dependencies are:

apollo-boost: a zero-configuration way of getting started with GraphQL in React/React Native
react-apollo: this provides an integration between GraphQL and the Apollo client
graphql-tag: a template literal tag that parses GraphQL queries
graphql: the JavaScript reference implementation for GraphQL

Once all of the necessary dependencies have finished installing, run npm start. You should now see your familiar Expo window, and if you launch the app (either via a simulator or on a device) then you should see a screen similar to this:

A mockup of our getting started page

In basic terms, this application has two screens that are managed by react-navigation, Home.js and CoffeePage.js. The Home screen contains a simple FlatList that renders all of the coffee beans supplied to its data field. When clicked on, the user is navigated to the CoffeePage for that item, which displays more information about the product. It’s our job to now populate these views with interesting data from GraphQL.

The complete coffee page

Apollo Server Playground

There are two main elements to any successful GraphQL transaction: the server holding the data, and the front-end query making the request. For the purposes of this tutorial, we aren’t going to start delving into the wonderful world of server-side code, so I’ve created our server for us ready to go. All you need to do is navigate to yq42lj36m9.sse.codesandbox.io in your favorite browser and leave it running throughout the course of development. For those interested, the server itself is running using apollo-server and contains just enough code to hold the data we need and serve it upon receiving an appropriate query. For further reading, you can head over to apollographql.com to read more about apollo-server.
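To sketch how the front end will talk to that server, here's a hedged illustration of wiring apollo-boost and react-apollo to the playground URL (the Root component and file layout are assumptions; the tutorial's actual wiring may differ):

import React from 'react';
import ApolloClient from 'apollo-boost';
import { ApolloProvider } from 'react-apollo';
import App from './App';

// apollo-boost's zero-config client, pointed at the playground server
const client = new ApolloClient({
  uri: 'https://yq42lj36m9.sse.codesandbox.io'
});

// Wrapping the root component lets any child run GraphQL queries
const Root = () => (
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>
);

export default Root;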

GraphQL Query Basics

Before we get into writing the actual code that’s going to request the data we need for our coffee bean comparison app, we should understand just how GraphQL queries work. If you already know how queries work or just want to get started with coding, you can skip ahead to the next section.

Note: these queries won’t work with our codesandbox server, but feel free to create your own at codesandbox.io if you’d like to test out the queries.

At its simplest level, we can use a flat structure for our queries when we know the shape of the data we’re requesting:

QUERY:

{
  coffee {
    blend
  }
}

RESPONSE:

{
  "coffee": {
    "blend": "rich"
  }
}

On the left, we see the GraphQL query requesting the blend field from coffee. This works well when we know exactly what our data structure is, but what about when things are less transparent? In this example, blend returns us a string, but queries can be used to request objects as well:

QUERY:

{
  coffee {
    beans {
      blend
    }
  }
}

RESPONSE:

{
  "coffee": {
    "beans": [
      { "blend": "rich" },
      { "blend": "smooth" }
    ]
  }
}

Here you can see we're simply requesting the beans object, with only the field blend being returned from that object. Each object in the beans array may well contain data other than blend, but GraphQL queries help us request only the data we need, cutting out any extra information that isn't necessary for our application.

So what about when we need to be more specific than this? GraphQL provides the capability for many things, but something that allows for extremely powerful data requests is the ability to pass arguments in your query. Take the following example:

QUERY:

{
  coffee(companyId: "2") {
    beans {
      blend
    }
  }
}

RESPONSE:

{
  "coffee": {
    "beans": [
      { "blend": "rich" },
      { "blend": "smooth" }
    ]
  }
}

What we see is that we can pass an argument — in this case, the companyId — which ensures that we are only returned beans from one particular company. With REST, you can pass a single set of arguments via query params and URL segments, but with GraphQL, every single field can get its own set of arguments. This allows GraphQL to be a dynamic solution for making multiple API fetches per request.
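In application code, such a query is typically parsed with graphql-tag before Apollo executes it. A minimal sketch follows; the GET_BEANS name is illustrative, not from the tutorial:

import gql from 'graphql-tag';

// Parse the query string into a GraphQL document Apollo can execute
const GET_BEANS = gql`
  {
    coffee(companyId: "2") {
      beans {
        blend
      }
    }
  }
`;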

The post Getting Started with GraphQL and React Native appeared first on SitePoint.

macOS Catalina: 5 Things Web Developers & Designers Should Know

Oct 8, 2019

Description:

macOS Catalina is here and available for download, and you've no doubt heard all about the breakup of iTunes and the new consumer-oriented entertainment apps shipping with the system.

But what do developers, designers, and other tech professionals need to know? We run through the key points.

32-bit Support Ends with Catalina

Are you relying on some older, obscure native app for a specific function, as so many developers and designers do? Your Catalina update could throw you a wildcard: it's the first macOS release that drops support for 32-bit apps.

During the setup process, you'll be given a list of installed apps that will no longer open after the update. If you want to keep using that tool, it's time to hit up the developer for a long-overdue update... or stay on Mojave for a while longer yet.

A Cross-Platform Catalyst

Mojave brought iOS ports of the News, Stocks, Voice Memos and Home apps to macOS. In Catalina, Apple is opening the tools that enabled these ports up to developers under the name of Catalyst.

While this doesn't directly affect web development work, it does make iOS a more attractive native development platform, which may inform your future platform choices. And if Apple's plan to reinvigorate stale macOS third-party app development with some of the action from iOS works, you could incorporate better productivity and development apps into your workflow in the near future.

For now, Catalyst is available to developers of iPad apps — we expect that to broaden in the future.

Voice Control

Catalina offers accessibility improvements in the form of improved Voice Control for those who have difficulty seeing, or using keyboards and mice.

Of course, developers should ensure that their apps work as well as they can with this tool, because it's the right thing to do.

Developers are known for their love of keyboard shortcut mastery, and no doubt the ability to create custom voice commands will inspire determined lifehackers. What if you never had to take your cursor or eyes off of VS Code to run other frequent workflows?

We look forward to seeing what the community comes up with.

Screen Time

Do you waste too much time using your computer for mindless entertainment, forcing you to stay up late making up the time productively?

Or are you a workaholic who just can't find the will to shut off and disconnect?

If you're like most of us in the industry, you're a mix of the two. Catalina introduces a variant of the Screen Time app that's been on iOS for a couple of years now.

Screen Time for macOS provides you with visual analytics that help you understand the way you're spending time on your device, which can often lead to some unexpected epiphanies. It also lets you schedule downtime, forcing you off the computer and into the real world at the right time.

As with iOS, you can also set time limits for specific apps, and there are some ways to moderate your web content usage without outright blocking your web browser from opening.

Sidecar: The Most Expensive Secondary Display You'll Ever Own

For developers, designers, and all other web professionals, the real headline feature of Catalina is Sidecar. Sidecar turns your iPad into a secondary display for your Mac, and it's really easy to enable (provided you have the requisite tablet, which is not included with the operating system update).

The best reason to use Sidecar over a standard display is Apple Pencil integration. Designers will love the ability to draw directly on the screen when using Sketch and Illustrator without switching devices all the time. You can even mirror your Mac's screen if you'd like an unobstructed view of what you're sketching on one side.

Most of us will use Sidecar as a place to dump Slack or a terminal window, but in any case, it's clear it'll be the most beneficial update for many of us.

How'd You Go?

Let us know how you went with the upgrade, and what you've enjoyed most so far. We always recommend waiting a few days for the bugs to shake out — especially with Apple's recent track record — but initial reports suggest the release version is pretty solid after all.

The post macOS Catalina: 5 Things Web Developers & Designers Should Know appeared first on SitePoint.

9 of the Best Animation Libraries for UI Designers

Oct 8, 2019

Description:

This is the latest update to our guide to helping you choose the right animation library for each task. We're going to run through 9 free, well-coded animation libraries best-suited to UI design work — covering their strengths and weaknesses, and when to choose each one.

Take your CSS animations to the next level with our Animating with CSS course by Donovan Hutchinson, the man behind CSS Animation Rocks.

Front-end web design has been through a revolution in the last decade. In the late noughties, most of us were still designing static magazine layouts. Nowadays, we're building “digital machines” with thousands of resizing, coordinated, moving parts.

Quite simply, great UI designers need to be great animators too — with a solid working understanding of web animation techniques.

Keep in mind that we're looking at each library from the perspective of a code-savvy UI designer, not as a “code guru” developer. Some of these libraries are pure CSS. Others are JavaScript, but none require anything more than basic HTML/CSS understanding to be useful. Link the library; add a CSS class.


Enjoy.

The 2017 Top 9 Animation Libraries List

Animate.css
Bounce.js
AnimeJS
Magic Animations
DynCSS
CSShake
Hover.CSS
Velocity.js
AniJS

Animate.css

Animate.css is one of the smallest and easiest-to-use CSS animation libraries available. Applying the Animate library to your project is as simple as adding the required CSS classes to your HTML elements. You can also use jQuery to call the animations on a particular event.

Animate.css

Creators: Daniel Eden
Released: 2013
Current Version: 3.5.2
Most Recent Update: April 2017
Popularity: 41,000+ stars on GitHub
Description: "A cross-browser library of CSS animations. As easy to use as an easy thing."
Library Size: 43 kB
GitHub: https://github.com/daneden/animate.css
License: MIT

As of mid-2017, it's still one of the most popular and widely used CSS animation libraries, and its minified file is small enough (16.6kb) for inclusion in mobile websites as well. It has 41,000 stars on GitHub and is used as a component in many larger projects.

Animate.css is still under active development after 4 years. We feel that this is one of the simplest and most robust animation libraries out there, and we'd definitely recommend it for your next project.

Bounce.js

Bounce.js is a tool and JavaScript library that focuses on providing a selection of unique bouncy CSS animations for your website.

Bounce.js

This project is open-source with its code on GitHub.

Creators: Tictail
Released: 2014
Current Version: 0.8.2
Most Recent Update: Feb 2015
Popularity: 4,967+ stars on GitHub
Description: "Create beautiful CSS3 powered animations in no time."
Library Size: 16 kB
GitHub: https://github.com/tictail/bounce.js
License: MIT

Bounce.js is a neat animation library shipped with about ten animation 'pre-sets' – hence the small size of the library. As with animate.css, the animations are smooth and flawless. You might want to consider using this library if your needs focus on 'pop and bubble' animation types and could benefit from a lower file size overhead.

AnimeJS

AnimeJS is described as a lightweight JavaScript animation library that 'works with any CSS Properties, individual CSS transforms, SVG or any DOM attributes, and JavaScript Objects'. It's pretty awesome – so awesome in fact, that the GIF capture I took below can't do justice to how smooth and buttery the motion is.

AnimeJS

This project is available on GitHub.

Creator: Julian Garnier
Released: 2016
Current Version: 2.0.2
Most Recent Update: March 2017
Popularity: 12,222+ stars on GitHub
Description: "JavaScript Animation Engine."
Library Size: 10.9kB
GitHub: https://github.com/juliangarnier/anime
License: MIT

AnimeJS is the only newcomer to our list, but it has won a lot of converts in the 12 months since its creation. It's incredibly versatile and powerful, and it wouldn't be out of place being used within HTML games. The only real question is: is it overkill for simple web apps?

Maybe, but as it's fast, small and relatively easy to learn, it's hard to find fault with it.

Magic Animations

Magic Animations has been one of the most impressive animation libraries available. It has many different animations, many of which are quite unique to this library. As with Animate.css, you can implement Magic by simply importing the CSS file. You can also make use of the animations from jQuery. This project offers a particularly cool demo application.

Magic Animations

The post 9 of the Best Animation Libraries for UI Designers appeared first on SitePoint.

Create a Cron Job on AWS Lambda

Oct 3, 2019

Description:

Create a Cron Job on AWS Lambda

Cron jobs are really useful tools in any Linux or Unix-like operating system. They allow us to schedule scripts to be executed periodically. Their flexibility makes them ideal for repetitive tasks like backups and system cleaning, but also data fetching and data processing.

For all the good things they offer, cron jobs also have some downsides. The main one is that you need a dedicated server or a computer that runs pretty much 24/7. Most of us don't have that luxury. For those of us who don't have access to a machine like that, AWS Lambda is the perfect solution.

AWS Lambda is an event-driven, serverless computing platform that's part of Amazon Web Services. It’s a computing service that runs code in response to events and automatically manages the computing resources required by that code. Not only is it available to run our jobs 24/7, but it also automatically allocates the resources needed for them.

Setting up a Lambda in AWS involves more than just implementing a couple of functions and hoping they run periodically. To get them up and running, several services need to be configured first and need to work together. In this tutorial, we'll first go through all the services we'll need to set up, and then we'll implement a cron job that will fetch some updated cryptocurrency prices.

Understanding the Basics

As we said earlier, some AWS services need to work together in order for our Lambda function to work as a cron job. Let's have a look at each one of them and understand their role in the infrastructure.

S3 Bucket

An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services' (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata. — TechTarget

Every Lambda function needs to be prepared as a “deployment package”. The deployment package is a .zip file consisting of the code and any dependencies that code might need. That .zip file can then be uploaded via the web console or located in an S3 bucket.

IAM Role

An IAM role is an IAM identity that you can create in your account that has specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. — Amazon

We’ll need to manage permissions for our Lambda function with IAM. At the very least it should be able to write logs, so it needs access to CloudWatch Logs. This is the bare minimum and we might need other permissions for our Lambda function. For more information, the AWS Lambda permissions page has all the information needed.

CloudWatch Events Rule

CloudWatch Events support cron-like expressions, which we can use to define how often an event is created. We'll also need to make sure that we add our Lambda function as a target for those events.
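For example, rate(5 minutes) creates an event every five minutes, while cron(0 12 * * ? *) fires every day at 12:00 UTC. Both are valid CloudWatch Events schedule expressions.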

Lambda Permission

Creating the events and targeting the Lambda function isn’t enough. We'll also need to make sure that the events are allowed to invoke our Lambda function. Anything that wants to invoke a Lambda function needs to have explicit permission to do that.
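For reference, granting such a permission by hand would look something like the following AWS CLI call (the function name, statement ID and ARN are placeholders; as noted below, the console takes care of this for us):

aws lambda add-permission \
  --function-name myCronFunction \
  --statement-id cron-event \
  --action "lambda:InvokeFunction" \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/myCronRule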

These are the building blocks of our AWS Lambda cron job. Now that we have an idea of all the moving parts of our job, let's see how we can implement it on AWS.

Implementing a Cron Job on AWS

A lot of the interactions we described earlier are taken care of by Amazon automatically. In a nutshell, all we need to do is to implement our service (the actual lambda function) and add rules to it (how often and how the lambda will be executed). Both permissions and roles are taken care of by Amazon; the defaults provided by Amazon are the ones we'll be using.

Lambda function

First, let's start by implementing a very simple lambda function. In the AWS dashboard, use the Find Services function to search for lambda. In the lambda console, select Create a function. At this point, we should be in Lambda > Functions > Create Function.

To get things going, let's start with a static log message. Our service will only be a print function. For this, we'll use Node.js 10.x as our runtime language. Give it a function name, and on Execution Role let's stay with Create a new role with basic lambda permissions. This is a basic set of permissions on IAM that will allow us to upload logs to Amazon CloudWatch Logs. Click Create Function.

Create a new lambda function

Our function is now created with an IAM Role. In the code box, substitute the default code with the following:

exports.handler = async (event) => {
    console.log("Hello Sitepoint Reader!");
    return {};
};

To check if the code is executing correctly, we can use the Test function. After giving a name to our test, it will execute the code and show its output in the Execution Result field just below our code.

If we test the code above, we can see that we get no response, but in the function logs we can see our message printed. This indicates that our service is running correctly, so we can proceed with our cron implementation.

The post Create a Cron Job on AWS Lambda appeared first on SitePoint.

Cloning Tinder Using React Native Elements and Expo

Oct 1, 2019

Description:

Cloning Tinder Using React Native Elements and Expo

Making pixel-perfect layouts on mobile is hard. Even though React Native makes it easier than its native counterparts, it still requires a lot of work to get a mobile app to perfection.

In this tutorial, we’ll be cloning the most famous dating app, Tinder. We’ll then learn about a UI framework called React Native Elements, which makes styling React Native apps easy.

Since this is just going to be a layout tutorial, we’ll be using Expo, as it makes setting things up much easier than plain old react-native-cli. We’ll also be making use of a lot of dummy data to make our app.

We’ll be making a total of four screens—Home, Top Picks, Profile, and Messages.

Prerequisites

For this tutorial, you need a basic knowledge of React Native and some familiarity with Expo. You’ll also need the Expo client installed on your mobile device or a compatible simulator installed on your computer. Instructions on how to do this can be found here.

You also need to have a basic knowledge of styles in React Native. Styles in React Native are basically an abstraction similar to that of CSS, with just a few differences. You can get a list of all the properties in the styling cheatsheet.

Throughout the course of this tutorial we’ll be using yarn. If you don’t have yarn already installed, install it from here.

Also make sure you’ve already installed expo-cli on your computer.

If it’s not installed already, then go ahead and install it:

$ yarn global add expo-cli

To make sure we’re on the same page, these are the versions used in this tutorial:

Node 11.14.0
npm 6.4.1
yarn 1.15.2
expo 2.16.1

Make sure to update expo-cli if you haven’t updated in a while, since expo releases are quickly out of date.

We’re going to build something that looks like this:

Tinder Demo in Expo

If you just want to clone the repo, the whole code can be found on GitHub.

Getting Started

Let’s set up a new Expo project using expo-cli:

$ expo init expo-tinder

It will then ask you to choose a template. You should choose tabs and hit Enter.

Expo Init - Choose A Template

Then it will ask you to name the project. Type expo-tinder and hit Enter again.

Expo Init - Name the Project

Lastly, it will ask you to press y to install dependencies with yarn or n to install dependencies with npm. Press y.

Expo Init - Install the dependencies

This bootstraps a brand new React Native app using expo-cli.

React Native Elements

React Native Elements is a cross-platform UI Toolkit for React Native with consistent design across Android, iOS and Web.

It’s easy to use and completely built with JavaScript. It’s also the first UI kit ever made for React Native.

It allows us to fully customize styles of any of our components the way we want so every app has its own unique look and feel.

It’s also open source and backed by a community of awesome developers.

You can build beautiful applications easily.

React Native Elements Demo

Cloning Tinder UI

We’ve already created a project named expo-tinder.

To run the project, type this:

$ yarn start

Press i to run the iOS Simulator. This will automatically run the iOS Simulator even if it’s not opened.

Press a to run the Android Emulator. Note that the emulator must be installed and started already before typing a. Otherwise it will throw an error in the terminal.

It should look like this:

Expo Tabs App

The post Cloning Tinder Using React Native Elements and Expo appeared first on SitePoint.

How to Build a News App with Svelte

Sep 27, 2019

Description:

How to Build a News App with Svelte

Svelte is a new JavaScript UI library that's similar in many ways to modern UI libraries like React. One important difference is that it doesn't use the concept of a virtual DOM.

In this tutorial, we'll be introducing Svelte by building a news application inspired by the Daily Planet, a fictional newspaper from the Superman world.

About Svelte

Svelte makes use of a new approach to building user interfaces. Instead of doing the necessary work in the browser, Svelte shifts that work to a compile-time phase that happens on the development machine when you're building your app.

In a nutshell, this is how Svelte works (as stated in the official blog):

Svelte runs at build time, converting your components into highly efficient imperative code that surgically updates the DOM. As a result, you're able to write ambitious applications with excellent performance characteristics.

Svelte is faster than the most powerful frameworks (React, Vue and Angular) because it doesn't use a virtual DOM and surgically updates only the parts that change.

We'll be learning about the basic concepts like Svelte components and how to fetch and iterate over arrays of data. We'll also learn how to initialize a Svelte project, run a local development server and build the final bundle.

Prerequisites

You'll need a few prerequisites to follow this tutorial comfortably:

Familiarity with HTML, CSS, and JavaScript (ES6+)
Node.js and npm installed on your development machine.

Node.js can be easily installed from the official website, or you can use NVM to install and manage multiple versions of Node on your system.

We'll be using a JSON API as the source of news for our app (the News API from newsapi.org, whose endpoint we call later on), so you'll need to get an API key by creating a free account and taking note of the key.

Getting Started

Now, let's start building our Daily Planet news application by using the degit tool for generating Svelte projects.

You can either install degit globally on your system or use the npx tool to execute it from npm. Open a new terminal and run the following command:

npx degit sveltejs/template dailyplanetnews

Next, navigate inside your project's folder and run the development server using the following commands:

cd dailyplanetnews
npm run dev

Your dev server will be listening at http://localhost:5000. If you make any changes, they'll be rebuilt and live-reloaded into your running app.

Open the main.js file of your project, and you should find the following code:

import App from './App.svelte';

const app = new App({
    target: document.body,
    props: {
        name: 'world'
    }
});

export default app;

This is where the Svelte app is bootstrapped by creating and exporting an instance of the root component, conventionally called App. The component takes an object with target and props attributes.

The target contains the DOM element where the component will be mounted, and props contains the properties that we want to pass to the App component. In this case, it's just a name with the world value.

Open the App.svelte file, and you should find the following code:

<script>
export let name;
</script>

<style>
h1 {
    color: purple;
}
</style>

<h1>Hello {name}!</h1>

This is the root component of our application. All the other components will be children of App.

Components in Svelte use the .svelte extension for source files, which contain all the JavaScript, styles and markup for a component.

The export let name; syntax creates a component prop called name. We use variable interpolation—{...}—to display the value passed via the name prop.

You can simply use plain old JavaScript, CSS, and HTML that you are familiar with to create Svelte components. Svelte also adds some template syntax to HTML for variable interpolation and looping through lists of data, etc.

Since this is a small app, we can simply implement the required functionality in the App component.

In the <script> tag, import the onMount() method from "svelte" and define the API_KEY, articles, and URL variables which will hold the news API key, the fetched news articles and the endpoint that provides data:

<script>
    export let name;

    import { onMount } from "svelte";

    const API_KEY = "<YOUR_API_KEY_HERE>";
    const URL = `https://newsapi.org/v2/everything?q=comics&sortBy=publishedAt&apiKey=${API_KEY}`;
    let articles = [];
</script>

onMount is a lifecycle method. Here’s what the official tutorial says about that:

Every component has a lifecycle that starts when it is created and ends when it is destroyed. There are a handful of functions that allow you to run code at key moments during that lifecycle. The one you'll use most frequently is onMount, which runs after the component is first rendered to the DOM.

Next, let's use the fetch API to fetch data from the news endpoint and store the articles in the articles variable when the component is mounted in the DOM:

<script>
    // [...]

    onMount(async function() {
        const response = await fetch(URL);
        const json = await response.json();
        articles = json["articles"];
    });
</script>

Since the fetch() method returns a JavaScript Promise, we can use the async/await syntax to make the code look synchronous and eliminate callbacks.
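Once articles is populated, the list can be rendered in the markup with Svelte's {#each} block. A minimal sketch, assuming each article object exposes title and description fields as the News API response does:

{#each articles as article}
  <div class="article">
    <h2>{article.title}</h2>
    <p>{article.description}</p>
  </div>
{/each}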

The post How to Build a News App with Svelte appeared first on SitePoint.

Real-time Location Tracking with React Native and PubNub

Sep 25, 2019

Description:

Building a Real-time Location Tracking App with React Native and PubNub

With ever-increasing usage of mobile apps, geolocation and tracking functionality can be found in a majority of apps. Real-time geolocation tracking plays an important role in many on-demand services, such as these:

taxi services like Uber, Lyft or Ola
food delivery services like Uber Eats, Foodpanda or Zomato
monitoring fleets of drones

In this guide, we’re going to use React Native to create real-time location tracking apps. We’ll build two React Native apps. One will act as a tracking app (called “Tracking app”) and the other will be the one that’s tracked (“Trackee app”).

Here’s what the final output for this tutorial will look like:

[video width="640" height="480" mp4="https://dab1nmslvvntp.cloudfront.net/wp-content/uploads/2019/09/1569381508tracking.mp4"][/video]

Prerequisites

This tutorial requires a basic knowledge of React Native. To set up your development machine, follow the official guide here.

Apart from React Native, we’ll also be using PubNub, a third-party service that provides real-time data transfer and updates. We’ll use this service to update the user coordinates in real time.

Register for a free PubNub account here.

Since we’ll be using Google Maps on Android, we’ll also need a Google Maps API key, which you can obtain on the Google Maps Get API key page.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node v10.15.0
npm 6.4.1
yarn 1.16.0
react-native 0.59.9
react-native-maps 0.24.2
pubnub-react 1.2.0

Getting Started

If you want to have a look at the source code of our Tracker and Trackee apps right away, here are their GitHub links:

Trackee App repo
Tracker App repo

Let’s start with the Trackee app first.

Trackee App

To create a new project using react-native-cli, type this in the terminal:

$ react-native init trackeeApp
$ cd trackeeApp

Now let’s get to the fun part — the coding.

Add React Native Maps

Since we’ll be using Maps in our app, we’ll need a library for this. We’ll use react-native-maps.

Install react-native-maps by following the installation instructions here.

Add PubNub

Apart from maps, we’ll also install the PubNub React SDK to transfer our data in real time:

$ yarn add pubnub-react
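As a rough sketch of where this is headed (the keys are placeholders for the ones from your PubNub dashboard, and the component shape is illustrative), the pubnub-react 1.x SDK is instantiated and initialized inside a component like so:

import React from 'react';
import PubNubReact from 'pubnub-react';

export default class App extends React.Component {
  constructor(props) {
    super(props);
    // Keys come from your free PubNub account
    this.pubnub = new PubNubReact({
      publishKey: 'YOUR_PUBLISH_KEY',
      subscribeKey: 'YOUR_SUBSCRIBE_KEY'
    });
    this.pubnub.init(this);
  }
  // ...
}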

After that, you can now run the app:

$ react-native run-ios
$ react-native run-android

You should see something like this on your simulator/emulator:

Trackee App

The post Real-time Location Tracking with React Native and PubNub appeared first on SitePoint.

How to Build Your First Telegram Chatbot with Node.js

Sep 18, 2019

Description:

So, this morning you woke up with the idea to develop a way to store and label interesting articles you've read. After playing with the idea, you figure a Telegram chatbot is the most convenient solution for this problem.

In this guide, we'll walk you through everything you need to know to build your first Telegram chatbot using JavaScript and Node.js.

To get started, we have to register our new bot with the so-called Botfather to receive our API access token.

Bot Registration with @BotFather

The first step towards our very own Telegram bot is registering the bot with the BotFather. The BotFather is a bot itself that makes your life much easier. It helps you with registering bots, changing the bot description, adding commands, and providing you with the API token for your bot.

Getting the API token is the most important step, as the token allows you to run the code that can perform tasks for the bot.

1. Finding the BotFather

The BotFather can be found on Telegram by searching for 'BotFather'. Click on the official BotFather, indicated with the white checkmark icon in the blue circle.

2. Registering a New Bot

Now we've found BotFather, let’s talk to him! You can start the conversation by typing /newbot. BotFather will ask you to choose a name for your bot. This name can be anything and doesn’t have to be unique. To keep things simple, I named my bot ArticleBot.

Next, you will be prompted to input a username for the bot. The username must be unique and end in bot. Therefore, I chose michiel_article_bot, as that username was not yet taken. This will also be the username you use for looking up the bot in Telegram's search field.

BotFather will return a success message with your token to access the Telegram HTTP API. Make sure to store this token safely, and certainly don't share it with anyone else.

3. Modifying the Bot

We can further modify the bot by adding a description or setting the commands we wish the bot to know. You can message the bot with the text /setcommands. It will show you how to input the commands with the format command1 - Description.
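With the token in hand, even a few lines of Node.js are enough to bring a bot to life. Here's a minimal sketch using the node-telegram-bot-api package, which is an assumption for illustration, as the full article may use a different library:

const TelegramBot = require('node-telegram-bot-api');

// The token you received from BotFather, assumed here to live in an env var
const bot = new TelegramBot(process.env.TELEGRAM_TOKEN, { polling: true });

// Reply to the /start command
bot.onText(/\/start/, (msg) => {
  bot.sendMessage(msg.chat.id, 'Hi! Send me an article link and I will store it.');
});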

The post How to Build Your First Telegram Chatbot with Node.js appeared first on SitePoint.

How to Build Unique, Beautiful Websites with Tailwind CSS

Sep 13, 2019

Description:

Build Unique and Beautiful Web Sites with Tailwind CSS

When thinking about what CSS framework to use for a new project, options like Bootstrap and Foundation readily jump to mind. They’re tempting to use because of their ready-to-use, pre-designed components, which developers can use with ease right away. This approach works well with relatively simple websites with a common look and feel. But as soon as we start building more complex, unique sites with specific needs, a couple of problems arise.

At some point, we need to customize certain components, create new components, and make sure the final codebase is unified and easy to maintain after the changes.

It's hard to satisfy the above needs with frameworks like Bootstrap and Foundation, which give us a bunch of opinionated and, in many cases, unwanted styles. As a result, we have to continuously solve specificity issues while trying to override the default styles. It doesn't sound like a fun job, does it?

Ready-to-use solutions are easy to implement, but inflexible and confined to certain boundaries. On the other hand, styling websites without a CSS framework is powerful and flexible, but isn’t easy to manage and maintain. So, what’s the solution?

The solution, as always, is to follow the golden middle. We need to find and apply the right balance between the concrete and abstract. A low-level CSS framework offers such a balance. There are several frameworks of this kind, and in this tutorial, we'll explore the most popular one, Tailwind CSS.

What Is Tailwind?

Tailwind is more than a CSS framework, it's an engine for creating design systems. — Tailwind website

Tailwind is a collection of low-level utility classes. They can be used like lego bricks to build any kind of component. The collection covers the most important CSS properties, but it can be easily extended in a variety of ways. With Tailwind, customization isn’t a pain in the neck anymore. The framework has great documentation, covering every class utility in detail and showing the ways it can be customized. All modern browsers, and IE11+, are supported.

Why Use a Utility-first Framework?

A low-level, utility-first CSS framework like Tailwind has plenty of benefits. Let's explore the most significant of them:

You have greater control over elements' appearance. We can change and fine-tune an element's appearance much more easily with utility classes.
It's easy to manage and maintain in large projects, because you only maintain HTML files, instead of a large CSS codebase.
It's easier to build unique, custom website designs without fighting with unwanted styles.
It's highly customizable and extensible, which gives us unlimited flexibility.
It has a mobile-first approach and easy implementation of responsive design patterns.
There's the ability to extract common, repetitive patterns into custom, reusable components — in most cases without writing a single line of custom CSS (see the sketch after this list).
It has self-explanatory classes. We can imagine how the styled element looks only by reading the classes.
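For instance, that extraction is done with Tailwind's @apply directive in your CSS. Here's a minimal sketch; the .btn-orange class name and its particular mix of utilities are illustrative, not part of the template we build below:

/* in styles.css, after the @tailwind components directive */
.btn-orange {
  @apply bg-orange-500 text-white font-semibold py-2 px-4 rounded;
}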

Finally, as Tailwind's creators say:

it's just about impossible to think this is a good idea the first time you see it — you have to actually try it.

So, let's try it!

Getting Started with Tailwind

To demonstrate Tailwind's customization features, we need to install it via npm:

npm install tailwindcss

The next step is to create a styles.css file, where we include the framework styles using the @tailwind directive:

@tailwind base;
@tailwind components;
@tailwind utilities;

After that, we run the npx tailwind init command, which creates a minimal tailwind.config.js file, where we'll put our customization options during the development. The generated file contains the following:

module.exports = {
  theme: {},
  variants: {},
  plugins: [],
}
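As a quick illustration of what such customization can look like (the brand color here is an invented example, not used in this tutorial), we could extend the default theme:

module.exports = {
  theme: {
    extend: {
      colors: {
        brand: '#5a67d8' // hypothetical custom color, usable as bg-brand, text-brand, etc.
      }
    }
  },
  variants: {},
  plugins: [],
}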

The next step is to build the styles in order to use them:

npx tailwind build styles.css -o output.css

Finally, we link the generated output.css file and Font Awesome in our HTML:

<link rel="stylesheet" type="text/css" href="output.css"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.9.0/css/all.min.css">

And now, we’re ready to start creating.

Building a One-page Website Template

In the rest of the tutorial, we'll build a one-page website template using the power and flexibility of Tailwind's utility classes.

Here you can see the template in action.

I'm not going to explain every single utility (which would be boring and tiresome), so I suggest you use the Tailwind cheatsheet as a quick reference. It contains all available utilities with their effect, plus direct links to the documentation.

We'll build the template section by section. They are Header, Services, Projects, Team, and Footer.

We first wrap all sections in a container:

<div class="container mx-auto"> <!-- Put the sections here --> </div> Header (Logo, Navigation)

The first section — Header — will contain a logo on the left side and navigation links on the right side. Here’s how it will look:

The site header

Now, let's explore the code behind it.

<div class="flex justify-between items-center py-4 bg-blue-900"> <div class="flex-shrink-0 ml-10 cursor-pointer"> <i class="fas fa-drafting-compass fa-2x text-orange-500"></i> <span class="ml-1 text-3xl text-blue-200 font-semibold">WebCraft</span> </div> <i class="fas fa-bars fa-2x visible md:invisible mr-10 md:mr-0 text-blue-200 cursor-pointer"></i> <ul class="hidden md:flex overflow-x-hidden mr-10 font-semibold"> <li class="mr-6 p-1 border-b-2 border-orange-500"> <a class="text-blue-200 cursor-default" href="#">Home</a> </li> <li class="mr-6 p-1"> <a class="text-white hover:text-blue-300" href="#">Services</a> </li> <li class="mr-6 p-1"> <a class="text-white hover:text-blue-300" href="#">Projects</a> </li> <li class="mr-6 p-1"> <a class="text-white hover:text-blue-300" href="#">Team</a> </li> <li class="mr-6 p-1"> <a class="text-white hover:text-blue-300" href="#">About</a> </li> <li class="mr-6 p-1"> <a class="text-white hover:text-blue-300" href="#">Contacts</a> </li> </ul> </div>

As you can see, the classes are pretty self-explanatory as I mentioned above. We'll explore only the highlights.

First, we create a flex container and center its items horizontally and vertically. We also add some top and bottom padding, which Tailwind combines in a single py utility. As you may guess, there’s also a px variant for left and right. We'll see that this type of shorthand is broadly used in many of the other utilities. As a background color, we use the darkest blue (bg-blue-900) from Tailwind's color palette. The palette contains several colors with shades for each color distributed from 100 to 900. For example, blue-100, blue-200, blue-300, etc.

In Tailwind, we apply a color to a property by specifying the property followed by the color and the shade number. For example, text-white, bg-gray-800, border-red-500. Easy peasy.

For the logo on the left side, we use a div element, which we set not to shrink (flex-shrink-0) and move it a bit away from the edge by applying the margin-left property (ml-10). Next we use a Font Awesome icon whose classes perfectly blend with those of Tailwind. We use one of them to make the icon orange. For the textual part of the logo, we use big, light blue, semi-bolded text, with a small offset to the right.

In the middle, we add an icon that will be visible only on mobile. Here we use one of the responsive breakpoint prefixes (md). Tailwind, like Bootstrap and Foundation, follows the mobile-first approach. This means that when we use utilities without prefix (visible), they apply all the way from the smallest to the largest devices. If we want different styling for different devices, we need to use the breakpoint prefixes. So, in our case the icon will be visible on small devices, and invisible (md:invisible) on medium and beyond.

At the right side we put the nav links. We style the Home link differently, showing that it’s the active link. We also move the navigation from the edge and set it to be hidden on overflow (overflow-x-hidden). The navigation will be hidden (hidden) on mobile and set to flex (md:flex) on medium and beyond.

You can read more about responsiveness in the documentation.
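As a quick illustration of these prefixes (this snippet isn't part of the template), a heading could scale up across breakpoints like so:

<h1 class="text-xl md:text-3xl lg:text-5xl">Responsive heading</h1>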

Services

Let's now create the next section, Services. Here’s how it will look:

The Services section

And here’s the code:

<div class="w-full p-6 bg-blue-100"> <div class="w-48 mx-auto pt-6 border-b-2 border-orange-500 text-center text-2xl text-blue-700">OUR SERVICES</div> <div class="p-2 text-center text-lg text-gray-700">We offer the best web development solutions.</div> <div class="flex justify-center flex-wrap p-10"> <div class="relative w-48 h-64 m-5 bg-white shadow-lg"> <div class="flex items-center w-48 h-20 bg-orange-500"> <i class="fas fa-bezier-curve fa-3x mx-auto text-white"></i> </div> <p class="mx-2 py-2 border-b-2 text-center text-gray-700 font-semibold uppercase">UI Design</p> <p class="p-2 text-sm text-gray-700">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean ac est massa.</p> <div class="absolute right-0 bottom-0 w-8 h-8 bg-gray-300 hover:bg-orange-300 text-center cursor-pointer"> <i class="fas fa-chevron-right mt-2 text-orange-500"></i> </div> </div> ... </div> </div>

We create a section with light blue background. Then we add an underlined title and a subtitle.

Next, we use a flex container for the services items. We use flex-wrap so the items will wrap on resize. We set the dimensions for each card and add some space and a drop shadow. Each card has a colored section with a topic icon, a title, and a description. And we also put a button with an icon in the bottom-right corner.

Here we use one of the pseudo-class variants (hover, focus, etc.). They’re used in the same way as responsive breakpoints. We use the pseudo-class prefix, followed by a colon and the property name (hover:bg-orange-300).

You can learn more about pseudo-class variants in the documentation.

For brevity, I show the code only for the first card. The other ones are similar. You have to change only the colors, icons, and titles. See the final HTML file on the GitHub repo for a reference.

The post How to Build Unique, Beautiful Websites with Tailwind CSS appeared first on SitePoint.

How to Automatically Optimize Responsive Images in Gatsby

Sep 11, 2019

Description:

Image optimization — at least in my experience — has always been a major pain when building speedy websites. Balancing image quality and bandwidth efficiency is a tough act without the right tools. Photo editing tools such as Photoshop are great for retouching, cropping and resizing bitmap images. Unfortunately, they are not that good at creating 100% optimized images for the web.

Luckily, we have extension packages for build tools that can optimize images for us quickly:

Gulp: gulp-imagemin
Grunt: grunt-imagemin
Webpack: imagemin-webpack
Parcel: parcel-plugin-imagemin

Unfortunately, image optimization alone is not enough. You need to make sure that the entire website is responsive and looks great at all screen sizes. This can easily be done through CSS, but here lies the problem:

Should you optimize your image for large screens or small screens?

If the majority of your audience is using mobile devices to access your site, then the logical choice is to optimize images for small screens. However, it's likely that a significant source of revenue is coming from visitors with large screens over 17". You definitely wouldn't want to neglect them.

Luckily, we have technology that allows us to deliver optimized responsive images for different screen sizes. This means we need to generate multiple optimized images with different resolutions fit for specific screen sizes or responsive breakpoints.

For WordPress site owners, this kind of image optimization requires the use of a plugin and a third-party service. The creation of these responsive images cannot be done on the hosting server without significantly slowing down the site for users, hence the need for a third-party service.

If you are using Gatsby to run your website, then you are in luck. This feature is built-in and already configured for you to optimize your responsive images. You just need to drop in some images and write a bit of code to link up your responsive images with your web page. When you run the gatsby build command, the images are optimized for you. This saves you from requiring a third-party service to perform the optimization for you. It's simply done on your deployment machine.
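To give a feel for that "bit of code", a Gatsby page typically queries the processed image via GraphQL and renders it with the gatsby-image component. This is a minimal sketch under the usual gatsby-image and gatsby-plugin-sharp setup; the file name, maxWidth and component name are illustrative assumptions:

import React from 'react';
import { graphql } from 'gatsby';
import Img from 'gatsby-image';

// Render the optimized, responsive version of hero.jpg
const Hero = ({ data }) => <Img fluid={data.file.childImageSharp.fluid} />;

export const query = graphql`
  query {
    file(relativePath: { eq: "hero.jpg" }) {
      childImageSharp {
        fluid(maxWidth: 800) {
          ...GatsbyImageSharpFluid
        }
      }
    }
  }
`;

export default Hero;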

In the subsequent sections, we are going to learn:

How image optimization works in Gatsby
How to optimize images on a web page
How to optimize images in a Markdown post

Prerequisites

Before we start, I would like to note that this tutorial is for developers who are just starting with Gatsby and would like to learn specifically about how to handle images. I am going to assume you already have a good foundation in the following topics:

The post How to Automatically Optimize Responsive Images in Gatsby appeared first on SitePoint.

Create an Offline-first React Native App Using WatermelonDB

Sep 10, 2019

Description:

React Native has different database storage mechanisms for different mobile app purposes. Simple structures — such as user settings, app settings, and other key-value pair data — can be handled easily using async storage or secure storage.

Other applications — such as Twitter clones — fetch data from the server and directly show it to the user. They maintain a cache of data, and if a user needs to interact with any document, they call the APIs directly.

So not all the applications require a database.

When We Need a Database

Applications such as Nozbe (a to-do app), Expense (a tracker), and SplitWise (a bill-splitting app) need to work offline. And to do so, they need a way to store data locally and sync it up with the server. This type of application is called an offline-first app. Over time, these apps collect a lot of data, and it becomes harder to manage that data directly — so a database is needed to manage it efficiently.

Options in React Native

When developing an app, choose the database that best fits your requirements. If two options are available, then go with the one that has better documentation and quicker response to issues. Below are some of the best known options available for React Native:

WatermelonDB: an open-source reactive database that can be used with any underlying database. By default, it uses SQLite as the underlying database in React Native.
SQLite (React Native, Expo): the oldest, most used, battle-tested and well-known solution. It’s available for most platforms, so if you’ve developed an application in another mobile app development framework, you might already be familiar with it.
Realm (React Native): an open-source solution, but it also has an enterprise edition with lots of other features. They have done a great job, and many well-known companies use it.
Firebase (React Native, Expo): a Google service specifically for the mobile development platform. It offers lots of functionality, storage being just one part of it. But it does require you to stay within their ecosystem to utilize it.
RxDB: a real-time database for the Web. It has good documentation, a good rating on GitHub (> 9K stars), and is also reactive.

Prerequisites

I assume you have knowledge about basic React Native and its build process. We’re going to use react-native-cli to create our application.

I’d also suggest setting up an Android or iOS development environment while setting up the project, as you may face many issues, and the first step in debugging is keeping the IDE (Android Studio or Xcode) open to see the logs.

Note: you can check out the official guide for installing dependencies here for more information. As the official guidelines are very concise and clear, we won’t be covering that topic here.

To set up a virtual device or physical device, follow these guides:

using a physical device
using a virtual device

Note: there’s a more JavaScript-friendly toolchain named Expo. The React Native community has also started promoting it, but I haven’t come across a large-scale, production-ready application that uses Expo yet, and an Expo port isn’t currently available for apps using a database such as Realm — or, in our case, WatermelonDB.

App Requirements

We’ll create a movie search application with a title, poster image, genre, and release date. Each movie will have many reviews.

The application will have three screens.

Home will show two buttons — one to generate dummy records, and a second to add a new movie. Below them, there will be a search input that can be used to query movie titles from the database. The list of movies will appear below the search bar. If a name is searched, the list will only show matching movies.

home screen view

Clicking on any movie will open a Movie Dashboard, from where all its reviews can be checked. A movie can be edited or deleted, or a new review can be added from this screen.

movie dashboard

The third screen will be Movie Form, which is used to create/update a movie.

movie form

The source code is available on GitHub.

Why We Chose WatermelonDB (features)

We need to create an offline-first application, so a database is a must.

Features of WatermelonDB

Let’s look at some of the features of WatermelonDB.

Fully observable
A great feature of WatermelonDB is its reactive nature. Any object can be observed using observables, and our components will automatically re-render whenever the data changes. We don’t have to make any extra effort to use WatermelonDB: we wrap simple React components and enhance them to make them reactive. In my experience, it just works seamlessly, and we don’t have to worry about anything else. We make a change to an object and our job’s done — it’s persisted and updated everywhere in the application.
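For a feel of how this looks in practice, here's a minimal sketch using the @nozbe/with-observables package. The MovieItem component and the movie prop are illustrative assumptions, not code from this tutorial:

import React from "react";
import { Text } from "react-native";
import withObservables from "@nozbe/with-observables";

// A plain presentational component.
const MovieItem = ({ movie }) => <Text>{movie.title}</Text>;

// Enhance it: observe the record so any change re-renders the component.
const enhance = withObservables(["movie"], ({ movie }) => ({
  movie: movie.observe(),
}));

export default enhance(MovieItem);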

SQLite under the hood for React Native
In a modern browser, just-in-time compilation is used to improve speed, but it isn’t available on mobile devices. Also, the hardware in mobile devices is slower than in computers. Due to all these factors, JavaScript apps tend to run more slowly on mobile. To overcome this, WatermelonDB doesn’t fetch anything until it’s needed. It uses lazy loading, with SQLite as an underlying database on a separate thread, to provide a fast response.

Sync primitives and sync adapter
Although WatermelonDB is just a local database, it also provides sync primitives and sync adapters. It makes it pretty easy to use with any of our own back-end databases. We just need to conform to the WatermelonDB sync protocol on the back end and provide the endpoints.
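As a hedged sketch, synchronization boils down to calling WatermelonDB's synchronize() function with two callbacks that you implement against your own back end. The https://example.com/sync URL below is a placeholder:

import { synchronize } from "@nozbe/watermelondb/sync";

async function syncDatabase(database) {
  await synchronize({
    database,
    // Pull changes made on the server since the last sync.
    pullChanges: async ({ lastPulledAt }) => {
      const response = await fetch(`https://example.com/sync?last_pulled_at=${lastPulledAt}`);
      const { changes, timestamp } = await response.json();
      return { changes, timestamp };
    },
    // Push local changes up to the server.
    pushChanges: async ({ changes, lastPulledAt }) => {
      await fetch(`https://example.com/sync?last_pulled_at=${lastPulledAt}`, {
        method: "POST",
        body: JSON.stringify(changes),
      });
    },
  });
}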

Further features include:

Statically typed using Flow
Available for all platforms

Dev Env and WatermelonDB Setup (v0.0)

We’re going to use react-native-cli to create our application.

Note: you may be able to use it with ExpoKit by ejecting from Expo.

If you want to skip this part, then clone the source repo and check out the v0.0 branch.

Start a new project:

react-native init MovieDirectory
cd MovieDirectory

Install dependencies:

npm i @nozbe/watermelondb @nozbe/with-observables react-navigation react-native-gesture-handler react-native-fullwidth-image native-base rambdax

Below is the list of installed dependencies and their uses:

native-base: a UI library that will be used for the look and feel of our app.
react-native-fullwidth-image: for showing full-screen responsive images. (Sometimes it can be a pain to calculate the width and height while maintaining the aspect ratio, so it’s better to use an existing community solution.)
@nozbe/watermelondb: the database we’ll be using.
@nozbe/with-observables: contains the decorators (@) that will be used in our models.
react-navigation: used for managing routes/screens.
react-native-gesture-handler: a dependency of react-navigation.
rambdax: used to generate a random number while creating dummy data.

Open your package.json and replace the scripts with the following code:

"scripts": { "start": "node node_modules/react-native/local-cli/cli.js start", "start:ios": "react-native run-ios", "start:android": "react-native run-android", "test": "jest" }

This will be used to run our application on the respective platform.

Set Up WatermelonDB

We need to add a Babel plugin to convert our decorators, so install it as a dev dependency:

npm install -D @babel/plugin-proposal-decorators

Create a new file .babelrc in the root of the project:

// .babelrc
{
  "presets": ["module:metro-react-native-babel-preset"],
  "plugins": [["@babel/plugin-proposal-decorators", { "legacy": true }]]
}

Now use the following guides for your target environment:

iOS
Android

Open the android folder in Android Studio and sync the project. Otherwise, it will give you an error when running the application for the first time. Do the same if you’re targeting iOS.

Before we run the application, we need to link the react-native-gesture-handler package, a dependency of react-navigation, and react-native-vector-icons, a dependency of native-base. By default, to keep the binary size of the application small, React Native doesn’t contain all the code to support native features. So whenever we need to use a particular feature, we can use the link command to add the native dependencies. Let’s link our dependencies:

react-native link react-native-gesture-handler
react-native link react-native-vector-icons

Run the application:

npm run start:android
# or
npm run start:ios

If you get an error for missing dependencies, run npm i.

The code up to here is available under the v0.0 branch.

version 0

Tutorial

As we’ll be creating a database application, a lot of the code will be back-end only, and we won’t be able to see much on the front end. It might seem long, but have patience and follow the tutorial to the end. You won’t regret it!

The WatermelonDB workflow can be categorized into three main parts:

Schema: used to define the database table schema.
Models: the ORM-mapped objects. We’ll interact with these throughout our application.
Actions: used to perform various CRUD operations on our objects/rows. We can perform an action directly using a database object, or we can define functions in our models to perform these actions. Defining them in models is the better practice, and that’s what we’ll use; a short sketch follows below.
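To make the Actions idea concrete, here's a hedged sketch of a model-defined action. The addReview method is our own illustrative name, not part of this tutorial's code:

import { Model } from "@nozbe/watermelondb";
import { action } from "@nozbe/watermelondb/decorators";

class Movie extends Model {
  static table = "movies";

  // A CRUD operation defined on the model itself (the recommended practice).
  @action async addReview(body) {
    return this.collections.get("reviews").create((review) => {
      review.movie.set(this); // link the review to this movie
      review.body = body;
    });
  }
}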

Let’s get started with our application.

Initialize DB Schema and WatermelonDB (v0.1)

We’ll define our schema, models and database object in our application. We won’t be able to see much in the application yet, but this is the most important step. Here we’ll check that our application works correctly after defining everything. If anything goes wrong, it will be easy to debug at this stage.

Project Structure

Create a new src folder in the root. This will be the root folder for all of our React Native code. The models folder is used for all of our database-related files. It will behave as our DAO (Data Access Object) folder. This is a term used for an interface to some type of database or other persistence mechanism. The components folder will have all of our React components. The screens folder will have all the screens of our application.

mkdir src && cd src
mkdir models
mkdir components
mkdir screens

Schema

Go to the models folder, create a new file schema.js, and use the following code:

// schema.js
import { appSchema, tableSchema } from "@nozbe/watermelondb";

export const mySchema = appSchema({
  version: 2,
  tables: [
    tableSchema({
      name: "movies",
      columns: [
        { name: "title", type: "string" },
        { name: "poster_image", type: "string" },
        { name: "genre", type: "string" },
        { name: "description", type: "string" },
        { name: "release_date_at", type: "number" }
      ]
    }),
    tableSchema({
      name: "reviews",
      columns: [
        { name: "body", type: "string" },
        { name: "movie_id", type: "string", isIndexed: true }
      ]
    })
  ]
});

We’ve defined two tables — one for movies, and another for their reviews. The code is self-explanatory: both tables have the related columns.

Note that, as per WatermelonDB’s naming convention, all the IDs end with an _id suffix, and the date field ends with the _at suffix.

isIndexed is used to add an index to a column. Indexing makes querying by a column faster, at the slight expense of create/update speed and database size. We’ll be querying all the reviews by movie_id, so we should mark it as indexed. If you want to make frequent queries on any boolean column, you should index it as well. However, you should never index date (_at) columns.
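As a rough illustration of why the index matters, a query against the indexed movie_id column looks something like this (the database and movie objects are assumed to exist, and the call belongs inside an async function):

import { Q } from "@nozbe/watermelondb";

// Fetch all reviews for a given movie; fast because movie_id is indexed.
const reviews = await database.collections
  .get("reviews")
  .query(Q.where("movie_id", movie.id))
  .fetch();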

Models

Create a new file models/Movie.js and paste in this code:

// models/Movie.js
import { Model } from "@nozbe/watermelondb";
import { field, date, children } from "@nozbe/watermelondb/decorators";

export default class Movie extends Model {
  static table = "movies";
  static associations = {
    reviews: { type: "has_many", foreignKey: "movie_id" }
  };

  @field("title") title;
  @field("poster_image") posterImage;
  @field("genre") genre;
  @field("description") description;
  @date("release_date_at") releaseDateAt;
  @children("reviews") reviews;
}

Here we’ve mapped each column of the movies table to a class field. Note how we’ve mapped reviews to a movie: we’ve defined the relationship in associations and used @children instead of @field. Each review will have a movie_id foreign key; these foreign key values are matched against the id column of the movies table to link the Review model to the Movie model.

For dates, we also need to use the @date decorator so that WatermelonDB gives us a Date object instead of a plain number.

Now create a new file models/Review.js. This will be used to map each review of a movie.

// models/Review.js
import { Model } from "@nozbe/watermelondb";
import { field, relation } from "@nozbe/watermelondb/decorators";

export default class Review extends Model {
  static table = "reviews";
  static associations = {
    movie: { type: "belongs_to", key: "movie_id" }
  };

  @field("body") body;
  @relation("movies", "movie_id") movie;
}

We have created all of our required models. We could use them directly to initialize our database, but if we ever wanted to add a new model, we’d have to change the place where the database is initialized as well. To avoid this, create a new file models/index.js and add the following code:

// models/index.js
import Movie from "./Movie";
import Review from "./Review";

export const dbModels = [Movie, Review];

Thus we only have to make changes in our models folder. This makes our DAO folder more organized.

Initialize the Database

Now to use our schema and models to initialize our database, open index.js, which should be in the root of our application. Add the code below:

// index.js
import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { Database } from "@nozbe/watermelondb";
import SQLiteAdapter from "@nozbe/watermelondb/adapters/sqlite";
import { mySchema } from "./src/models/schema";
import { dbModels } from "./src/models/index.js";

// First, create the adapter to the underlying database:
const adapter = new SQLiteAdapter({
  dbName: "WatermelonDemo",
  schema: mySchema
});

// Then, make a Watermelon database from it!
const database = new Database({
  adapter,
  modelClasses: dbModels
});

AppRegistry.registerComponent(appName, () => App);

We create an adapter using our schema for the underlying database. Then we pass this adapter and our dbModels to create a new database instance.

At this point, it’s a good idea to check that our application still works. So run your application and check:

npm run start:android
# or
npm run start:ios

We haven’t made any changes in the UI, so the screen will look similar to before if everything worked out.

All the code up to this part is under the v0.1 branch.

The post Create an Offline-first React Native App Using WatermelonDB appeared first on SitePoint.

SitePoint Premium New Releases: Design Systems, SVG & React Native

Sep 6, 2019

Description:

We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Design Systems and Living Styleguides

Create structured, efficient and consistent designs with design systems and styleguides. Explore materials, typography, vertical rhythm, color, icons and more.

➤ Read Design Systems and Living Styleguides.

Build a Real-time Location Tracking App with React Native and PubNub

In this guide, we’re going to use React Native to create real-time location tracking apps. We’ll build two React Native apps — a tracking app and one that’s tracked.

➤ Read Build a Real-time Location Tracking App with React Native and PubNub.

Practical SVG

From software basics to build tools to optimization, you’ll learn techniques for a solid workflow.

Go deeper: create icon systems, explore sizing and animation, and understand when and how to implement fallbacks. Get your images up to speed and look sharp!

➤ Read Practical SVG.

Create an Offline-first React Native App Using WatermelonDB

In this tutorial we’ll create an offline-first movie search application with a title, poster image, genre, and release date. Each movie will have many reviews. We'll use WatermelonDB to provide the database functionality for our app.

➤ Read Create an Offline-first React Native App Using WatermelonDB.

And More to Come…

We're releasing new content on SitePoint Premium regularly, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Design Systems, SVG & React Native appeared first on SitePoint.

Build a Real-time Voting App with Pusher, Node and Bootstrap

Sep 4, 2019

Description:

In this article, I'll walk you through building a full-stack, real-time Harry Potter house voting web application.

Real-time apps usually use WebSockets, a relatively new transfer protocol, as opposed to HTTP, which is a one-way communication channel that operates only when the user requests it. WebSockets allow persistent communication between the server and all of the users connected to the application, for as long as the connection is kept open.

A real-time web application is one where information is transmitted (almost) instantaneously between users and the server (and, by extension, between users and other users). This is in contrast with traditional web apps where the client has to ask for information from the server. — Quora

Our Harry Potter voting web app will show the options (all four houses) and a chart on the right side that updates itself when a connected user votes.

To give you a brief idea of look and feel, the final application is going to look like this:

Harry Potter with Chart JS

Here's a small preview of how the real-time application works:

To make our application real-time, we’re going to use Pusher and WebSockets. Pusher sits as a real-time layer between your servers and your clients. It maintains persistent connections to the clients — over a WebSocket if possible, and falling back to HTTP-based connectivity — so that, as soon as your servers have new data to push to the clients, they can do so instantly via Pusher.
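To make that concrete, here's a hedged sketch of triggering an event from a Node server with the pusher npm package. The channel name, event name and credentials are placeholders:

const Pusher = require("pusher");

// Credentials come from your Pusher dashboard.
const pusher = new Pusher({
  appId: "YOUR_APP_ID",
  key: "YOUR_APP_KEY",
  secret: "YOUR_APP_SECRET",
  cluster: "eu",
});

// Broadcast a vote to every client subscribed to the channel.
pusher.trigger("house-votes", "new-vote", { house: "Gryffindor" });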

Building our Application

Let’s create our fresh application using the command npm init. You’ll be interactively asked a few questions on the details of your application. Here's what I had:

praveen@praveen.science ➜ Harry-Potter-Pusher $ npm init
{
  "name": "harry-potter-pusher",
  "version": "1.0.0",
  "description": "A real-time voting application using Harry Potter's house selection for my article for Pusher.",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/praveenscience/Harry-Potter-Pusher.git"
  },
  "keywords": [
    "Harry_Potter",
    "Pusher",
    "Voting",
    "Real_Time",
    "Web_Application"
  ],
  "author": "Praveen Kumar Purushothaman",
  "license": "ISC",
  "bugs": {
    "url": "https://github.com/praveenscience/Harry-Potter-Pusher/issues"
  },
  "homepage": "https://github.com/praveenscience/Harry-Potter-Pusher#readme"
}
Is this OK? (yes)

So, I left most settings with default values. Now it's time to install dependencies.

Installing Dependencies

We need Express, body-parser, Cross Origin Resource Sharing (CORS), Mongoose and Pusher installed as dependencies. To install everything in a single command, use the following. You can also have a glance at what this command outputs.

praveen@praveen.science ➜ Harry-Potter-Pusher $ npm i express body-parser cors pusher mongoose
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN ajv-keywords@3.2.0 requires a peer of ajv@^6.0.0 but none is installed. You must install peer dependencies yourself.
+ pusher@2.1.2
+ body-parser@1.18.3
+ mongoose@5.2.6
+ cors@2.8.4
+ express@4.16.3
added 264 packages in 40.000s

Requiring Our Modules

Since this is an Express application, we need to include express() as the first thing. While doing it, we also need some accompanying modules. So, initially, let’s start with this:

const express = require("express");
const path = require("path");
const bodyParser = require("body-parser");
const cors = require("cors");

Creating the Express App

Let’s start with building our Express application now. To start with, we need to get the returned object of the express() function assigned to a new variable app:

const app = express();

Serving Static Assets

Adding the above line after the initial set of includes will initialize our app as an Express application. The next thing we need to do is to set up the static resources. Let’s create a new directory in our current project called public and let’s use Express's static middleware to serve the static files. Inside the directory, let’s create a simple index.html file that says “Hello, World”:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <title>Hello, World</title>
  </head>
  <body>
    Hello, World!
  </body>
</html>

To serve the static files, we have a built-in .use() function with express.static() in Express. The syntax is as follows:

app.use( express.static( path.join(__dirname, "public") ) );

We also need to use the body parser middleware for getting the HTTP POST content as JSON to access within the req.body. We'll also use urlencoded to get the middleware that only parses urlencoded bodies and only looks at requests where the Content-Type header matches the type option. This parser accepts only UTF-8 encoding of the body and supports automatic inflation of gzip and deflate encodings:

app.use( bodyParser.json() );
app.use( bodyParser.urlencoded( { extended: false } ) );

To allow cross-domain requests, we need to enable CORS. Let’s enable the CORS module by using the following code:

app.use( cors() );

Now all the initial configuration has been set. All we need to do now is to set a port and listen to the incoming connections on the specific port:

const port = 3000;
app.listen(port, () => {
  console.log(`Server started on port ${port}.`);
});

Make sure your final app.js looks like this:

const express = require("express");
const path = require("path");
const bodyParser = require("body-parser");
const cors = require("cors");

// Create an App.
const app = express();

// Serve the static files from public.
app.use( express.static( path.join(__dirname, "public") ) );

// Include the body-parser middleware.
app.use( bodyParser.json() );
app.use( bodyParser.urlencoded( { extended: false } ) );

// Enable CORS.
app.use( cors() );

// Set the port.
const port = 3000;

// Listen to incoming connections.
app.listen(port, () => {
  console.log(`Server started on port ${port}.`);
});

Run the command to start the server:

$ npm run dev

Open http://localhost:3000/ in a new tab and see the magic. You should see a new page with “Hello, World”.

Preview of Hello World in Browser

The post Build a Real-time Voting App with Pusher, Node and Bootstrap appeared first on SitePoint.

State Management in React Native

Sep 3, 2019

Description:

State Management in React Native

Managing state is one of the most difficult concepts to grasp while learning React Native, as there are so many ways to do it. There are countless state management libraries on the npm registry — such as Redux — and there are endless libraries built on top of other state management libraries to simplify the original library itself — like Redux Easy. Every week a new state management library is introduced in React, but the base concepts of maintaining application state have remained the same since the introduction of React.

The most common way to set state in React Native is by using React’s setState() method. We also have the Context API to avoid prop drilling and pass the state down many levels without passing it to individual children in the tree.
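As a rough sketch of the Context API idea (the ThemeContext example is ours, not from this article):

import React from "react";
import { Text } from "react-native";

// Create a context with a default value.
const ThemeContext = React.createContext("light");

// Any component in the tree can read it without prop drilling.
const ThemedLabel = () => (
  <ThemeContext.Consumer>
    {(theme) => <Text>Current theme: {theme}</Text>}
  </ThemeContext.Consumer>
);

const App = () => (
  <ThemeContext.Provider value="dark">
    <ThemedLabel />
  </ThemeContext.Provider>
);

export default App;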

Recently, Hooks arrived in React v16.8.0. Hooks are a new pattern to simplify the use of state in React. React Native got them in v0.59.

In this tutorial, we’ll learn about what state actually is, and about the setState() method, the Context API and React Hooks. This is the foundation of setting state in React Native. All the libraries are made on top of the above base concepts. So once you know these concepts, understanding a library or creating your own state management library will be easy.

What Is a State?

Anything that changes over time is known as state. If we had a Counter app, the state would be the counter itself. If we had a to-do app, the list of to-dos would change over time, so this list would be the state. Even an input element is, in a sense, state, as it changes over time as the user types into it.

Intro to setState

Now that we know what state is, let’s understand how React stores it.

Consider a simple counter app:

import React from 'react'
import { Text, Button } from 'react-native'

class Counter extends React.Component {
  constructor(props) {
    super(props)
    this.state = { counter: 0 }
  }

  render() {
    const { counter } = this.state
    return (
      <>
        <Text>{counter}</Text>
        <Button onPress={() => {}} title="Increment" />
        <Button onPress={() => {}} title="Decrement" />
      </>
    )
  }
}

In this app, we store our state inside the constructor in an object and assign it to this.state.

Remember, state can only be an object. You can’t directly store a number. That’s why we created a counter variable inside an object.

In the render method, we destructure the counter property from this.state and render it inside a Text component. Note that currently it will only show a static value (0).

You can also write your state outside of the constructor as follows:

import React from 'react'
import { Text, Button } from 'react-native'

class Counter extends React.Component {
  state = { counter: 0 }

  render() {
    const { counter } = this.state
    return (
      <>
        <Text>{counter}</Text>
        <Button onPress={() => {}} title="Increment" />
        <Button onPress={() => {}} title="Decrement" />
      </>
    )
  }
}

Now let’s suppose we want the + and - button to work. We must write some code inside their respective onPress handlers:

import React from 'react'
import { Text, Button } from 'react-native'

class Counter extends React.Component {
  state = { counter: 0 }

  render() {
    const { counter } = this.state
    return (
      <>
        <Text>{counter}</Text>
        <Button
          onPress={() => {
            this.setState({ counter: counter + 1 })
          }}
          title="Increment"
        />
        <Button
          onPress={() => {
            this.setState({ counter: counter - 1 })
          }}
          title="Decrement"
        />
      </>
    )
  }
}

Now when we click the + and - buttons, React re-renders the component. This is because the setState() method was used.

The setState() method re-renders the part of the tree that has changed. In this case, it re-renders the Text component.

So if we click on +, it increments the counter by 1. If we click on -, it decrements the counter by 1.

Remember that you can’t change the state by mutating this.state directly; an assignment like this.state.counter = counter + 1 won’t trigger a re-render.

Also, state changes are asynchronous operations, which means if you read this.state immediately after calling this.setState, it won’t reflect recent changes.

This is where we use “function as a callback” syntax for setState(), as follows:

import React from 'react'
import { Text, Button } from 'react-native'

class Counter extends React.Component {
  state = { counter: 0 }

  render() {
    const { counter } = this.state
    return (
      <>
        <Text>{counter}</Text>
        <Button
          onPress={() => {
            this.setState(prevState => ({ counter: prevState.counter + 1 }))
          }}
          title="Increment"
        />
        <Button
          onPress={() => {
            this.setState(prevState => ({ counter: prevState.counter - 1 }))
          }}
          title="Decrement"
        />
      </>
    )
  }
}

The “function as a callback” syntax provides the most recent state — in this case prevState — as a parameter to the setState() method.

This way we get the recent changes to state.

What are Hooks?

Hooks are a new addition to React v16.8. Earlier, you could only use state by making a class component. You couldn’t use state in a functional component itself.

With the addition of Hooks, you can use state in functional components themselves.

Let’s convert our above Counter class component to a Counter functional component and use React Hooks:

import React from 'react'
import { Text, Button } from 'react-native'

const Counter = () => {
  const [counter, setCounter] = React.useState(0)

  return (
    <>
      <Text>{counter}</Text>
      <Button
        onPress={() => {
          setCounter(counter + 1)
        }}
        title="Increment"
      />
      <Button
        onPress={() => {
          setCounter(counter - 1)
        }}
        title="Decrement"
      />
    </>
  )
}

Notice that we’ve reduced our Class component from 18 to just 12 lines of code. Also, the code is much easier to read.

Let’s review the above code. Firstly, we use React’s built-in useState Hook. The initial value passed to useState can be of any type — a number, a string, an array, a boolean or an object — unlike the state used with setState(), which can only be an object.

In our counter example, it takes a number and returns an array with two values.

The first value in the array is the current state value. So counter is 0 currently.

The second value in the array is the function that lets you update the state value.

In our onPress, we can then update counter using setCounter directly.

Thus our increment function becomes setCounter(counter + 1 ) and our decrement function becomes setCounter(counter - 1).

React has many built-in Hooks, like useState, useEffect, useContext, useReducer, useCallback, useMemo, useRef, useImperativeHandle, useLayoutEffect and useDebugValue — which you can find more info about in the React Hooks docs.

Additionally, we can build our own Custom Hooks.
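For instance, here's a hedged sketch of a custom Hook, useCounter (our own illustrative name), that packages the counter logic for reuse:

import React from 'react'

// A custom Hook is just a function whose name starts with "use"
// and which may call other Hooks internally.
const useCounter = (initialValue = 0) => {
  const [counter, setCounter] = React.useState(initialValue)
  const increment = () => setCounter(prev => prev + 1)
  const decrement = () => setCounter(prev => prev - 1)
  return { counter, increment, decrement }
}

// Usage inside any functional component:
// const { counter, increment, decrement } = useCounter(0)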

There are two rules to follow when building or using Hooks:

Only Call Hooks at the Top Level. Don’t call Hooks inside loops, conditions, or nested functions. Instead, always use Hooks at the top level of your React function. By following this rule, you ensure that Hooks are called in the same order each time a component renders. That’s what allows React to correctly preserve the state of Hooks between multiple useState and useEffect calls.

Only Call Hooks from React Functions. Don’t call Hooks from regular JavaScript functions. Instead, you can either call Hooks from React functional components or call Hooks from custom Hooks.

By following this rule, you ensure that all stateful logic in a component is clearly visible from its source code.

Hooks are really simple to understand, and they’re helpful when adding state to a functional component.

The post State Management in React Native appeared first on SitePoint.

How to Redesign Unsplash Using Styled Components

Aug 29, 2019

Description:

Redesigning Unsplash Using Styled Components

Writing future-proof CSS is hard. Conflicting classnames, specificity issues, and so on, come up when you have to write and maintain thousands of lines of CSS. To get rid of the aforementioned issues, Styled Components was created.

Styled Components makes it easy to write your CSS in JS and makes sure there are no conflicting classnames or specificity issues, along with multiple other benefits. This makes writing CSS a joy.

In this tutorial, we’ll explore what CSS in JS is, the pros and cons of styled-components, and finally, we’ll redesign Unsplash using Styled Components. After completing this tutorial, you should be able to quickly get up and running with Styled Components.

Note: Styled Components was specifically built with React in mind, so you have to be using React to use Styled Components.

Prerequisites

For this tutorial, you need a basic knowledge of React.

Throughout the course of this tutorial we’ll be using yarn. If you don’t have yarn already installed, then install it from here.

To make sure we’re on the same page, these are the versions used in this tutorial:

Node 12.6.0
npx 6.4.1
yarn 1.17.3

Evolution of CSS

Before CSS-in-JS was created, the most common way to style web apps was to write CSS in a separate file and link it from the HTML.

But this caused trouble in big teams. Everyone has their own way of writing CSS. This caused specificity issues and led to everyone using !important.

Then came Sass. Sass is an extension of CSS that allows us to use things like variables, nested rules, inline imports and more. It also helps to keep things organized and allows us to create stylesheets faster.

Even though Sass might be thought of as an improvement over CSS, it arguably causes more harm than good without certain systems put in place.

Later, BEM came in. BEM is a methodology that lets us reduce specificity issues by making us write unique classnames. BEM does solve the specificity problem, but it makes the HTML more verbose. Classnames can become unnecessarily long, and it's hard to come up with unique classnames when you have a huge web app.

After that, CSS Modules were born. CSS Modules solved what neither Sass nor BEM could — the problem of unique classnames — by tooling rather than relying on the name given by a developer, which in turn solved specificity issues. CSS Modules gained a huge popularity in the React ecosystem, paving the way for projects like glamor.

The only problem with all these new solutions was that developers were made to learn new syntaxes. What if we could write CSS exactly how we write it in a .css file but in JS? And thus styled-components came into existence.

Styled Components uses Template Literals, an ES6 feature. Template literals are string literals allowing embedded expressions. They allow for multi-line strings and string interpolation features with them.

The main selling point of Styled Components is that it allows us to write exact CSS in JS.
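As a quick hedged illustration (the Button component is our own example, not from this article):

import styled from "styled-components";

// Plain CSS, written inside a tagged template literal.
const Button = styled.button`
  padding: 0.5rem 1rem;
  border-radius: 4px;
  background: ${(props) => (props.primary ? "palevioletred" : "white")};
  color: ${(props) => (props.primary ? "white" : "palevioletred")};
`;

// Usage: <Button primary>Save</Button>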

Styled Components has a lot of benefits. Some of the pros and cons of Styled Components are listed below.

Pros

There are lots of advantages to using Styled Components.

Injecting Critical CSS into the DOM

Styled Components only injects critical CSS on the page. This means users only download CSS needed for that particular page and nothing else. This loads the web page faster.

Smaller CSS bundle per page

As it only injects styles that are used in the components on the page, bundle size is considerably smaller. You only load the CSS you need, instead of excessive stylesheets, normalizers, responsiveness, etc.

Automatic Vendor Prefixing

Styled Components allows you to write your CSS and it automatically vendor prefixes according to the latest standard.

Remove unused CSS

With Styled Components, it's easier to remove unused CSS or dead code, as the styles are colocated with the component. This also impacts on reducing bundle size.

Theming is easy

Styled Components makes it really easy to theme a React application. You can even have multiple themes in your application and still maintain them easily.
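Here's a hedged sketch of theming with the library's ThemeProvider; the theme object's shape is our own assumption:

import React from "react";
import styled, { ThemeProvider } from "styled-components";

const Title = styled.h1`
  color: ${(props) => props.theme.primary};
`;

const theme = { primary: "mediumseagreen" };

// Every styled component below the provider can read the theme.
const App = () => (
  <ThemeProvider theme={theme}>
    <Title>Hello, themes!</Title>
  </ThemeProvider>
);

export default App;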

Reduces the number of HTTP requests

Since there are no separate CSS files for resets, normalizers and responsiveness, the number of HTTP requests is considerably reduced.

Unique Classnames

Styled Components generates unique classnames every time a build step takes place, which avoids naming collisions and specificity issues. No more global conflicts that you’re forced to resolve with !important tags.

Maintenance is easy

Styled Components allows you to colocate styles with the component. This allows for painless maintenance. You know exactly which style is affecting your component, unlike in a big CSS file.

Cons

Of course, nothing's perfect. Let's look at some downsides associated with Styled Components.

Unable to Cache Stylesheets

Generally, a web browser caches .css files when a user visits a website, so it doesn’t have to download the same .css file again on the next visit. But with styled-components, the styles are loaded into the DOM using the <style> tag. Thus they can’t be cached, and the user has to request the styles again every time they visit your website.

React specific

Styled Components was made with React in mind. Thus, it’s React specific. If you use any other framework, then you can’t use Styled Components.

However, there’s an alternative very similar to styled-components known as emotion which is framework agnostic.

The post How to Redesign Unsplash Using Styled Components appeared first on SitePoint.

4 Key Principles to Remember When Building B2B Ecommerce Websites

Aug 26, 2019

Description:

4 Key Principles to Remember When Building B2B Ecommerce Websites

This article was created in partnership with StudioWorks. Thank you for supporting the partners who make SitePoint possible.

B2B ecommerce businesses are currently facing a bit of a boom. Forrester estimates that B2B ecommerce revenues will reach $1.8 trillion in the US in the next four years. And a recent BigCommerce study found that 41% of B2B retailers predict that their online sales will increase by more than 25% by the end of the year.

So if you’re building a B2B ecommerce storefront to capitalize on this boom, it’s important that you take the time to ensure that the website has all the right functionality to receive and fulfill orders, and to deliver a great shopping experience to your buyers.

In this post, we’ll take a look at some of the key principles you’ll need to keep in mind when tackling a B2B ecommerce website build.

But before we begin, let’s put everything into a bit of context.

Key Differences Between B2C and B2B Ecommerce Sites

B2B ecommerce companies, of course, provide the goods and services that other companies need to operate and grow. In the ecommerce space, when we refer to a B2B company, we’re generally talking about firms that sell physical goods on a wholesale basis, but other types of B2B companies have been known to get into the ecommerce game.

For example, industrial suppliers or consultancy service providers are generally B2B companies, and they may or may not offer online purchasing options too. B2C companies, on the other hand, sell their products and services direct to individual customers.

b2c companies (image source)

Currently, the B2B ecommerce opportunity is huge compared to B2C ecommerce, which has become harder to crack due to high levels of competition and low barriers to entry. B2B buyers are becoming increasingly interested in making purchases online. Sellers, meanwhile, are only starting to make it possible.

But just because the demand is there doesn’t mean corporate buyers are expecting the same type of experiences from B2B ecommerce that they get on Amazon. Here are a few key differences between B2B and B2C, when it comes to ecommerce interfaces and customer experiences.

Breadth of audience

One major difference between B2B and B2C is the scale of their target audience. B2B sites deal with buyers who have simple, targeted profiles such as CTOs at tech startups. On the flip side, B2C sites have a broader group of people to cater to — for instance, moms with toddlers or millennials who are into sneakers.

For this reason, B2B ecommerce sites typically have a different purchasing user flow which involves more personalization.

Average price point

Most B2C ecommerce sites sell to hundreds of thousands of customers because their products typically sell at a lower price point. On the other hand, B2B sites may have fewer than 100 customers.

B2B ecommerce sites often use quote builders and set up different technology to be able to accept and process larger orders. For example, this may include options for recurring payments, bulk discounts, and shipping.

The decision-making process

B2C buying decisions are made fairly quickly, as they’re generally less rational and more based on impulse. Lower pricing points make this possible. In B2B decisions, the purchasing manager may have to get approval from senior executives, finance, marketing, and legal departments before placing an order.

To streamline the decision-making process, B2B ecommerce site owners offer tailored pricing to buyers. They also set up customer accounts to make it easy for buyers to fill out orders and complete transactions.

With the above in mind, let’s take a closer look at some of the important principles to guide you as you build your next B2B ecommerce website.

1. Integrate with an ERP Solution

As a B2B company, you’ll be able to significantly increase productivity by integrating an ERP solution with your ecommerce site.

The key benefit is that your inventory levels will automatically update in two places. Inventory availability figures can appear on the front end of the site as goods are added to inventory, giving customers a better shopping experience. Plus, with access to ERP data on the back end, you can enable your staff to easily meet orders and forecast product demand.

Another key benefit of integrating an ERP solution is that you won’t need to hire additional workers in case product demand goes up.

Here are some of the most common ERP integration patterns:

Migration. Data migration refers to the movement of a particular set of data between two systems at a specific point in time. The migration can either be on an as-needed basis through an API, or on command, by setting the configuration parameters to pass into the API calls.
Broadcast. The broadcast ERP integration pattern involves the transfer of data from one source system to multiple destination systems in real time. Broadcast systems help move data quickly between systems and keep multiple systems up to date.
Aggregation. This ERP pattern receives data from multiple systems and stores it in only one system. It eliminates the need to regularly run multiple migrations, which removes the risk associated with data synchronization and accuracy.
Bi-directional synchronization. Bi-directional sync ERP integration is useful in situations where different systems are required to perform different functions on the same data set.
Correlation. Correlation is similar to bi-directional ERP integration. The difference is that the former synchronizes objects only if they’re present in both systems.

Process API (image source)

BigCommerce offers a number of ERP integrations, including Brightpearl, Stitch Labs, NetSuite ERP Connector by Patchworks, and Acumatica Cloud ERP by Kensium via the eBridge Connections systems integrator.

The post 4 Key Principles to Remember When Building B2B Ecommerce Websites appeared first on SitePoint.

25+ JavaScript Shorthand Coding Techniques

Aug 26, 2019

Description:

Child between piles of books

This really is a must read for any JavaScript developer. I have written this guide to shorthand JavaScript coding techniques that I have picked up over the years. To help you understand what is going on, I have included the longhand versions to give some coding perspective.

August 26th, 2019: This article was updated to add new shorthand tips based on the latest specifications. If you want to learn more about ES6 and beyond, sign up for SitePoint Premium and check out our extensive library of modern JavaScript resources.

1. The Ternary Operator

This is a great code saver when you want to write an if..else statement in just one line.

Longhand:

const x = 20;
let answer;

if (x > 10) {
  answer = "greater than 10";
} else {
  answer = "less than 10";
}

Shorthand:

const answer = x > 10 ? "greater than 10" : "less than 10";

You can also nest your if statement like this:

const answer = x > 10 ? "greater than 10" : x < 5 ? "less than 5" : "between 5 and 10";

2. Short-circuit Evaluation Shorthand

When assigning a variable value to another variable, you may want to ensure that the source variable is not null, undefined, or empty. You can either write a long if statement with multiple conditionals, or use a short-circuit evaluation.

Longhand:

if (variable1 !== null && variable1 !== undefined && variable1 !== '') {
  let variable2 = variable1;
}

Shorthand:

const variable2 = variable1 || 'new';

Don’t believe me? Test it yourself (paste the following code in es6console):

let variable1;
let variable2 = variable1 || 'bar';
console.log(variable2 === 'bar'); // prints true

variable1 = 'foo';
variable2 = variable1 || 'bar';
console.log(variable2); // prints foo

Do note that if you set variable1 to false or 0, the value bar will be assigned.

3. Declaring Variables Shorthand

It's good practice to declare your variable assignments at the beginning of your functions. This shorthand method can save you lots of time and space when declaring multiple variables at the same time.

Longhand:

let x;
let y;
let z = 3;

Shorthand:

let x, y, z = 3;

4. If Presence Shorthand

This might be trivial, but worth a mention. When doing “if checks”, the comparison operator can sometimes be omitted.

Longhand:

if (likeJavaScript === true)

Shorthand:

if (likeJavaScript)

Note: these two examples are not exactly equal, as the shorthand check will pass as long as likeJavaScript is a truthy value.

Here is another example. If a is NOT equal to true, then do something.

Longhand:

let a;

if ( a !== true ) {
  // do something...
}

Shorthand:

let a;

if ( !a ) {
  // do something...
}

5. JavaScript For Loop Shorthand

This little tip is really useful if you want plain JavaScript and don't want to rely on external libraries such as jQuery or lodash.

Longhand:

const fruits = ['mango', 'peach', 'banana'];

for (let i = 0; i < fruits.length; i++)

Shorthand:

for (let fruit of fruits)

If you just wanted to access the index, do:

for (let index in fruits)

This also works if you want to access keys in a literal object:

const obj = { continent: 'Africa', country: 'Kenya', city: 'Nairobi' }

for (let key in obj) console.log(key)
// output: continent, country, city

Shorthand for Array.forEach:

function logArrayElements(element, index, array) {
  console.log("a[" + index + "] = " + element);
}

[2, 5, 9].forEach(logArrayElements);
// a[0] = 2
// a[1] = 5
// a[2] = 9

6. Short-circuit Evaluation

Instead of writing six lines of code to assign a default value if the intended parameter is null or undefined, we can simply use a short-circuit logical operator and accomplish the same thing with just one line of code.

Longhand:

let dbHost;

if (process.env.DB_HOST) {
  dbHost = process.env.DB_HOST;
} else {
  dbHost = 'localhost';
}

Shorthand:

const dbHost = process.env.DB_HOST || 'localhost';

7. Decimal Base Exponents

You may have seen this one around. It’s essentially a fancy way to write numbers without the trailing zeros. For example, 1e7 essentially means 1 followed by 7 zeros. It represents a decimal base (which JavaScript interprets as a float type) equal to 10,000,000.

Longhand:

for (let i = 0; i < 10000000; i++) {}

Shorthand:

for (let i = 0; i < 1e7; i++) {}

// All the below will evaluate to true
1e0 === 1;
1e1 === 10;
1e2 === 100;
1e3 === 1000;
1e4 === 10000;
1e5 === 100000;

8. Object Property Shorthand

Defining object literals in JavaScript makes life much easier. ES6 provides an even easier way of assigning properties to objects. If the variable name is the same as the object key, you can take advantage of the shorthand notation.
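As a quick sketch of the idea (the variable names are our own):

const name = 'Luke';
const age = 23;

// Longhand:
const person1 = { name: name, age: age };

// Shorthand — the key is inferred from the variable name:
const person2 = { name, age };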

The post 25+ JavaScript Shorthand Coding Techniques appeared first on SitePoint.

SitePoint Premium New Releases: Form Design + Cloning Tinder

Aug 23, 2019

Description:

We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Form Design Patterns

At first glance, forms are simple to learn. But when we consider the journeys we need to design, the users we need to design for, the browsers and devices being used, and the need to ensure that the result is simple and inclusive, form design becomes a far more interesting and bigger challenge.

➤ Read Form Design Patterns.

Cloning Tinder Using React Native Elements and Expo

In this tutorial, we’ll be cloning the most famous dating app, Tinder. We’ll then learn about a UI framework called React Native Elements, which makes styling React Native apps easy. Since this is just going to be a layout tutorial, we’ll be using Expo, as it makes setting things up easy.

➤ Read Cloning Tinder Using React Native Elements and Expo.

And More to Come…

We're releasing new content on SitePoint Premium regularly, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Form Design + Cloning Tinder appeared first on SitePoint.

How to Use Windows Subsystem for Linux 2 and Windows Terminal

Aug 22, 2019

Description:

Using Windows Subsystem for Linux 2 and Windows Terminal

In this article, you’ll learn how you can set up and run a local Linux shell interface in Windows without using a virtual machine. This is not like using terminals such as Git Bash or cmder, which have a subset of UNIX tools added to $PATH. It’s actually like running a full Linux kernel on Windows that can execute native Linux applications. That’s pretty awesome, isn’t it?

If you’re an experienced developer, you already know that Linux is the best platform on which to build and run server-based solutions using open-source technologies. While it’s possible to run the same on Windows, the experience is not as great. The majority of cloud hosting companies offer Linux to clients to run their server solutions in a stable environment. To ensure software works flawlessly on the server machine just like on the local development machine, you need to run identical platforms. Otherwise, you may run into configuration issues.

When working with open-source technologies to build a project, you may encounter a dependency that runs great on Linux but isn’t fully supported on Windows. As a result, Windows users will be required to perform one of the following tasks in order to contribute to the project:

Dual-boot Windows and Linux (switch to Linux to contribute code)
Run a Linux virtual machine using a platform such as Vagrant, VirtualBox, VMware etc.
Run the project application inside a Docker container

All the above solutions require several minutes from launch to have a full Linux interface running. With the new Windows Subsystem for Linux version 2 (WSL2), it takes a second or less to access the full Linux shell. This means you can now work on Linux-based projects inside Windows with speed. Let's look into how we can set up one in a local machine.

Installing Ubuntu in Windows

First, you'll need to be running the latest version of Windows. In my case, it's build 1903. Once you've confirmed this, you'll need to activate the Windows Subsystem for Linux feature. Simply go to Control Panel -> Programs -> Turn Windows features on or off. Look for "Windows Subsystem for Linux" and mark the checkbox. Give Windows a minute or two to activate the feature. Once it's done, click the restart machine button that appears next.

Enabling the WSL feature

Next, go to the Windows Store and install Ubuntu. The first Ubuntu option will install the latest version. The other Ubuntu options allow you to install an older supported version.

Microsoft Store Linux

Once the installation is complete, you'll need to launch it from the menu. Since this is the first time, you’ll need to wait for the Ubuntu image to be downloaded and installed on your machine. This is a one-time step. The next time you launch, you’ll access the Linux Shell right away.

Once the image installation is complete, you’ll be prompted to create a new root user account inside this shell:

Installing Ubuntu in the command line

After you’ve created your credentials, feel free to type any Linux command to confirm you’re truly accessing a native Linux shell:

Ubuntu usage commands

You’ll be pleased to note that git, python3, ssh, vim, nano, curl, wget and many other popular tools are available out of the box. In a later section, we'll use the sudo apt-get command to install more frameworks. First, let's look at several ways we can access this new Linux shell terminal interface. It's probably a good idea to upgrade the currently installed packages:

$ sudo apt-get update && sudo apt-get upgrade

Accessing Linux Shell Interface

There are several interesting ways of accessing the Linux shell interface.

Go to the Windows Start menu and type "Ubuntu". You can pin it to Start for quicker access.

Open Command Prompt or Windows PowerShell and execute the command bash

In Windows explorer, SHIFT + right-mouse click a folder to open a special context menu. Click Open Linux shell here.

In Windows explorer, navigate to any folder you desire, then in the address bar type wsl, then press enter.

In Visual Studio Code, change the default terminal to wsl.

VS Code WSL Terminal

If you come across new ways, please let me know. Let's set up Node.js in the following section.

The post How to Use Windows Subsystem for Linux 2 and Windows Terminal appeared first on SitePoint.

These Are the Best Developer Tools & Services

Aug 22, 2019

Description:

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

As you've learned through experience, there's a lot involved in finding the right developer tools or services for the task at hand.

It's a challenge: more and more software products and services appear on the market every year, yet the choice doesn't get any easier. This is especially true where app developers are trying to bridge the gap between software development and operations.

As you will see, open-source solutions go a long way toward resolving some of these problems, and there are services that can save developers both time and money.

That's the case with the 6 products and services described below.

The post These Are the Best Developer Tools & Services appeared first on SitePoint.

Getting Started with React Native

Aug 21, 2019

Description:

With the ever-increasing popularity of smartphones, developers are looking into solutions for building mobile applications. For developers with a web background, frameworks such as Cordova and Ionic, React Native, NativeScript, and Flutter allow us to create mobile apps with languages we’re already familiar with: HTML, XML, CSS, and JavaScript.

In this guide, we’ll take a closer look at React Native. You’ll learn the absolute basics of getting started with it. Specifically, we’ll cover the following:

what React Native is
what Expo is
how to set up a React Native development environment
how to create an app with React Native

Prerequisites

This tutorial assumes that you’re coming from a web development background. The minimum requirement for you to be able to confidently follow this tutorial is to know HTML, CSS, and JavaScript. You should also know how to install software on your operating system and work with the command line. We’ll also be using some ES6 syntax, so it would help if you know basic ES6 syntax as well. Knowledge of React is helpful but not required.

What is React Native?

React Native is a framework for building apps that work on both Android and iOS. It allows you to create real native apps using JavaScript and React. This differs from frameworks like Cordova, where you use HTML to build the UI, which is then simply displayed within the device’s integrated mobile browser (WebView). React Native has built-in components which are compiled to native UI components, while your JavaScript code is executed through a virtual machine. This makes React Native more performant than Cordova.

Another advantage of React Native is its ability to access native device features. There are many plugins which you can use to access native device features, such as the camera and various device sensors. If you’re in need of a platform-specific feature that hasn’t been implemented yet, you can also build your own native modules — although that will require you to have considerable knowledge of the native platform you want to support (Java or Kotlin for Android, and Objective C or Swift for iOS).

If you’re coming here and you’re new to React, you might be wondering what it is. React is a JavaScript library for the Web for building user interfaces. If you’re familiar with MVC, it’s basically the View in MVC. React’s main purpose is to allow developers to build reusable UI components. Examples of these components include buttons, sliders, and cards. React Native took the idea of building reusable UI components and brought it into mobile app development.

What is Expo?

Before coming here, you might have heard of Expo. It’s even recommended in the official React Native docs, so you might be wondering what it is.

In simple terms, Expo allows you to build React Native apps without the initial headache that comes with setting up your development environment. It only requires you to have Node installed on your machine, and the Expo client app on your device or emulator.

But that’s just how Expo is initially sold. In reality, it’s much more than that. Expo is actually a platform that gives you access to tools, libraries and services for building Android and iOS apps faster with React Native. Expo comes with an SDK which includes most of the APIs you can ask for in a mobile app development platform:

Camera
ImagePicker
Facebook
GoogleSignIn
Location
MapView
Permissions
Push Notifications
Video

Those are just a few of the APIs you get access to out of the box if you start building React Native apps with Expo. Of course, these APIs are also available to you via native modules if you develop your app using the standard React Native setup.

Plain React Native or Expo?

The real question is which one to pick up — React Native or Expo? There’s really no right or wrong answer. It all depends on the context and what your needs are at the moment. But I guess it’s safe to assume that you’re reading this tutorial because you want to quickly get started with React Native. So I’ll go ahead and recommend that you start out with Expo. It’s fast, simple, and easy to set up. You can dive right into tinkering with React Native code and get a feel of what it has to offer in just a couple of hours.

That said, I’ve still included the detailed setup instructions for standard React Native for those who want to do it the standard way. As you begin to grasp the different concepts, and as the need for different native features arises, you’ll actually find that Expo is kind of limiting. Yes, it has a lot of native features available, but not all the native modules that are available to standard React Native projects are supported.

Note: projects like unimodules are beginning to close the gap between standard React Native projects and Expo projects, as they allow developers to create native modules that work for both React Native and ExpoKit.

Setting Up the React Native Development Environment

In this section, we'll set up the React Native development environment on all three major platforms: Windows, Linux, and macOS. We'll also cover how to set up the Android emulators and iOS simulators. Lastly, we'll cover how to set up Expo. If you just want to get started quickly, I recommend that you scroll down to the "Setting Up Expo" section.

Here are the general steps for setting up the environment. Be sure to match these general steps to the steps for each platform:

install JDK
install Android Studio or Xcode
install Watchman
update the environment variables
install the emulator
install Node
install React Native CLI

You can skip to the section relevant to your operating system. Some steps — like setting up Android Studio — are basically the same for each operating system, so I’ve put them in their own section:

setting up on Windows
setting up on Linux
setting up on macOS
setting up Android Studio
install Node
setting up Expo
setting up emulators
install React Native CLI
troubleshooting common errors

Setting Up on Windows

This section will show you how to install and configure the software needed to create React Native apps on Windows. Windows 10 was used for testing.

Install Chocolatey

Windows doesn't come with its own package manager that we can use to install the needed tools, so the first thing we'll do is install one called Chocolatey. You can install it by executing the following command at the command line or in Windows PowerShell:

@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

We can now install the other tools we need by simply using Chocolatey.

Install Python

Python comes with the command-line tools required by React Native:

choco install -y python2

Install JDK

The JDK allows your computer to understand and run Java code. Be sure to install JDK version 8 as that’s the one required by React Native:

choco install jdk8

Install NVM

Node has an installer for Windows, but it's better to use NVM for Windows, which enables you to install multiple versions of Node so that you can test new versions, or use a different version depending on the project you're currently working on. Download nvm-setup.zip, extract it, and execute nvm-setup.exe to install it.
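Once NVM for Windows is installed, managing Node versions looks something like this (the version number here is just an example):

nvm install 12.16.3
nvm use 12.16.3
nvm list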

Install Watchman

Watchman optimizes the compilation time of your React Native app. It's an optional install if you're not working on a large project. You can find the installation instructions on the Watchman website.

Update the Environment Variables

This is the final step in setting up React Native on Windows: updating the environment variables so that the operating system is aware of all the tools required by React Native. Follow these steps right before you install the React Native CLI.

Go to Control Panel → System and Security → System. Once there, click the Advanced system settings menu on the left.

Windows advanced system settings

That will open the system properties window. Click on the Environment Variables button:

System properties

Under the User variables section, highlight the Path variable and click the Edit button.

On the edit screen, click the New button and enter the paths to the Android SDK and its platform tools. For me, they're at C:\users\myUsername\AppData\Local\Android\Sdk and C:\users\myUsername\AppData\Local\Android\Sdk\platform-tools. Note that this is also where you add the path to the JDK if it isn't already added:

add path
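As an aside, the ANDROID_HOME variable can also be set from the command line with setx. Here's a minimal example, with myUsername standing in for your own user folder; the GUI steps above remain the documented route, and the new value only applies to freshly opened terminal sessions:

setx ANDROID_HOME "C:\users\myUsername\AppData\Local\Android\Sdk"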

The post Getting Started with React Native appeared first on SitePoint.

How the Top 1% of Candidates Ace Their Job Interviews

Aug 19, 2019

Description:

You've done it.
 
You've made it through the initial screening process. You've just earned an interview with one of the most prestigious and successful companies in your industry. As you're waiting in the office with three other candidates, a fourth candidate walks in.
 
He has an interview scheduled, the same as you.
 
There's something odd about this interviewee. He already knows everyone there. He's on a first-name basis with the receptionist. Everyone likes him and thinks highly of him. Instead of waiting in the lobby with the rest of you, he's immediately ushered into one of the offices.
 
Who is this guy?
 

This is an everyday reality for elite job candidates

 
How is this possible?
 
Candidates like these are pretty uncommon. Not because they're so special, but because of their decision-making process. What makes it different?
 

They win coveted jobs and promotions in the face of intense competition.
They ask for and receive substantially higher salaries than their coworkers.
Employers create positions specifically for them (to keep them).
They earn positions before they're publicly available.

These job candidates seem to receive preferential treatment wherever they go.
 
Something is going on, but what?
 
These elite candidates have a very different set of attitudes, behaviors, and habits than most other employees. Is it simply because they're better than everyone else?
 
Not at all.

The post How the Top 1% of Candidates Ace Their Job Interviews appeared first on SitePoint.

9 Key Ways to Turbocharge Your Design Career

Aug 19, 2019

Description:

9 Key Ways to Turbocharge Your Design Career

This article was created in partnership with StudioWorks. Thank you for supporting the partners who make SitePoint possible.

Sure, you need a certain minimum viable level of design skill if you want to have a successful career as a designer. But a lot more than that goes into it, too. Think about how many people you know who can cook amazing food but would never last five minutes in a restaurant kitchen during the lunch rush.

It would be great if we could just sit down, design pretty things, and go home. Or better yet, just chill in our home studios, creating. Like it or not, though, design is a business like everything else, and that means you’re going to have to put time, effort, and sometimes money into cultivating the soft skills and business side of your design career.

This means managing your time well, marketing yourself, building a brand, experimenting, maybe launching a side business, and generally just putting your name and work out there for people to find.

These days, it’s not enough to have a portfolio. That’s just table stakes. You need to plan out your whole career — with the understanding that plans change.

Let’s look at some of the things you need to do to develop your career until you’re basically the next Jen Simmons or Jeffrey Zeldman.

Get Your Communication Skills Flowing

Communication skills come naturally to some, and not so naturally to others. In both cases, those skills are rather drastically affected by the people you have to communicate with most. Most of us find ways to convey our thoughts and intentions clearly to our friends, and also to people in our industry and hobby communities. We learn the lingo, we learn which topics encourage discussion, and which are best avoided.

Writing for anyone who’s not a part of your immediate community, and especially writing for people who don’t know what you know, is hard. Speaking to them in person can be harder, depending on how you, as a person, prefer to communicate. But, all the same, you have to.

Even if you work in an agency among other designers right now, there will inevitably come a time when you have to pitch clients on the benefits of your work, explain your processes to a newbie, or defend your decisions to developers who push back or to other people who just don’t know what you know.

If there’s any single thing to take away from this article, it’s this: focus on your communication skills. They will affect your career more than anything else on this list. If you’re looking for a place to start learning those skills, CopyBlogger always has you covered — at least for the writing side of it.

Branch Out into Side Businesses

Some side projects are great, strictly because they allow us to get out of our comfort zones, try new things and regain a sense of creative discovery.

Others may overlap with the activities you’d use to build a personal brand, which we’ll get into shortly, with the added benefit that they can bring in extra money while you’re establishing yourself as an expert in the field.

Here are some of the more popular ways of doing this.

1. Courses

Sure, you can throw some tutorials onto your blog, or onto YouTube, for free. And you probably should. But if you want to make a side business out of teaching others what you do, and further your career in the process, you’re going to need an actual product. This is where courses come in.

Quality video courses, which are quite popular these days, can be expensive and time-consuming to set up. It’s gotten easier, though, now that you can use all-in-one course development and delivery services like Kajabi. This platform can help you manage everything relating to your premium educational content and running the business around it.

You can create membership sites, host live events, create automation funnels and upsells, maintain a blog, and manage contacts all in one place, so it’s not as hard as it used to be. However, you still have to get a half-decent camera and a half-decent microphone, and ideally learn some basic video editing skills.

This is a side hustle I’d frankly only recommend if you’ve got some time on your hands, and a bit of extra money for some beginner hardware. It can be quite rewarding, though, so don’t dismiss the idea out of hand.

2. Live Streaming

I mentioned live events in the last section, so I thought I’d mention streaming as its own thing. Streaming doesn’t have to be educational, although education is probably the best way to sell your expertise. You could just sit there and share designer memes on Twitch if you want.

The problem is mostly that the requirements for video and audio haven’t changed, and depending on how you set up your stream schedule, it can be even more demanding than making video courses.

Then again, if you don’t mind not making a lot of money, and want to do it for fun, it’s still a great way to “meet” new people, and to be seen.

3. Paid Newsletters

Now this is an option I’d save for when you’ve already built a bit of an audience by other means, such as social media and/or blogging. But Substack has made it easier than ever for people to pay writers directly.

If you’ve got wisdom to share, and if you think people would be willing to pay to have that wisdom beamed straight into their inboxes, go on and have at it.

4. Make Stuff for Other Designers

Plenty of designers and agencies have kept up a healthy “passive” revenue stream by making resources for other designers.

Be it a template, a WordPress theme, a Sketch UI kit, an icon font, or whatever else, if it’s valuable to you because it solves problems that you have, then there’s a good chance your peers will be willing to pay for it. Just don’t forget to also give stuff away once in a while. Gratitude goes a long way in the design world.

The post 9 Key Ways to Turbocharge Your Design Career appeared first on SitePoint.

SitePoint Premium New Releases: Going Offline + React Native

Aug 16, 2019

Description:

We're working hard to keep you on the cutting edge of your field with SitePoint Premium. We've got plenty of new books to check out in the library — let us introduce you to them.

Going Offline

Jeremy Keith introduces you to service workers (and the code behind them) to show you the latest strategies in offline pages. Learn the ins and outs of fetching and caching, enhance your website’s performance, and create an ideal offline experience for every user, no matter their connection.

➤ Read Going Offline.

Integrating AdMob in React Native and Expo

Google AdMob is one way to install ads into any mobile application in order to monetize it. Installing and configuring AdMob in bare React Native can be a cumbersome process. But it’s relatively simple to install when using a toolchain like Expo — we'll show you how.

➤ Read Integrating AdMob in React Native and Expo.

And More to Come…

We're releasing new content on SitePoint Premium regularly, so we'll be back next week with the latest updates. And don't forget: if you haven't checked out our offering yet, take our library for a spin.

The post SitePoint Premium New Releases: Going Offline + React Native appeared first on SitePoint.

How to Build a Cipher Machine with JavaScript

Aug 15, 2019

Description:

I was overjoyed recently when I read the news that the British mathematician Alan Turing will feature on the Bank of England's new £50 note. Turing occupies a special place in the hearts of computer nerds for effectively writing the blueprints for the computer. He also helped to break the notoriously difficult naval Enigma code used by Nazi U-boats in World War II. In honor of this, I decided to write a quick tutorial on building a cipher machine using the JavaScript skills that are covered in my book JavaScript Novice To Ninja.

The cipher we'll be using is the Caesar cipher, named after the Roman emperor, Julius Caesar. It is one of the most simple ciphers there are and simply shifts each letter along a set number of places. For example, the phrase 'Hello World' would become 'KHOOR ZRUOG' using a shift of 3 (which it is the shift that Julius Caesar is thought to have used).

Our cipher machine

You can see an example of the finished code here. Have a play around with writing some secret messages to get a feel for how it works.

To get started, fire up your favorite text editor and save the following as caesar.html:

The post How to Build a Cipher Machine with JavaScript appeared first on SitePoint.

10 Tools to Help You Manage Your Agile Workflows

Aug 15, 2019

Description:

Workflows

This article was created in partnership with monday.com. Thank you for supporting the partners who make SitePoint possible.

Software development remains a complex task which balances analysis, planning, budget constraints, coding, testing, deployment, issue fixing, and evaluation. Large projects often fail because no one can comprehend the full extent of requirements from the start. Those requirements then change with each revision of the product.

An agile development approach can mitigate these risks. There are many flavors of 'agile', but most rapidly evolve a product over time. Self-organizing teams of stakeholders, designers, developers, and testers collaborate to produce a minimum viable product, which is then extended and revised during a series of iterations, or sprints.

Ideally, a fully-working product is available at the end of every sprint. Changing requirements can determine the priorities for the next sprint.

Crucial Collaboration

Communication distinguishes agile from more traditional waterfall workflows. Teams work together on a particular feature so developers and designers can quickly provide feedback when a requirement becomes impractical or more cost-effective options can be identified.

A variety of tools and software is available to help teams collaborate. There are two general options:

Separate tools for specific tasks. For example, a feature may be described in a document, which is transferred to a to-do list, which becomes a pull request, and inevitably has bugs reported.
All-in-one tools which manage the whole process.

The following tools can all help manage your agile workflow.

monday.com

Within just a few years, monday.com has rapidly become a full agile management solution for 80,000 organizations.

monday.com dashboard

monday.com offers a completely customizable application for numerous use-cases such as agile project management. Powerful features include:

quick-start project templates (there are over 100 templates that are completely customizable to fit your needs)
attractive at-a-glance project state dashboards, so you can easily track progress and identify bottlenecks in a "big picture" view
intuitive collaboration with team members and clients using @mentions
easy file sharing, so you'll always know where your most updated files are
multiple views to track progress (reports, Kanban boards, Gantt charts, calendars, timelines, etc.)
task management, time and deadline tracking
automations and integrations with other applications to keep everything in one place, so you can focus on the important stuff.

Prices start from $25 per month for five users, but a 30-day free trial is available so you can assess the system.

The post 10 Tools to Help You Manage Your Agile Workflows appeared first on SitePoint.