Sumedh Meshram

A Personal Blog

Healthcheck endpoints in C# in MVC projects using ASP.NET Core, and writing results to Azure Application Insights

Every developer wants to build a system that never breaks, but in reality things go wrong. The best systems are built to expect that and to handle problems gracefully, rather than just failing silently.

Maybe your database becomes unavailable (e.g. runs out of hard disk space) and your failover doesn’t work – or maybe a third party web service that you depend on stops working.

Sometimes your application can be programmed to recover from things going wrong – here’s my post on The Polly Project if you want to find out more about one way of doing that – but when there’s a catastrophic failure that you can’t recover from, you want to be alerted as soon as it happens, rather than hearing about it from a customer.

And it’s kind to provide a way for your customers to find out about the health of your system. As an example, just check out the monitoring hub below from Postcodes.io – this is a great example of being transparent about key system metrics like service status, availability, performance, and latency.

[Image: the Postcodes.io monitoring hub]

MVC projects in ASP.NET Core have a built-in feature to provide information on the health of your website. It’s really simple to add it to your site, and this instrumentation comes packaged as part of the default ASP.NET Core toolkit. There are also some neat extensions available on NuGet to format the data as JSON, add a nice dashboard for these healthchecks, and finally to push the outputs to Azure Application Insights. As I’ve been implementing this recently, I wanted to share with the community how I’ve done it.

Scott Hanselman has blogged about this previously, but there have been some updates since he wrote about this which I’ve included in my post.

Returning system health from an ASP.NET Core v2.2 website

Before I start – I’ve uploaded all the code to GitHub here so you can pull the project and try yourself. You’ll obviously need to update subscription keys, instrumentation keys and connection strings for databases etc.

Edit your MVC site’s Startup.cs file and add the line below to the ConfigureServices method:

services.AddHealthChecks();

And then add the line of code below to the Configure method.

app.UseHealthChecks("/healthcheck");
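
For context, here’s a minimal sketch of how those two lines sit alongside the standard MVC registrations in an ASP.NET Core 2.2 Startup.cs (usings and the rest of the template code are omitted, and your own file will contain more than this):

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddHealthChecks();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseHealthChecks("/healthcheck");
        app.UseMvcWithDefaultRoute();
    }
}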

That’s it. Now your website has a URL available to tell whether it’s healthy or not. When I browse to my local test site at the URL below…

http://localhost:59658/healthcheck

…my site returns the word “Healthy”. (Obviously your local test site’s URL will have a different port number, but you get the idea.)

So this is useful, but it’s very basic. Can we amp this up a bit – let’s say we want to see a JSON representation of this? Or what about our database status? Fortunately, there’s a great series of libraries from Xabaril (available on GitHub here) which massively extend the core healthcheck functions.

Returning system health as JSON

First, install the AspNetCore.HealthChecks.UI NuGet package.

Install-Package AspNetCore.HealthChecks.UI

Now I can change the code in my Startup.cs file’s Configure method to specify some more options.

The code below changes the response output to be JSON format, rather than just the single word “Healthy”.

app.UseHealthChecks("/healthcheck", new HealthCheckOptions
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });
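
If your editor can’t resolve HealthCheckOptions or UIResponseWriter, these are the using directives the snippet above most likely needs – the namespace names below are based on the standard packages, so double-check them against the versions you have installed:

using HealthChecks.UI.Client;                        // UIResponseWriter
using Microsoft.AspNetCore.Diagnostics.HealthChecks; // HealthCheckOptions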

And as you can see in the image below, when I browse to the healthcheck endpoint I configured as “/healthcheck”, it’s now returning JSON:

[Image: basic health check JSON output]

What about checking the health of other system components, like URIs, SQL Server or Redis?

Xabaril has got you covered here as well. For these three types of things, I just install the NuGet packages with the commands below:

Install-Package AspNetCore.HealthChecks.Uris
Install-Package AspNetCore.HealthChecks.Redis
Install-Package AspNetCore.HealthChecks.SqlServer

Check out the project’s ReadMe file for a full list of what’s available.

Then change the code in the ConfigureServices method in the project’s Startup.cs file.

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                  healthQuery: "SELECT 1;",
                  name: "Sql Server", 
                  failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                        name: "Redis", 
                        failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:59658/Home/Index"),
                        name: "Base URL",
                        failureStatus: HealthStatus.Degraded);

Obviously in the example above, I have my connection strings stored in my appsettings.json file.
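
For reference, here’s a minimal sketch of what that section of appsettings.json might look like – the connection string values below are placeholders, not real ones:

{
  "ConnectionStrings": {
    "SqlServerDatabase": "Server=.;Database=MyDatabase;Trusted_Connection=True;",
    "RedisCache": "localhost:6379"
  }
}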

When I browse to the healthcheck endpoint now, I get a much richer JSON output.

[Image: richer health check JSON output]

Can this information be displayed in a more friendly dashboard?

We don’t need to just show JSON or text output – Xabaril allows the creation of a clear and simple dashboard to display the health checks in a user-friendly form. I updated my code in the Startup.cs file – first of all, my ConfigureServices method now has the code below:

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                  healthQuery: "SELECT 1;",
                  name: "Sql Server", 
                  failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                        name: "Redis", 
                        failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:59658/Home/Index"),
                        name: "Base URL",
                        failureStatus: HealthStatus.Degraded);
        
services.AddHealthChecksUI(setupSettings: setup =>
{
    setup.AddHealthCheckEndpoint("Basic healthcheck", "https://localhost:59658/healthcheck");
});

And my Configure method also has the code below.

app.UseHealthChecks("/healthcheck", new HealthCheckOptions
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });
 
app.UseHealthChecksUI();

Now I can browse to a new endpoint which presents the dashboard below:

http://localhost:59658/healthchecks-ui#/healthchecks

[Image: the default health checks UI dashboard]
And if you don’t like the default CSS, you can configure it to use your own. Xabaril has an example of a css file to include here, and I altered my Configure method to the code below which uses this CSS file.

app.UseHealthChecks("/healthcheck", new HealthCheckOptions
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    })
    .UseHealthChecksUI(setup =>
    {
        setup.AddCustomStylesheet(@"wwwroot\css\dotnet.css");
    });

And now the website is styled slightly differently, as you can see in the image below.

[Image: the health checks UI dashboard with the custom stylesheet]

What happens when a system component fails?

Let’s break something. I’ve turned off SQL Server, and a few seconds later the UI automatically refreshes to show the overall system health status has changed – as you can see, the SQL Server check has been changed to a status of “Degraded”.

[Image: the dashboard showing the SQL Server check in a Degraded state]

And this same error appears in the JSON message.

[Image: the JSON output showing the Degraded status]

Can I monitor these endpoints in Azure Application Insights?

Sure – but first make sure your project is configured to use Application Insights.

If you’re not familiar with Application Insights and .NET Core applications, check out some more information here.

If it’s not set up already, you can add the Application Insights Telemetry by right clicking on your project in the Solution Explorer window of VS2019, selecting “Add” from the context menu, and choosing “Application Insights Telemetry…”. This will take you through the wizard to configure your site to use Application Insights.

[Image: adding Application Insights Telemetry from the Visual Studio 2019 context menu]
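
If you prefer to wire this up in code rather than through the wizard, the single line below in ConfigureServices does roughly the same job – this is a sketch that assumes the Microsoft.ApplicationInsights.AspNetCore package is installed and that your instrumentation key lives in configuration under a hypothetical "ApplicationInsights:InstrumentationKey" entry:

services.AddApplicationInsightsTelemetry(Configuration["ApplicationInsights:InstrumentationKey"]);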

Once that’s done, I changed the code in my Startup.cs file’s ConfigureServices method to explicitly push to Application Insights, as shown in the snippet below:

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                  healthQuery: "SELECT 1;",
                  name: "Sql Server", 
                  failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                        name: "Redis", 
                        failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:44398/Home/Index"),
                        name: "Base URL",
                        failureStatus: HealthStatus.Degraded)
        .AddApplicationInsightsPublisher();
        
services.AddHealthChecksUI(setupSettings: setup =>
{
    setup.AddHealthCheckEndpoint("Basic healthcheck", "https://localhost:44398/healthcheck");
});
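
One thing to note: the AddApplicationInsightsPublisher extension method lives in a separate Xabaril package, so if it doesn’t resolve you’ll most likely need to install the publisher package as well – at the time of writing the package name is the one below, but double-check it on NuGet:

Install-Package AspNetCore.HealthChecks.Publisher.ApplicationInsights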

Now I’m able to view these results in Application Insights – the way I did this was:

  • First browse to portal.azure.com and click on the “Application Insights” resource which has been created for your web application (it’ll probably be at the top of your recently created resources).
  • Once that Application Insights blade opens, click on the “Metrics” menu item (highlighted in the image below):

[Image: the Metrics menu item in the Application Insights blade]

When the chart window opens – it’ll look like the image below – click on the “Metric Namespace” dropdown and select the “azure.applicationinsights” value (highlighted below).

[Image: selecting the azure.applicationinsights metric namespace]

Once you’ve selected the namespace to plot, choose the specific metric from that namespace. I find that the “AspNetCoreHealthCheckStatus” metric is most useful to me (as shown below).

[Image: selecting the AspNetCoreHealthCheckStatus metric]

And finally I also choose to display the “Min” value of the status (as shown below), so if anything goes wrong the value plotted will be zero.

[Image: choosing the Min aggregation]

After this, you’ll have a graph displaying availability information for your web application. As you can see in the graph below, it’s pretty clear when I turned my SQL Server instance back on and the application went from an overall health status of ‘Degraded’ to ‘Healthy’.

[Image: Application Insights graph showing the health status changing from Degraded to Healthy]

Wrapping up

I’ve covered a lot of ground in this post – from .NET Core 2.2’s built-in HealthCheck extensions, to building on that with community content to check other site resources like SQL Server and Redis, adding a helpful dashboard, and finally pushing results to Azure Application Insights. I’ve also created a bootstrapper project on GitHub to help anyone else interested in getting started with this – I hope it helps you.

5 DevOps tools you should know in 2019

Written by Marius Rimkus
on July 22, 2019

DevOps culture is now an integral part of every tech-savvy business and plays a role in many business processes, ranging from project planning to software delivery. As cloud services prevail today, the need for related supplementary services is growing rapidly. DevOps technologies are multiplying as well, so how should one choose the right tools to automate their work? There are a lot of opinions, but I will share the list of DevOps technologies I find the most important to master in 2019.

 

 

Ansible

Ansible is a fairly simple software provisioning, configuration management, and application deployment tool which ensures faster time-to-market for your applications. Whether you are a one-man company or an enterprise, you can automate orchestration, cloud provisioning, machine deployment, and other tasks. I like Ansible because it is not as complex as Puppet or Chef, but it speeds up productivity just as well.

  • Ansible playbooks are written in YAML, which is one of the easiest data-serialization languages for creating configuration files.
  • It’s fast, performs all its functions over SSH and doesn't require agent installation.
  • It allows you to create groups of servers, describe how these should be configured and what actions should be performed on these machines.

 

Jenkins

A lot of DevOps engineers call Jenkins the best CI/CD tool available in the market, since it’s incredibly useful. Jenkins is an automation server that is written in Java and is used to report changes, conduct live testing and distribute code across multiple machines. As Jenkins has a built-in GUI and over 1000 plugins to support building and testing your application, it is considered a really powerful, yet easy to use tool. Thanks to these plugins, Jenkins integrates well with practically every other instrument in the continuous integration and continuous delivery toolchain.

  • Easy to install, with a lot of support available from the community.
  • 1000+ plugins available and easy to create your own, if needed.
  • It can be used to publish results and send email notifications.

 

Docker

Docker is a software containerization platform that allows DevOps teams to build, ship, and run distributed processes within containers. This gives developers the ability to create predictable environments that are isolated from the rest of the applications and can be run anywhere. Containers are isolated but share the same OS kernel, so you get to use hardware resources more efficiently compared to virtual machines.

Each container can hold a single process, like a web server or database management system. You can create a cluster of containers distributed across different nodes to have your application up and running in both load-balancing and high-availability modes. Containers can communicate on a private network, as you most probably want to keep some parts of your application private for security purposes. Simply expose your web server to the Internet and you are good to go.

What I like most is that you can install Docker on your computer to run containers locally to make some ad-hoc software tests without installing its dependencies globally. When you are done, you simply terminate your Docker container and your computer is as clean as new.

  • Build once, run anywhere! You can package an application from your laptop and run it unmodified on any public/private cloud or bare metal server.
  • Containers are lightweight and fast.
  • Docker Hub offers many official and community-built public Docker images.
  • Separating different components of a large application into containers has security benefits: if one container is compromised, the others remain unaffected.

 

Kubernetes

While Docker allows developers to build, ship and run applications in containers easily, Kubernetes makes running containers in a cluster as easy as ever. You can automatically deploy, scale, monitor and manage your cloud-native application with Kubernetes. It is a powerful orchestrator that allows you to manage communication between containerized components, known as pods, and coordinate them as a cluster. 

Kubernetes has now become the heart of a microservices application. The ecosystem around it is expanding by the minute with Cloud Native Computing Foundation ensuring its future success. There are now many additional observability, networking and distributed data storage services that complement Kubernetes in building a loosely coupled distributed system that is resilient, manageable and observable.

  • Open-source orchestrator.
  • Easy container management.
  • Horizontal autoscaling - if you get high loads, you can replicate your pods and balance the load across them to avoid downtime.
  • Self-healing, Automated Rollouts and Rollbacks - if something goes wrong, you can automatically replace, restart, reschedule your containers or rollout/rollback to the desired state of the containerized application.
  • Service Discovery - Kubernetes uses unique IP addresses and can put a set of containers behind a single DNS name. This allows you to easily track and identify your services across the cluster.

 

RabbitMQ

A great messaging and queuing tool which you can use for applications that run on most operating systems. Managing queues, exchanges and routing with it is a breeze. Even if you have an elaborate configuration to build, it’s relatively easy to do so, since the tool is really well documented. You can stream a lot of different high-performance processes and avoid system crashes through a friendly user interface. It’s a durable and robust messaging broker that is worth your attention. As RabbitMQ developers like to say, it’s "messaging that just works".

What is Helm and why you should love it?

Helm is the first application package manager running atop Kubernetes. It allows describing the application structure through convenient helm-charts and managing it with simple commands.

 

Why is Helm important? Because it’s a huge shift in the way server-side applications are defined, stored and managed. Adoption of Helm might well be the key to mass adoption of microservices, as using this package manager simplifies their management greatly.


Why are microservices so important? They have quite a few uses:

  • When there are several microservices instead of a monolithic application, each microservice can be managed, updated and scaled individually
  • Issues with one microservice do not affect the functionality of other components of the application
  • New applications can easily be composed of existing loosely-coupled microservices

Of course, Helm is not the only package manager, nor is it perfect. However, the project is now being actively developed and is growing a passionate community that appreciates the benefits of using Helm charts for software development.

Helm benefits and flaws

Unlike Homebrew or Aptitude desktop package managers, or Azure Resource Manager templates (ARMs) / Amazon Machine Images (AMIs) that are run on a single server, Helm charts are built atop Kubernetes and benefit from its cluster architecture. The main benefit of this approach is the ability to consider scalability from the start. The charts of all the images used by Helm are stored in a registry called Helm Workspace, so the DevOps teams can search them and add to their projects with ease.


For example, say you need to launch a website built with WordPress, Joomla, Django or any other CMS. You expect the website to receive millions of daily visitors from day one, and you must make sure such huge numbers of connections will not lead to freezes or service unavailability.

Using virtualization capabilities ensures scaling, yes. Just keep in mind that an AMI, an ARM template (or a Docker container, for that matter) you use to launch the app will be dependent on the virtual machine it is stored on and will be able to scale only the way virtual machines are scaled: by adding more resources to the pool.

With Helm, we have quite another picture. The application can be composed of clearly defined microservices and we can scale only the ones we need to scale, adding more Kubernetes nodes and pods to the cluster. Instead of working with a holistic image and growing all the resources, you operate a set of images and scale them independently.

The problems begin when you want to launch a new instance of an application that runs, let’s say, 50 microservices. Starting and combining them all manually would be a laborious and error-prone task. With Helm, however, all you need to know is the name of the charts for the images responsible. Launching a new instance is a matter of executing the corresponding Helm chart.
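
For illustration only – the chart and release names below are made up, and the exact syntax depends on your Helm version – launching that instance can be a single command:

helm install my-shop ./my-shop-chart

(With Helm 2 the release name is passed as a flag instead: helm install ./my-shop-chart --name my-shop.)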

The only significant issue with Helm so far is the fact that when two Helm charts have the same labels they interfere with each other and impair the underlying resources. This means it’s better to compose a new image for the project than to add a single Helm chart to it, and it affects rollbacks too. However, the community has found workarounds for the issue and we are sure it will be removed for good in future versions of the tool.

Final thoughts on the future of Helm

We are sure Kubernetes is the future of container orchestration in the cloud, and Helm is the way to use Kubernetes most efficiently. Of course, a DevOps team can do the same using standard kubectl commands, yet working with Helm provides the ability to quickly define, cleanly manage and easily deploy applications. That said, the Kubernetes + Helm duo can (and should) become the basic toolset for any DevOps specialist in the years to come: namely, a helm to navigate the cloud and deliver the containers safely.

Artificial Intelligence in Software Development and Testing

According to Gartner, artificial intelligence will be omnipresent in all spheres of technology and will be prominent among the top investment priorities of CIOs by 2020. Going by the figures of the market research firm, the scope for artificial intelligence in North America in 2019 is approximately $6.36 billion.

Tech giants like Amazon, Facebook, Google, and many others spend huge sums of money on bringing AI into their software.

AI emerged as an enterprise technology and has changed the outlook of everything, including software development and software testing. It is, therefore, important that we take a minute to look into the role of artificial intelligence in software development and testing.

Higher Level of Precision

It is natural for humans to make errors. Even highly skilled testers sometimes end up making mistakes while performing manual testing. With automated testing, the same steps can be executed with precision every time the tests run, and specific outcomes are never missed. Testers are freed from ongoing manual examination, leaving them a more significant proportion of time to develop new automated software tests and manage more sophisticated features.

Artificial intelligence can help to overcome the drawbacks of manual testing. It is practically unsustainable for a software or quality assurance (QA) team to perform a well-managed web app test with more than a thousand users. With the help of automated testing, the user can trigger tens, hundreds, or thousands of virtual users who can communicate with a network, software, or web-based app.

Massive Support for Developers as Well as Testers

Developers can use automated tests run by the machine to catch errors instantly before sending code on for quality assurance. These tests can run automatically whenever source code changes are checked in, and the team or the app builder can be notified if a test result turns out to be unsuccessful. Capabilities like these save developers time and boost their confidence.

Leveraging the Whole Test Scope

In software testing, artificial intelligence lets the user extend the coverage and depth of tests, leading to a massive enhancement in software quality. AI-driven software testing can look into storage capacity and file contents, internal program states, and data tables to ascertain whether the software is behaving as it should. On the whole, test automation can execute more than a thousand different test cases in each test run, offering coverage that would never have been possible through manual testing.

Less Time-Consuming and Helps in Quick Marketing

Because software tests have to be repeated every time source code is altered, repetitive manual tests can prove to be time-consuming and tremendously expensive.

Once created, on the other hand, automated tests backed by machine learning can be run over and over again without incurring any extra expense.

The total time taken for software testing can be reduced from two or three days to a few hours, which indirectly helps to save money.

To Wrap Up

Integrating artificial intelligence (AI) with software testing and software development can help to build a society where software can be swiftly examined, diagnosed, and modified.

Artificial intelligence testing will permit high-quality engineering and will decrease the total time taken for testing and development. As a result, it will help to save time, money, and resources, while allowing testers to focus on prime activities such as launching quality software.

 

Build A Serverless Function

In this tutorial, you’ll build and publish a serverless function that generates QR codes, using Cloudflare Workers.

[Image: demo of the finished QR code generator]

This tutorial makes use of Wrangler, our command-line tool for generating, building, and publishing projects on the Cloudflare Workers platform. If you haven’t used Wrangler, we recommend checking out the “Installing the CLI” part of our Quick Start guide, which will get you set up with Wrangler, and familiar with the basic commands.

If you’re interested in building and publishing serverless functions, this is the guide for you! No prior experience with serverless functions or Cloudflare Workers is assumed.

One more thing before you start the tutorial: if you just want to jump straight to the code, we’ve made the final version of the codebase available on GitHub. You can take that code, customize it, and deploy it for use in your own projects. Happy coding!

Prerequisites

To publish your QR Code Generator function to Cloudflare Workers, you’ll need a few things:

  • A Cloudflare account, and access to the API keys for that account
  • A Wrangler installation running locally on your machine, and access to the command-line

If you don’t have those things quite yet, don’t worry. We’ll walk through each of them and make sure we’re ready to go, before you start creating your application.

You’ll need to get your Cloudflare API keys to deploy code to Cloudflare Workers: see “Finding your Cloudflare API keys” for a brief guide on how to find them.

Generate

Cloudflare’s command-line tool for managing Worker projects, Wrangler, has great support for templates – pre-built collections of code that make it easy to get started writing Workers. We’ll make use of the default JavaScript template to start building your project.

In the command line, generate your Worker project, using Wrangler’s worker-template, and pass the project name “qr-code-generator”:

wrangler generate qr-code-generator
cd qr-code-generator

Wrangler templates are just Git repositories, so if you want to create your own templates, or use one from our Template Gallery, there’s a ton of options to help you get started.

Cloudflare’s worker-template includes support for building and deploying JavaScript-based projects. Inside of your new qr-code-generator directory, index.js represents the entry-point to your Cloudflare Workers application.

All Cloudflare Workers applications start by listening for fetch events, which are fired when a client makes a request to a Workers route. When that request occurs, you can construct responses and return them to the user. This tutorial will walk you through understanding how the request/response pattern works, and how we can use it to build fully-featured applications.

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Fetch and log a request
 * @param {Request} request
 */
async function handleRequest(request) {
  return new Response('Hello worker!', { status: 200 })
}

In your default index.js file, we can see that request/response pattern in action. The handleRequest function constructs a new Response with the body text “Hello worker!”, as well as an explicit status code of 200.

When a fetch event comes into the worker, the script uses event.respondWith to return that new response back to the client. This means that your Cloudflare Worker script will serve new responses directly from Cloudflare’s cloud network: instead of continuing to the origin, where a standard server would accept requests, and return responses, Cloudflare Workers allows you to respond quickly and efficiently by constructing responses directly on the edge.

Build

Any project you publish to Cloudflare Workers can make use of modern JS tooling like ES modules, NPM packages, and async/await functions to put together your application. In addition, simple serverless functions aren’t the only thing you can publish on Cloudflare Workers: you can build full applications using the same tooling and process as what we’ll be building today.

The QR code generator we’ll build in this tutorial will be a serverless function that runs at a single route and receives requests. Given text sent inside of that request (such as URLs, or strings), the function will encode the text into a QR code, and serve the QR code as a PNG response.

Handling requests

Currently, our Workers function receives requests, and returns a simple response with the text “Hello worker!”. To handle data coming in to our serverless function, check if the incoming request is a POST:

async function handleRequest(request) {
  if (request.method === 'POST') {
    return new Response('Hello worker!', { status: 200 })
  }
}

Currently, if an incoming request isn’t a POST, response will be undefined. Since we only care about incoming POST requests, populate response with a new Response with a 500 status code if the incoming request isn’t a POST:

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = new Response('Hello worker!', { status: 200 })
  } else {
    response = new Response('Expected POST', { status: 500 })
  }
  return response
}

With the basic flow of handleRequest established, it’s time to think about how to handle incoming valid requests: if a POST request comes in, the function should generate a QR code. To start, move the “Hello worker!” response into a new function, generate, which will ultimately contain the bulk of our function’s logic:

const generate = async request => {
  return new Response('Hello worker!', { status: 200 })
}

async function handleRequest(request) {
  // ...
  if (request.method === 'POST') {
    response = await generate(request)
  // ...
}

Building a QR Code

All projects deployed to Cloudflare Workers support NPM packages, which makes it incredibly easy to rapidly build out a lot of functionality in your serverless functions. The qr-image package is a great way to take text, and encode it into a QR code, with support for generating the codes in a number of file formats (such as PNG, the default, and SVG), and configuring other aspects of the generated QR code. In the command-line, install and save qr-image to your project’s package.json:

npm install --save qr-image

In index.js, require the qr-image package as the variable qr. In the generate function, parse the incoming request as JSON using request.json, and use the text to generate a QR code using qr.imageSync:

const qr = require('qr-image')

const generate = async request => {
  const body = await request.json()
  const text = body.text
  const qr_png = qr.imageSync(text || 'https://workers.dev')
}

By default, the QR code is generated as a PNG. Construct a new instance of Response, passing in the PNG data as the body, and a Content-Type header of image/png: this will allow browsers to properly parse the data coming back from your serverless function, as an image:

const generate = async request => {
  // ...
  return new Response(qr_png, { headers })
}

With the generate function filled out, we can simply wait for the generation to finish in handleRequest, and return it to the client as response:

async function handleRequest(request) {
  // ...
  if (request.method === 'POST') {
    response = await generate(request)
  // ...
}

Testing In a UI

The serverless function will work if a user sends a POST request to a route, but it would be great to also be able to test it with a proper interface. At the moment, if any request is received by your function that isn’t a POST, a 500 response is returned. The new version of handleRequest should return a new Response with a static HTML body, instead of the 500 error:

const landing = `
<h1>QR Generator</h1>
<p>Click the below button to generate a new QR code. This will make a request to your serverless function.</p>
<input type="text" id="text" value="https://workers.dev"></input>
<button onclick='generate()'>Generate QR Code</button>
<p>Check the "Network" tab in your browser's developer tools to see the generated QR code.</p>
<script>
  function generate() {
    fetch(window.location.pathname, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: document.querySelector("#text").value })
    })
  }
</script>
`

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = await generate(request)
  } else {
    response = new Response(landing, { headers: { 'Content-Type': 'text/html' } })
  }
  return response
}

The landing variable, which is a static HTML string, sets up an input tag and a corresponding button which calls the generate function. This function makes an HTTP POST request back to your serverless function, allowing you to see the corresponding QR code image data inside of your browser’s network inspector.

With that, your serverless function is complete! The full version of the code looks like this:

const qr = require('qr-image')

const generate = async request => {
  const { text } = await request.json()
  const headers = { 'Content-Type': 'image/png' }
  const qr_png = qr.imageSync(text || 'https://workers.dev')
  return new Response(qr_png, { headers })
}

const landing = `
<h1>QR Generator</h1>
<p>Click the below button to generate a new QR code. This will make a request to your serverless function.</p>
<input type="text" id="text" value="https://workers.dev"></input>
<button onclick='generate()'>Generate QR Code</button>
<p>Check the "Network" tab in your browser's developer tools to see the generated QR code.</p>
<script>
  function generate() {
    fetch(window.location.pathname, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: document.querySelector("#text").value })
    })
  }
</script>
`

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = await generate(request)
  } else {
    response = new Response(landing, { headers: { 'Content-Type': 'text/html' } })
  }
  return response
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

Publish

And with that, you’re finished writing the code for the QR code serverless function, on Cloudflare Workers!

Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, we’ll run wrangler publish, which will build and publish your code:

[Image: output of the wrangler publish command]

Resources

In this tutorial, you built and published a serverless function to Cloudflare Workers for generating QR codes. If you’d like to see the full source code for this application, you can find it on GitHub.

If you enjoyed this tutorial, we encourage you to explore our other tutorials for building on Cloudflare Workers:

If you want to get started building your own projects, check out the quick-start templates we’ve provided in our Template Gallery.

DevOps: The Journey So Far and What Lies Ahead

If you have been in the IT industry for over 10 years, I am sure you have seen the evolution and massive transformation DevOps has brought in as organizations continue to shift from optimizing for cost to optimizing for speed, and that shift is picking up pace exponentially as DevOps adoption grows. Today, when I look at life before and after DevOps, I can easily see that some of the terminology of our daily work has changed, primarily because of DevOps adoption.

a) Manual => Automated

b) Physical Datacenter => Virtual Private Cloud

c) Outages => High Availability/Zero downtime

d) Enterprise/Web archives => Containers

and the list goes on and on…

DevOps, which started as a buzzword, is now becoming a standard for every organization in order to meet the demands of time to market and release better products to stay ahead of the competition, and that’s the reason big names like Google, Netflix, Amazon, and Facebook are heavily investing in it and have experienced the value coming out of it.

So, what exactly has DevOps changed?

No More Working in Silos

DevOps has improved the software development culture and mindset. Welcoming new changes, a blameless culture, transparency, accountability, embracing failure, and the right collaboration and communication between different teams are some of the keys organizations have used to successfully unlock a DevOps culture.

Time to Market

DevOps enables organizations to develop and deploy software faster and more efficiently through an end-to-end automated and integrated process built on CI/CD pipelines. Continuous Delivery allows developers to continuously roll out tested code that is always in a production-ready state and can be released to production based on business approval. As soon as a new feature or story is complete, the code is immediately available for deployment to a test environment, UAT, or production.

DevOps-as-Code

Over the last few years, there has been tremendous development in how automation is done. Pipeline-as-Code to automate CI/CD pipelines, Config-as-Code to manage configuration and orchestration tasks, and Infrastructure-as-Code to automate environment provisioning are all gaining momentum. Languages like Groovy and Python, where core OOP concepts are required, are being used for most of this automation. Unit test cases are also written for every piece of automation to validate the code, similar to how application developers test theirs. This has enabled and encouraged many application developers to understand DevOps workflows holistically, gain expertise, and contribute to something that was earlier a mere black box for them.

Software Killed Hardware

Gone are the days when sysadmins were heavily involved in receiving new hardware and then setting up and configuring new servers, each with a custom configuration. Today the servers, network, firewalls, load balancers, and everything else are virtual, living somewhere at Amazon or Google or Microsoft. Today, you write software to provision, manage and decommission infrastructure. Upgrading to a new server, adding identical servers, and securing infrastructure are all driven by software.

Containers and Microservices to Maximize Deployment Velocity

Microservices have given developers the freedom to make changes to one service, create a Docker image, and deploy it independently without impacting other services in the system. If there is an issue in any service, it can easily be isolated to that single service so that a fast rollback can be made. This speed of deployment with minimal risk is the primary reason why organizations like Netflix and Amazon have adopted microservices-based architectures, ensuring they eliminate as many bottlenecks as possible when releasing the application to end users. Platform-as-a-Service tools like Amazon ECS, Google Kubernetes Engine, and Red Hat OpenShift have helped enterprises adopt microservices architecture and migrate their existing applications to dockerized microservices in production.

Cloud Encourages the Birth of Many Startup Businesses

Cloud computing provides an added advantage to start-up businesses. Businesses previously needed significant time and money for housing, powering, and cooling infrastructure. With the cloud, there are limited upfront capital costs, as it provides the ability to match revenue with expenses since you pay only for the resources you use. For a festive season or other cases of peak traffic, you can easily scale your infrastructure up and down.

DevOps Predictions

With technology evolving at a rapid pace, DevOps will continue to gain momentum and break barriers. Here are some high-level predictions of what lies ahead in the DevOps world.

Cloud Migration

Cloud adoption will continue to evolve. New startup businesses are already adopting the cloud for hosting their applications. There has been a substantial increase in the big enterprise giants migrating their physical data centres to the cloud, and this trend will continue to gain momentum as a matter of survival.

Continuous Deployment Is So Close, Yet So Far

While continuous integration and continuous delivery practices have enabled organizations to release new features to the market in the most efficient way, there are hardly any buyers who want to adopt continuous deployment. Product companies like Amazon and Netflix are making frequent strides with continuous deployment, but financial firms are still focused on having a robust application and infrastructure, with performance and security as their primary concerns, and prefer to release features to market using a manual trigger.

Serverless Computing

Renowned training company A Cloud Guru runs its application on a serverless architecture. There are no infrastructure costs to pay, as they pay based on the number of visits to their course content. This allows them to offer their courses cheaply, which also gives them a competitive advantage over their competitors. AWS Lambda, Google Cloud Functions and Azure Functions are all examples of Function-as-a-Service platforms that support serverless architecture. The only concern many IT leaders share is the fear of vendor lock-in. Choosing a cross-vendor programming language and adopting standardized services over the fully managed services provided out of the box by the cloud provider are two ways to reduce that fear when adopting a serverless architecture.
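
To give a concrete flavour of what Function-as-a-Service means in practice, here is a minimal sketch of an HTTP-triggered function on one of those platforms, Azure Functions, using the in-process C# model; the function name and greeting are invented for the example, and the usual project tooling (Microsoft.NET.Sdk.Functions) is assumed:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HelloFunction
{
    // The platform provisions, scales, and bills this per execution - there is no server to manage.
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
    {
        string name = req.Query["name"];
        return new OkObjectResult($"Hello, {(string.IsNullOrEmpty(name) ? "world" : name)}!");
    }
}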

DevSecOps 

Security is going to play a major role as more and more applications are migrated to the cloud. Whether it’s an application, a VM, a container or an entire network, you need to understand the entire process and lifecycle and ensure there is no corner left vulnerable for an outsider to hack into your application or system. This can be achieved only when you integrate all flavours of security testing into your DevOps process.

SRE for Service Management

In general, an SRE team’s responsibility is to ensure the service is available all the time, along with the application health monitoring and emergency response that has traditionally been done by the Ops team. However, this is changing, and organizations are looking for engineers who can code as well as take care of ops. For example, Google has put a cap of 50% on the overall ops work for all SREs; in the remaining 50% of their time, SREs are actually doing development. They found this model has many advantages, as SREs are directly modifying code, building and supporting the system, and bridging the gap with the product development teams through cross-training when new features are released.

Cognitive DevOps

Cognitive DevOps will excel at developing automated systems capable of resolving problems and providing solutions without any human intervention. It uses machine learning algorithms that help deal with the real-time challenges faced in DevOps by gathering and analyzing data across different environments, which eventually leads to smooth and error-free releases. IT operations analytics, network performance analytics, security analytics, application performance management, digital performance management, and algorithmic IT operations are some of the key areas vendors are targeting to implement cognitive operations in the journey from DevOps to NoOps.

I would like to summarize this blog by stating that technology is changing at a very rapid pace and DevOps is fueling the demand with its workflows, tools, and practices. With so much already achieved in the last decade and so much to achieve in years to come, the DevOps journey ahead will be exciting and full of surprises.

Which Country Has The Best Programming Language Programmer?

Programming languages are at the heart of every technological innovation. Therefore, a country with the best computer programmers can be considered technologically advanced in today's world. Comparing countries to determine which of them has the best computer programmers is slightly complicated, as different countries have different population sizes. Luckily, HackerRank makes it easy with its own set of metrics to measure the excellence of programmers from different countries. According to HackerRank, the following is the list of the top 10 countries with the best computer programmers.
 
1. China - The reason for China occupying the top position is not its population. The metrics for evaluating the best computer programmers are speed and accuracy. HackerRank holds special challenges on its website annually to determine the best programmers by country. The challenges focus on coding skills, data structures and algorithmic concepts, mathematical and analytical skills, and functional programming. The participants from China have outshone all other countries collectively.
2. Russia - It is said that Russia has the best hackers in the world, and the world has allegedly seen their hacking skills. To be a hacker, you need to be a programmer of the topmost level. Russian programmers scored 99.9 where Chinese programmers scored full marks. However, they have come out ahead of China in algorithms.
 
3. Poland - This can come as a surprise to many, as Poland is not known as a country with many multinational tech companies. However, if you know how good the education system is in Poland, you will not wonder why they have managed to rank so high. Computer programming is taught in the lower classes in schools, so by the time students leave high school, they have mastered programming languages like Java and Python. This is also reflected in the fact that Polish programmers have won the Java challenge on HackerRank ahead of all other countries.
 
4. Switzerland - Switzerland is home to the headquarters of multiple international tech companies. In fact, Swiss computer programmers are among the most dominant on the scoreboard of HackerRank challenges. It is interesting to note that Switzerland is where Pascal, one of the foremost programming languages, came from. Besides, Switzerland is among the leading countries in the Global Innovation Index.
 
5. Hungary - The Hungarian government has introduced programming classes in primary and secondary schools, so students are groomed to be programmers from childhood. They have the best performance in tutorial challenges on HackerRank. It is somewhat surprising to many that, among various other technologically advanced European countries, Hungary is in the top 5. It is all about the education system and grooming from an early stage.
6. Japan - Japan is now known as the country of cryptocurrency. The revolutionary blockchain technology originated there, according to the rankings' authors, and is now ruling the world. In fact, according to HackerRank challenges, Japan is the leader in artificial intelligence. This only shows the intelligence and skill set of Japanese computer programmers. Japan has transformed in the last decade and is labeled as one of the leaders in innovation.
 
7. Taiwan - Taiwan and China go hand in hand, and Taiwan is considered to be one of the most technologically advanced countries. They are super fast in adapting to new programming languages, and according to a survey, Python is the most dominant language in the country. On HackerRank, computer programmers from Taiwan are among the leaders in algorithms, data structure, and functional programming challenges. The programmers are all-rounders, and it is this all-around growth that is accelerating the country to new heights in the technological field.
 
8. France - The French government made major changes to the education system to inspire students to become computer programmers. Just like Poland, they have offered programming classes in elementary schools since 2014, and the result is here to see. Their ranking improves every year, and they are climbing the HackerRank board faster than most countries.
 
9. Czech Republic - According to HackerRank, the Czech Republic has the most dominant computer programmers in shell scripting, as proved through several challenges. The programmers also rank second in the mathematical challenges, which reflects their skill in functional programming.
 
10. Italy - Italy is slowly but steadily becoming one of the emerging countries in computer programming. Big companies are investing heavily in Italy to bag the top programmers in the country. Apple announced a new school for nearly 1000 programmers in Italy. The programmers from the country have performed exceedingly well on HackerRank in database and tutorial challenges.

Some of you might be surprised to find that the US and India do not feature among the top 10 countries. India ranks 31st while the US ranks 13th as per the HackerRank ranking based on challenges organized on the website.
 

 

Source: HOB

 

 

Cutting Edge - REST and Web API in ASP.NET Core

I’ve never been a fan of ASP.NET Web API as a standalone framework and I can hardly think of a project where I used it. Not that the framework in itself is out of place or unnecessary; I just find that the business value it actually delivers is, most of the time, minimal. On the other hand, I recognize in it some clear signs of the underlying effort Microsoft has been making to renew the ASP.NET runtime pipeline. Overall, I like to think of ASP.NET Web API as a proof of concept for what today has become ASP.NET Core and, specifically, the new runtime environment of ASP.NET Core.

Web API was primarily introduced as a way to make building a RESTful API easy and comfortable in ASP.NET. This article is about how to achieve the same result—building a RESTful API—in ASP.NET Core.

The Extra Costs of Web API in Classic ASP.NET

ASP.NET Web API was built around the principles sustaining the Open Web Interface for .NET (OWIN) specification, which is meant to decouple the Web server from hosted Web applications. In the .NET space, the introduction of OWIN marked a turning point, where the tight integration of IIS and ASP.NET was questioned. That tight coupling was fully abandoned in ASP.NET Core.

Any Web façade built using the ASP.NET Web API framework relies on a completely rewritten pipeline that uses the standard OWIN interface to dialog with the underlying host Web server. Yet, an ASP.NET Web API is not a standalone application. To be available for callers it needs a host environment that takes care of listening to some configured port, captures incoming requests and dispatches them down the Web API pipeline.

A Web API application can be hosted in a Windows service or in a custom console application that implements the appropriate OWIN interfaces. It can also be hosted by a classic ASP.NET application, whether targeting Web Forms or ASP.NET MVC. Over the past few years, hosting Web API within a classic ASP.NET MVC application proved to be a very common scenario, yet one of the least effective in terms of raw performance and memory footprint.

As Figure 1 shows, whenever you arrange a Web API façade within an ASP.NET MVC application, three frameworks end up living side-by-side, processing every single Web API request. The host ASP.NET MVC application is encapsulated in an HTTP handler living on top of system.web—the original ASP.NET runtime environment. On top of that—taking up additional memory—you have the OWIN-based pipeline of Web API.

Figure 1 Frameworks Involved in a Classic ASP.NET Web API Application

The vision of introducing a server-independent Web framework is, in this case, significantly weakened by the constraints of staying compatible with the existing ASP.NET pipeline. Therefore, the clean and REST-friendly design of Web API doesn’t unleash its full potential because of the legacy system.web assembly. From a pure performance perspective, only some edge use cases really justify the use of Web API.

Effective Use Cases for Web API

Web API is the most high-profile example of the OWIN principles in action. A Web API library runs behind a server application that captures and forwards incoming requests. This host can be a classic Web application on the Microsoft stack (Web Forms, ASP.NET MVC) or it can be a console application or a Windows service.

In any case, it has to be an application endowed with a thin layer of code capable of dialoging with the Web API listener.

Hosting a Web API outside of the Web environment removes at the root any dependency on the system.web assembly, thus magically making the request pipeline as lean and mean as desired.

This is the crucial point that led the ASP.NET Core team to build the ASP.NET Core pipeline. The ideal hosting conditions for Web API have been reworked to be the ideal hosting conditions for just about any ASP.NET Core application. This enabled a completely new pipeline devoid of dependencies on the system.web assembly and hostable behind an embedded HTTP server exposing a contracted interface—the IServer interface.

The OWIN specification and Katana, the implementation of it for the IIS/ASP.NET environment, play no role in ASP.NET Core. But the experience with these platforms matured the technical vision (especially with Web API edge cases), which shines through the dazzling new pipeline of ASP.NET Core.

The funny thing is that once the entire ASP.NET pipeline was redesigned—deeply inspired by the ideal hosting environment for Web API—that same Web API as a separate framework ceased to be relevant. In the new ASP.NET Core pipeline there’s the need for just one application model—the MVC application model—based on controllers, and controller classes are a bit richer than in classic ASP.NET MVC, thus incorporating the functions of old ASP.NET controllers and Web API controllers.

Extended ASP.NET Core Controllers

In ASP.NET Core, you work with controller classes whether you intend to serve HTML or any other type of response, such as JSON or PDF. A bunch of new action result types have been added to make building RESTful interfaces easy and convenient. Content negotiation is fully supported for any controller classes, and formatting helpers have been baked into the action invoker infrastructure. If you want to build a Web API that exposes HTTP endpoints, all you do is build a plain controller class, as shown here:

 
public class ApiController : Controller
{
  // Your methods here
}

The name of the controller class is arbitrary. While having /api somewhere in the URL is desirable for clarity, it’s in no way required. You can have /api in the URL being invoked both if you use conventional routing (an ApiController class) to map URLs to action methods, or if you use attribute routing. In my personal opinion, attribute routing is probably preferable because it allows you to expose multiple endpoints with the same /api item in the URL, while being defined in distinct, arbitrarily named controller classes.
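
To make that concrete, here’s a sketch of two arbitrarily named controllers that both expose endpoints under /api through attribute routing; the controller names and routes are invented for the example:

[Route("api/orders")]
public class OrderServicesController : Controller
{
    [HttpGet("{id}")]
    public IActionResult Get(Guid id)
    {
        // Look the order up in whatever way suits your application.
        return Ok();
    }
}

[Route("api/customers")]
public class CustomerServicesController : Controller
{
    [HttpGet("{id}")]
    public IActionResult Get(Guid id)
    {
        return Ok();
    }
}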

The Controller class in ASP.NET Core has a lot more features than the class in classic ASP.NET MVC, and most of the extensions relate to building a RESTful Web API. First and foremost, all ASP.NET Core controllers support content negotiation. Content negotiation refers to a silent negotiation taking place between the caller and the API regarding the actual format of returned data.

Content negotiation doesn’t happen all the time and for just every request. It takes place only if the incoming request contains an Accept HTTP header that advertises the MIME types the caller is able to understand. In this case, the ASP.NET Core infrastructure goes through the types listed in the header content until it finds one for which a formatter exists in the current configuration of the application. If no matching formatter is found in the list of types, then the default JSON formatter is used, like so:

 
[HttpGet]
public ObjectResult Get(Guid id)
{
  // Do something here to retrieve the resource data
  var data = FindResourceDataInSomeWay(id);
  return Ok(data);
}
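
On the caller’s side, negotiation is driven entirely by the Accept header. As a rough sketch (the endpoint URL is just a placeholder), a client using HttpClient could ask for XML like this, and it would receive XML only if an XML formatter is registered, falling back to JSON otherwise:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static async Task<string> GetResourceAsXmlAsync()
{
  var client = new HttpClient();
  client.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue("application/xml"));

  // Placeholder URL for one of your API endpoints
  var response = await client.GetAsync("http://localhost:5000/api/resource/123");
  return await response.Content.ReadAsStringAsync();
}

To actually serve XML you would also register the XML formatters, for instance with services.AddMvc().AddXmlSerializerFormatters().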

Another remarkable aspect of content negotiation is that while it won’t produce any change in the serialization process without an Accept HTTP header, it’s technically triggered only if the response being sent back by the controller is of type ObjectResult. The most common way to return an ObjectResult action result type is by serializing the response via the Ok method. It’s important to note that if you serialize the controller response via, say, the Json method, no negotiation will ever take place regardless of the headers sent. Support for output formatters can be added programmatically through the options of the AddMvc method. Here’s an example:

 
services.AddMvc(options =>
{
  options.OutputFormatters.Add(new PdfFormatter());
});

In this example, the demo class PdfFormatter internally defines the list of MIME types it can handle.
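
The PdfFormatter class itself isn’t shown here, but a minimal sketch of what such a custom output formatter might look like is below; the actual PDF generation is omitted and the class name is only illustrative:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Formatters;
using Microsoft.Net.Http.Headers;

public class PdfFormatter : OutputFormatter
{
  public PdfFormatter()
  {
    // Advertise the MIME type(s) this formatter can handle
    SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("application/pdf"));
  }

  protected override bool CanWriteType(Type type)
  {
    // Decide which CLR types this formatter is willing to serialize
    return type != null;
  }

  public override Task WriteResponseBodyAsync(OutputFormatterWriteContext context)
  {
    // Render context.Object as PDF and write it to
    // context.HttpContext.Response.Body (omitted in this sketch)
    return Task.CompletedTask;
  }
}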

Note that by using the Produces attribute you override the content negotiation, as shown here:

 
[Produces("application/json")]
public class ApiController : Controller
{
  // Action methods here
}

The Produces attribute, which you can apply at the controller or method level, forces the output of type ObjectResult to be always serialized in the format specified by the attribute, regardless of the Accept HTTP header.
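
At the method level the attribute works the same way; here’s a minimal sketch that reuses the Get action shown earlier:

[HttpGet]
[Produces("application/json")]
public ObjectResult Get(Guid id)
{
  // The ObjectResult returned by Ok is always serialized as JSON here,
  // whatever the Accept header says
  var data = FindResourceDataInSomeWay(id);
  return Ok(data);
}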

For more information on how to format the response of a controller method, you might want to check out the content at bit.ly/2klDgdY.

REST-Oriented Action Result Types

Whether a Web API is better off with a REST design is a highly debatable point. In general, it’s safe enough to say that the REST approach is based on a known set of rules and, in this regard, it is more standard. For this reason, it’s generally recommended for a public API that’s part of the enterprise business. If the API exists only to serve a limited number of clients, mostly under the same control as the API creators, then there’s no real business difference between a REST design and a looser remote-procedure call (RPC) approach.

In ASP.NET Core, there’s nothing like a distinct and dedicated Web API framework. There are just controllers with their set of action results and helper methods. If you want to build a Web API, you just return JSON, XML or whatever other format you need. If you want to build a RESTful API, you just get familiar with another set of action results and helper methods. Figure 2 presents the new action result types that ASP.NET Core controllers can return. In ASP.NET Core, an action result type is a type that implements the IActionResult interface.

Figure 2 Web API-Related Action Result Types  

Type: Description
AcceptedResult: Returns a 202 status code. In addition, it returns the URI to check on the ongoing status of the request. The URI is stored in the Location header.
BadRequestResult: Returns a 400 status code.
CreatedResult: Returns a 201 status code. In addition, it returns the URI of the resource created, stored in the Location header.
NoContentResult: Returns a 204 status code and null content.
OkResult: Returns a 200 status code.
UnsupportedMediaTypeResult: Returns a 415 status code.


Note that some of the types in Figure 2 come with buddy types that provide the same core function but with some slight differences. For example, in addition to AcceptedResult and CreatedResult, you find xxxAtActionResult and xxxAtRouteResult types. The difference is in how the types express the URI to monitor the status of the accepted operation and the location of the resource just created. The xxxAtActionResult type expresses the URI as a pair of controller and action strings whereas the xxxAtRouteResult type uses a route name.
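
As a short sketch of the difference, reusing the hypothetical MyResource type and CreateResourceInSomeWay helper from Figure 3 below, and assuming a Get action and a route named "GetResource" exist:

[HttpPost]
public IActionResult AddResource(MyResource res)
{
  var resId = CreateResourceInSomeWay(res);

  // Location header built from a controller/action pair
  return CreatedAtAction(nameof(Get), new { id = resId }, res);

  // Or: Location header built from a named route
  // return CreatedAtRoute("GetResource", new { id = resId }, res);
}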

OkResult and BadRequestResult, instead, have an xxxObjectResult variation. The difference is that object result types also let you append an object to the response. So OkResult just sets a 200 status code, but OkObjectResult sets a 200 status code and appends an object of your choice. A common way to use this feature is to return a ModelState dictionary updated with the detected error when a bad request is handled.
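
For instance, a short illustrative sketch contrasting the plain and object-carrying variants, again reusing the hypothetical helpers from Figure 3 below:

[HttpPut]
public IActionResult UpdateResource(Guid id, string content)
{
  if (!ModelState.IsValid)
  {
    // BadRequestObjectResult: 400 status code plus the ModelState dictionary as the body
    return BadRequest(ModelState);
  }

  // OkObjectResult: 200 status code plus the updated resource;
  // a bare Ok() would return the 200 status code alone
  var res = UpdateResourceInSomeWay(id, content);
  return Ok(res);
}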

Another interesting distinction is between NoContentResult and EmptyResult. Both return an empty response, but NoContentResult sets a status code of 204, whereas EmptyResult sets a 200 status code. All this said, building a RESTful API is a matter of defining the resource being acted on and arranging a set of calls using the HTTP verb to perform common manipulation operations. You use GET to read, PUT to update, POST to create a new resource and DELETE to remove an existing one. Figure 3 shows the skeleton of a RESTful interface around a sample resource type as it results from ASP.NET Core classes.

Figure 3 Common RESTful Skeleton of Code
 
[HttpGet]
public ObjectResult Get(Guid id)
{
  // Do something here to retrieve the resource
  var res = FindResourceInSomeWay(id);
  return Ok(res);
}
[HttpPut]
public AcceptedResult UpdateResource(Guid id, string content)
{
  // Do something here to update the resource
  var res = UpdateResourceInSomeWay(id, content);
  var path = String.Format("/api/resource/{0}", res.Id);
  return Accepted(new Uri(path));  
}
[HttpPost]
public CreatedResult AddNews(MyResource res)
{
  // Do something here to create the resource
  var resId = CreateResourceInSomeWay(res);
  // Returns HTTP 201 and sets the URI to the Location header
  var path = String.Format("/api/resource/{0}", resId);
  return Created(path, res);
}
[HttpDelete]
public NoContentResult DeleteResource(Guid id)
{
  // Do something here to delete the resource
  // ...
  return NoContent();
}

If you’re interested in further exploring the implementation of ASP.NET Core controllers for building a Web API, have a look at the GitHub folder at bit.ly/2j4nyUe.

Wrapping Up

A Web API is a common element in most applications today. It’s used to provide data to an Angular or MVC front end, as well as to provide services to mobile or desktop applications. In the context of ASP.NET Core, the term “Web API” finally achieves its real meaning without ambiguity or need to further explain its contours. A Web API is a programmatic interface comprising a number of publicly exposed HTTP endpoints that typically (but not necessarily) return JSON or XML data to callers. The controller infrastructure in ASP.NET Core fully supports this vision with a revamped implementation and new action result types. Building a RESTful API in ASP.NET has never been easier!

5 tools for programmers to increase productivity

 

Programming complex code is undoubtedly a difficult task. Programmers often rely on certain online tools to make life easier and achieve speed and accuracy. These tools allow developers to create, test and debug the software. 

With constant technological advancements, developers are looking to enhance their productivity and stay updated with the evolving skill requirements. Here are some tools that programmers must explore to be more productive. 

#1. GitKraken

Quoting from their website, “Axosoft GitKraken is a cross-platform Git client with efficiency, elegance and reliability at the core. It is made for developers by developers”. 

GitKraken is known for its user-friendly interface, easy switching between projects and graphical interface which helps developers to visualize project branches effectively. 

#2. Visual Studio Code

Promising a frictionless edit-build-debug cycle, VS Code ensures high productivity with syntax highlighting, bracket matching, box selection and more. Additionally, it supports a wide variety of languages. For debugging, VS Code provides an interactive debugger to inspect code and execute commands.

#3. Docker

Docker is an open source tool which enables developers to create, deploy and run applications using containers. Containers ensure the application runs consistently on any Linux machine, regardless of how that machine is customized. Because an application is shipped only with the pieces not already present on the host, images stay small and performance improves.

#4. Chrome DevTools 

Chrome DevTools is a set of tools built directly into the Google Chrome browser. Websites can be designed better and faster using this tool as it allows the developers to edit pages on-the-go and rectify problems swiftly. It caters to the needs of both beginners and experts by teaching the basics as well as performing higher-level operations like optimizing website speed. 

#5. Postman 

Through design, testing and full production, Postman simplifies API development and helps developers stay productive. Developers can create automated tests to monitor their API and examine responses for debugging, among other functions. With almost 6 million users, Postman is a widely used productivity tool within the developer community.

25 basic Linux terminal commands to remember

On Linux, the command-line is a powerful tool. Once you understand how to use it, it’s possible to accomplish a whole lot of advanced operations really fast. Sadly, new users find the Linux command-line confusing, and don’t know where to start.

In an effort to educate new users on the Linux command-line, we’ve made a list of 25 basic Linux terminal commands to remember. Let’s get started!

1. ls

ls is the list directory command. In order to use it, launch a terminal window and type the command ls.

ls

The ls command can also be used to reveal hidden files with the “a” command line switch.

ls -a

2. cd

cd is how you change directories in the terminal. To swap to a different directory from where the terminal started, do:

cd /path/to/location/

It is also possible to go up one directory by using "..".

cd ..

3. pwd

To show the current directory in the Linux terminal, use the pwd command.

pwd

4. mkdir

If you’d like to create a new folder, use the mkdir command.

mkdir name-of-new-folder

To create any missing parent directories in the path at the same time, use the "p" command line switch.

mkdir -p name-of-new-folder

5. rm

To delete a file from the command line, use the rm command.

rm /path/to/file

rm can also be used to delete a folder if there are files inside of it by making use of the “rf” command line switch.

rm -rf /path/to/folder

6. cp

Want to make a copy of a file or folder? Use the cp command.

To copy a file, use cp followed by the location of the file and the destination to copy it to.

cp /path/to/file /path/to/destination

Or, to copy a folder, use cp with the "r" command line switch and give it the folder along with a destination:

cp -r /path/to/folder /path/to/destination

7. mv

The mv command can do a lot of things on Linux. It can move files around to different locations, but it can also rename files.

To move a file from one location to another, try the following example.

mv /path/to/file /place/to/put/file

If you want to move a folder, write the location of the folder followed by the desired location where you’d like to move it.

mv /path/to/folder /place/to/put/folder/

Lastly, to rename a file or folder, cd into the directory of the file/folder you’d like to rename, and then use the mv command, for example:

mv name-of-file new-name-of-file

Or, for a folder, do:

mv name-of-folder new-name-of-folder

8. cat

The cat command lets you view the contents of files in the terminal. To use cat, write the command out followed by the location of the file you’d like to view. For example:

cat /location/of/file

9. head

Head lets you view the top 10 lines of a file. To use it, enter the head command followed by the location of the file.

head /location/of/file

10. tail

Tail lets you view the bottom 10 lines of a file. To use it, enter the tail command followed by the location of the file.

tail /location/of/file

11. ping

On Linux, the ping command lets you check the latency between your network and a remote internet or LAN server.

ping website.com

Or

ping IP-address

To ping only a few times, use the ping command followed by the “c” command line switch and a number. For example, to ping Google 3 times, do:

ping google.com -c3

12. uptime

To check how long your Linux system has been online, use the uptime command.

uptime

13. uname

The uname command can be used to view your current distribution codename, release number, and even the version of Linux you are using. To use uname, write the command followed by the “a” command line switch.

Using the "a" command line switch prints out all of the available information, so it’s usually best to use it instead of the more specific options.

uname -a

14. man

The man command lets you view the instruction manual of any program. To take a look at the manual, run the man command followed by the name of the program. For example, to view the manual of cat, run:

man cat

15. df

Df is a way to easily view how much space is taken up on the file system(s) on Linux. To use it, write the df command.

df

To make df more easily readable, use the “h” command line switch. This puts the output in “human readable” mode.

df -h

16. du

Need to view the space that a directory on your system is taking up? Make use of the du command. For example, to see how big your /home/ folder is, do:

du ~/

To make the du output more readable, use the "h" command-line switch. This will put the output in "human readable" mode.

du -h ~/

17. whereis

With whereis, it’s possible to track down the exact location of an item in the command-line. For example, to find the location of the Firefox binary on your Linux system, run:

whereis firefox

18. locate

Searching for files, programs and folders on the Linux command-line is made easy with locate. To use it, just write out the locate command, followed by a search term.

locate search-term

19. grep

With the grep command, it’s possible to search for a pattern. A good example use of the grep command is to use it to filter out a specific line of text in a file.

Grep is rarely run by itself. Instead, it’s usually combined with another command through a pipe, like so:

cat text-file.txt | grep 'search term'

Essentially, to use grep to search for patterns, remember this formula:

command command-operations | grep 'search term'

20. ps

To view current running processes directly from the Linux terminal, make use of the ps command.

ps

Need a more full, detailed report of processes? Run ps with aux.

ps aux

21. kill

Sometimes, you need to kill a problem program. To do this, you’ll need to take advantage of the kill command. For example, to close Firefox, do the following.

First, use pidof to find the process number for Firefox.

pidof firefox

Then, kill it with the kill command.

kill process-id-number

Still won’t close? Use the “9” command-line switch.

kill -9 process-id-number

22. killall

Using the killall command, it’s possible to end all instances of a running program. To use it, run the killall command followed by the name of a program. For example, to kill all running Firefox processes, do:

killall firefox

23. curl

Need to download a file from the internet through the Linux terminal? Use curl! To start a download, write the curl command followed by the file’s URL, the > redirect symbol and the location where you’d like to save it. For example:

curl https://www.download.com/file.zip > ~/Downloads/file.zip

24. free

Running out of memory? Check your swap space and free RAM space with the free command.

free

25. chmod

With chmod, it’s possible to update the permissions of a file or folder.

To update the permissions of a file so everyone on the PC can read, write and execute it, do:

chmod a+rwx /location/of/file-or-folder

To update the permissions so only the owner has access, removing all access for the group and everyone else, try:

chmod u+rwx,go-rwx /location/of/file-or-folder

To grant read and execute permissions to the file’s group and to everyone else (the "world"), run:

chmod go+rx /location/of/file-or-folder

Conclusion

The Linux command-line has endless actions and operations to know, and even after getting through this list, you’ll still have a lot more to learn. That said, this list is sure to help beef up your command-line knowledge. Besides, everyone has to start somewhere!
