
Four Ways of Caching Node.js APIs for High Performance

For an API to be effective, it needs high performance and low latency so that users receive responses quickly. A slow API degrades the user experience of every application that relies on it, which is why caching is an essential optimization technique for your Node.js API. Caching stores frequently accessed data in a temporary location, drastically improving response times. This tutorial discusses four ways to cache Node.js APIs for high performance.

There are four techniques that we will discuss:

  • in-memory caching
  • distributed caching
  • content delivery networks (CDNs)
  • the cache-aside pattern

Let’s get started!

In-Memory Caching

In-memory caching is a high-performance technique that stores frequently accessed data directly in memory, allowing rapid data retrieval without querying the database or an external service on every request. In Node.js, two popular choices for in-memory caching are Redis and Memcached.

Example of In-Memory Caching

Below is an example of caching with Redis, performing a simple set and get operation:

// set up the Redis client (node-redis v4+, which uses a promise-based API)
const redis = require("redis");
const client = redis.createClient();

// Log any connection errors
client.on("error", (err) => console.error("Redis client error:", err));

async function run() {
  // Establish the connection before issuing commands
  await client.connect();
  console.log("Connected to Redis");

  // store a key-value pair
  const setReply = await client.set("key", "value");
  console.log(setReply); // Output: OK

  // retrieve the data from the cache
  const value = await client.get("key");
  console.log(value); // Output: value

  // Close the connection to free up resources
  await client.quit();
}

run();

In the code above, we first import the Redis module and create a Redis client. By default, the client connects to a Redis server running on 127.0.0.1 (localhost) and port 6379. We register an error listener, then call client.connect() and wait for the connection to be established before proceeding. Once connected, we perform a set operation to store a key-value pair (key and value) on the Redis server.

This operation is asynchronous, so we await its result. If the operation is successful, Redis replies with OK, and we then perform a get operation to retrieve the value associated with the key we just set, logging the retrieved value (value). Finally, we close the connection to the Redis server using client.quit() to free up resources.

This method is particularly beneficial for applications that require quick access to frequently used data. The primary advantage of in-memory caching is its speed, as accessing data from RAM is significantly faster than retrieving it from a database or an external API.
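Note that cached entries should usually be given an expiration so stale data doesn't sit in memory indefinitely. As a minimal sketch, reusing the connected client from the example above (the key name and TTL here are illustrative assumptions), node-redis lets you attach a time-to-live when setting a value:

// inside an async function, with the client already connected
// store a value that Redis evicts automatically after 60 seconds
await client.set("user:42", JSON.stringify({ name: "Ada" }), { EX: 60 });

// before the TTL elapses, get() resolves to the cached string;
// afterwards it resolves to null
const cached = await client.get("user:42");
console.log(cached);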

Distributed Caching

Distributed caching involves storing data across multiple nodes. It works by storing data in a centralized cache that is accessible by all nodes in the application. When a request is made, the application first checks the cache to see if the requested data is available. If the data is not in the cache, the application fetches it from the original source and stores it in the cache for future requests. This enables scalability: you can grow by adding more nodes to your infrastructure, ensuring the cache can handle increased loads and demands. Unlike in-memory caching, where data is stored locally within the application’s memory, distributed caching spreads the cache across a network of nodes.

Aside from scalability, distributed caching improves throughput: with cached data spread across nodes, the system can handle a high volume of requests simultaneously. Another benefit is availability. Even if one node goes down, the data can still be retrieved from other nodes, ensuring high availability and fault tolerance.

Example of Using Distributed Caching

Below is an example of distributed caching with Redis:

const express = require("express");
const redis = require("redis");
const axios = require("axios");

const app = express();
const port = process.env.PORT || 3000;

// Base URL of the upstream API we are caching (placeholder value)
const API_BASE_URL = "https://api.example.com";

// Create a Redis client (node-redis v4 syntax)
const redisClient = redis.createClient({
  socket: {
    host: "your_redis_host",
    port: 6379,
  },
  password: "your_redis_password",
});

redisClient.on("error", (error) =>
  console.error(`Redis client error: ${error}`),
);

// Middleware to check the cache before fetching data
app.use(async (req, res) => {
  const key = req.originalUrl; // Use the request URL as the cache key
  const cachedData = await redisClient.get(key);

  if (cachedData) {
    // Cache hit: send the cached data and return
    res.send(JSON.parse(cachedData));
  } else {
    // Cache miss: fetch the data and store it in the cache
    try {
      const data = await fetchDataFromAPI(req.originalUrl);
      await redisClient.set(key, JSON.stringify(data), { EX: 3600 }); // Cache for 1 hour
      res.send(data);
    } catch (error) {
      res.status(500).send({ error: "Failed to fetch data" });
    }
  }
});

// Function to fetch data from the upstream API
async function fetchDataFromAPI(path) {
  const response = await axios.get(API_BASE_URL + path);
  return response.data;
}

// Connect to Redis, then start the server
redisClient.connect().then(() => {
  app.listen(port, () => {
    console.log(`Server running on port ${port}`);
  });
});

In the code above, we implement distributed caching in a Node.js application using Redis. The middleware first checks whether the requested data is in the Redis cache before fetching it from the source. If the data is not in the cache (a “cache miss”), it fetches the data from the upstream API, stores it in Redis for future requests, and then sends it to the client.

This process significantly reduces the load on the primary data source and improves the application’s responsiveness. The use of Redis for distributed caching ensures that data is consistently available across all instances of the application, thereby improving performance and consistency.

Content Delivery Networks (CDNs)

A content delivery network (CDN) is a network of distributed servers strategically placed at multiple locations worldwide to deliver web content to users more efficiently. CDNs are especially useful for caching and serving static assets such as images and JavaScript files. They reduce latency because they serve content from servers that are physically closer to users, and they accelerate delivery because content does not have to be retrieved from the origin server on every request. This ultimately improves the overall user experience and reduces the load on your origin server.

Example of Using CDNs for Caching

Below is an example of using a CDN for caching:

const express = require("express");
const app = express();

const CDN_URL = "https://cdn.example.com";

// Redirect requests for this image to the CDN; Express matches routes
// and middleware in registration order, so this handler runs before
// the static middleware below
app.get("/images/profile.jpg", (req, res) => {
  res.redirect(CDN_URL + req.path);
});

// Serve any other static files from the local "images" directory
// when the URL starts with "/images"
app.use("/images", express.static("images"));

app.listen(3000);

In the code above, we redirect requests for a specific file called profile.jpg to a content delivery network and serve the remaining static files from a local directory called “images”.

The route handler for GET requests to /images/profile.jpg uses res.redirect to point clients at the same file on the CDN. Because Express matches routes and middleware in registration order, this handler runs before the express.static middleware, which serves any other files from the “images” directory.
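For a CDN to cache your responses in the first place, the origin server has to tell it how long they may be reused. As a minimal sketch (the durations and the /api/config route are illustrative assumptions), you can have Express attach Cache-Control headers to what it serves:

// Allow CDNs and browsers to cache local static assets for one day;
// express.static turns the maxAge option into a Cache-Control header
app.use("/images", express.static("images", { maxAge: "1d" }));

// Or set the header explicitly on a dynamic response
app.get("/api/config", (req, res) => {
  res.set("Cache-Control", "public, max-age=3600"); // cacheable for 1 hour
  res.json({ theme: "dark" }); // illustrative payload
});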

Cache-Aside Pattern

The Cache-Aside pattern is a caching strategy used in Node.js applications to improve performance by temporarily storing frequently accessed data in a cache. This pattern is particularly effective for data that doesn’t change often and is expensive to fetch from the primary data store, such as a database. The idea is to first check the cache for the requested data. If the data is found in the cache (a cache hit), it is returned immediately. If the data is not found in the cache (a cache miss), the application fetches the data from the primary data store, stores it in the cache for future requests, and then returns it to the client. This approach reduces the load on the primary data store and speeds up retrieval of frequently accessed data.

Example of Using Cache-Aside Pattern for Caching

Below is an example of using the Cache-Aside pattern for caching:

const express = require("express");
const fetch = require("node-fetch"); // node-fetch v2, which supports require() (v3 is ESM-only)
const NodeCache = require("node-cache");

// Initialize the cache with a default time-to-live (TTL) of 600 seconds
const myCache = new NodeCache({ stdTTL: 600 });

// Function to fetch data from an external API
async function getPosts() {
  const response = await fetch("https://jsonplaceholder.typicode.com/posts");
  if (!response.ok) {
    throw new Error(response.statusText);
  }
  return await response.json();
}

const app = express();

// Route to fetch posts
app.get("/posts", async (req, res) => {
  try {
    // Attempt to retrieve posts from the cache
    let posts = myCache.get("allPosts");

    // If posts are not in the cache, fetch them from the API and store them in the cache
    if (posts == null) {
      posts = await getPosts();
      myCache.set("allPosts", posts, 300); // Cache for 300 seconds
    }

    res.status(200).send(posts);
  } catch (err) {
    console.log(err);
    res.sendStatus(500);
  }
});

const port = 3000;
app.listen(port, () => {
  console.log(`Server listening on http://localhost:${port}`);
});

In the code above, we’re implementing the Cache-Aside pattern in a Node.js application to enhance performance by caching data from an external API. First, we import the necessary modules: express for setting up the server, node-fetch for making HTTP requests, and node-cache for caching data. We then initialize node-cache with a default time-to-live (TTL) of 600 seconds for each cache entry.

The getPosts function fetches data from an external API using node-fetch. It checks the response status and throws an error if the request fails. Otherwise, it returns the JSON data from the response.

The application is set up with a route /posts that handles GET requests. When a request is made to this route, the application first attempts to retrieve the posts from the cache using myCache.get("allPosts").

If the posts are not found in the cache (a cache miss), the application fetches the posts from the external API using the getPosts function, stores them in the cache with a TTL of 300 seconds, and then sends the posts as the response. If the posts are found in the cache (a cache hit), they are immediately sent as the response.
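One detail the Cache-Aside pattern leaves to the application is invalidation: when the underlying data changes, the cached copy must be removed or refreshed, or clients will keep seeing stale results until the TTL expires. As a minimal sketch (the write route and the createPost helper are hypothetical), you can evict the cached entry whenever the data is modified:

// Hypothetical write route: after changing the data, evict the cached copy
app.post("/posts", express.json(), async (req, res) => {
  await createPost(req.body); // assumed helper that writes to the data store
  myCache.del("allPosts"); // the next GET /posts repopulates the cache
  res.sendStatus(201);
});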

Conclusion

Caching is a pivotal strategy for enhancing the performance and efficiency of Node.js APIs. Using techniques such as in-memory caching, distributed caching, content delivery networks (CDNs), and the Cache-Aside pattern, developers can substantially improve the responsiveness and scalability of their applications. These caching techniques not only reduce latency and server load but also improve the user experience and overall system reliability.
