You can use docker-compose to run them all at once and stop them all when you're ready. Let's get going!

## Build the publisher service in Node for Kafka with Docker

Start by creating a project directory with two folders inside it named "subscriber" and "publisher." These folders will contain the application code, supporting Node files, and Dockerfiles that will be needed to build the apps that will communicate with Kafka.

The publisher service will be the one that generates messages to be published to a Kafka topic. For simplicity, the service will generate a simple message at an interval of five seconds.

Inside of the "publisher" folder, add a new file called index.js with the following contents:

```js
const kafka = require('kafka-node')
```

The wait-for-it.js file will be used in the Docker container to ensure that the consumer doesn't attempt to consume messages from the topic before the topic has been created. Each second, it will check whether the topic exists; once Kafka has started and the topic is finally created, the subscriber will start. While it waits, it logs its progress:

```js
console.log('Waiting for Kafka topic to be created')
```

Last, create the Dockerfile in the "subscriber" folder with the following snippet:

```dockerfile
FROM node:12-alpine
```

The subscriber's Dockerfile is the same as the publisher's, with one difference: the command that starts the container runs the wait-for-it.js file rather than index.js.

The docker-compose file is where the publisher, subscriber, Kafka, and Zookeeper services are tied together. Zookeeper is a service that is used to synchronize Kafka nodes within a cluster. Zookeeper deserves a post all of its own, and because we only need one node in this tutorial, I won't be going in-depth on it here.

In the root of the project, alongside the "subscriber" and "publisher" folders, create a file called docker-compose.yml and add this configuration:

```yaml
version: '3'
```

*The docker-compose file for the Kafka stack*
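The subscriber's Dockerfile, of which only the `FROM` line survives above, might look like the following sketch. Everything after the first line is an assumption based on a typical Node image layout; the publisher's version would be identical except that its `CMD` runs `index.js`.

```dockerfile
# Sketch of subscriber/Dockerfile (layout after FROM is assumed)
FROM node:12-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# The one difference from the publisher: start via wait-for-it.js
CMD [ "node", "wait-for-it.js" ]
```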
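A minimal docker-compose.yml tying the four services together could look like this. The Kafka and Zookeeper image names, ports, and environment variables are assumptions (the wurstmeister images are a common single-node choice), not taken from the original article.

```yaml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper   # assumption: common single-node image
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka       # assumption
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "topic:1:1"   # hypothetical topic name
    depends_on:
      - zookeeper
  publisher:
    build: ./publisher
    depends_on:
      - kafka
  subscriber:
    build: ./subscriber
    depends_on:
      - kafka
```

With this in place, `docker-compose up` starts the whole stack and `docker-compose down` stops it.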
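The publisher described above — a kafka-node producer emitting a simple message every five seconds — can be sketched as follows. The topic name, broker host, and message shape are my assumptions, not from the original article; the connection only runs when a `KAFKA_HOST` environment variable is set, so the pure helper can be exercised on its own.

```javascript
// Sketch of publisher/index.js (hypothetical topic/host names).
let kafka = null;
try {
  kafka = require('kafka-node'); // only available inside the container image
} catch (e) {
  // Package not installed; the pure helper below still works.
}

// Build the payload array that kafka-node's Producer.send() expects.
function buildPayload(topic, text) {
  return [{ topic, messages: text }];
}

function start() {
  const client = new kafka.KafkaClient({ kafkaHost: process.env.KAFKA_HOST });
  const producer = new kafka.Producer(client);

  producer.on('ready', () => {
    // Generate a simple message at an interval of five seconds.
    setInterval(() => {
      producer.send(buildPayload('topic', `message ${Date.now()}`), (err, data) => {
        if (err) console.error(err);
        else console.log(data);
      });
    }, 5000);
  });

  producer.on('error', console.error);
}

if (kafka && process.env.KAFKA_HOST) start();

module.exports = { buildPayload };
```

Keeping the payload construction in a small pure function makes the message shape easy to test without a running broker.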
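The once-per-second polling that wait-for-it.js performs can be sketched like this. The use of kafka-node's `client.topicExists`, the topic name, and the hand-off to `./index.js` are assumptions; the poll loop itself is factored out so it can run without Kafka.

```javascript
// Sketch of subscriber/wait-for-it.js (hypothetical topic name and hand-off).
let kafka = null;
try {
  kafka = require('kafka-node');
} catch (e) {
  // Not installed outside the container; the poll loop below still works.
}

// Poll once per intervalMs until checkFn reports the topic exists,
// then stop polling and invoke onReady.
function waitForTopic(checkFn, onReady, intervalMs = 1000) {
  const timer = setInterval(() => {
    checkFn((exists) => {
      if (exists) {
        clearInterval(timer);
        onReady();
      } else {
        console.log('Waiting for Kafka topic to be created');
      }
    });
  }, intervalMs);
  return timer;
}

if (kafka && process.env.KAFKA_HOST) {
  const client = new kafka.KafkaClient({ kafkaHost: process.env.KAFKA_HOST });
  waitForTopic(
    (cb) => client.topicExists(['topic'], (err) => cb(!err)),
    () => require('./index.js') // assumption: subscriber entry point
  );
}

module.exports = { waitForTopic };
```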