Apache Kafka vs Databases (SQL & NoSQL) - The Confusion Every Developer Faces (Part 2)

πŸ‘¨πŸ»β€πŸ’»πŸ§­Apache Kafka vs Databases (SQL & NoSQL) – The Confusion Every Developer Faces – (Part 2)πŸŒ€πŸ€”β±οΈ

()

If you’ve just finished reading Part 1 of this series, you now know what Kafka is, how it works, and you’ve even built a small producer and consumer in Node.js. Great progress.

But here’s where most developers hit a wall.

They look at Kafka and think – “Wait, this stores data. My database also stores data. So… which one do I actually use? Can Kafka replace my database? Do I even need both?”

I had the exact same confusion. And honestly, it’s one of the most common questions developers ask when they first encounter Kafka. So in this blog, we’re going to settle it once and for all – with real comparisons, practical code, and examples that actually make sense.

Let’s clear the confusion.


First – They Are Not the Same Thing (Apache Kafka vs Databases)

Before we compare anything, let’s get this out of the way clearly:

Kafka is not a database. A database is not Kafka. They solve completely different problems.

Think of it this way.

A database is like a filing cabinet. You put documents in, you take them out, you update them, you delete them. The cabinet stores whatever the current state of things is. If you want to know a user’s email address right now – you open the cabinet and look it up.

Kafka is more like a conveyor belt in a factory. Things move across it in real time. Different stations (services) pick up what they need as it passes by. The conveyor belt doesn’t store the final product – it moves it from one place to another, and different workers react to what they see.

One is about storing state. The other is about moving events.


The Core Difference – In One Table

Comparison: Database vs Apache Kafka

Feature | Database (SQL / NoSQL) | Apache Kafka
Main Purpose | Store and query data permanently | Stream and transport events in real time
Data Model | Tables, documents, key-value pairs | Append-only event log
CRUD Support | Full – Create, Read, Update, Delete | Append only (no true update or delete)
Querying | SQL queries, filters, joins | No querying – only consuming
Data Lifetime | Forever, until you delete it | Temporary – based on retention policy
Best For | User profiles, orders, products | Notifications, real-time sync, event triggers
Speed | Fast reads and writes | Extremely high throughput
Indexes | Yes – fast lookups | No indexes
Examples | MySQL, PostgreSQL, MongoDB | Apache Kafka, RabbitMQ

The Biggest Mistake Developers Make

A lot of developers – especially when they first discover Kafka – start thinking:

“Should I replace my database with Kafka?”

The answer is almost always no.

Kafka and databases are not competitors. They are teammates. In production systems, companies use both together. Here is a simple example of how they work side by side:

User places an order on your website
↓
Save the order in PostgreSQL (permanent storage)
↓
Send an event to Kafka topic "orders"
↓
Multiple services react to that event:
β†’ Email Service sends a confirmation email
β†’ Inventory Service reduces stock in MongoDB
β†’ Analytics Service updates a real-time dashboard

The database handles what the current state is. Kafka handles what just happened and who needs to know about it.

That is the relationship. Simple, clean, and powerful.
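
In code, that side-by-side relationship is small. Here is a minimal sketch, assuming the pg Postgres client and kafkajs are installed – the connection string, the orders table, and the orders topic are placeholders for your own setup:

const { Kafka } = require('kafkajs');
const { Pool } = require('pg');

const db = new Pool({ connectionString: 'postgres://localhost/shop' }); // placeholder
const kafka = new Kafka({ clientId: 'order-service', brokers: ['localhost:9092'] });
const producer = kafka.producer(); // assume producer.connect() ran at startup

async function placeOrder(order) {
    // 1. The database owns the current state – save the order permanently
    await db.query(
        'INSERT INTO orders (id, user_id, total) VALUES ($1, $2, $3)',
        [order.id, order.userId, order.total]
    );

    // 2. Kafka announces what just happened – other services react to it
    await producer.send({
        topic: 'orders',
        messages: [{ key: order.id, value: JSON.stringify({ ...order, event: 'ORDER_PLACED' }) }]
    });
}

The email, inventory, and analytics services each run their own consumer on the orders topic – the order service never calls them directly.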


Can We Do CRUD in Kafka?

This is where it gets interesting.

In a traditional database, CRUD stands for Create, Read, Update, Delete. We do these operations every day – inserting a user, fetching their profile, updating their email, deleting their account.

Kafka does not work like that. Kafka supports only one operation – Append. We write a message to a topic and it stays there until the retention period expires. Consider a topic that holds user events:

Kafka Topic "users"
─────────────────────────────────────────
Offset 0  β†’ { id: "1", name: "Ali", action: "created" }
Offset 1  β†’ { id: "2", name: "Sara", action: "created" }
Offset 2  β†’ { id: "3", name: "Ahmed", action: "created" }
Offset 3  β†’ { id: "4", name: "Ramesh Kumawat", action: "created" }
Offset 4  β†’ { id: "5", name: "Ravi", action: "created" }
…
Offset 47 β†’ { id: "48", name: "John", action: "created" } ← this one
…
Offset 99 β†’ { id: "100", name: "Zara", action: "created" }

We cannot go back and update the message at offset 47. We cannot delete a specific message.

But here is the clever part – we can simulate CRUD using events:

CRUD Operation | Database | Kafka Equivalent
Create | INSERT INTO users | Produce a USER_CREATED event
Read | SELECT * FROM users | Consume messages from the topic
Update | UPDATE users SET email=… | Produce a USER_UPDATED event with new data
Delete | DELETE FROM users WHERE id=… | Produce a tombstone message (null value)

The last one is interesting. A tombstone in Kafka is simply a message with the same key but a null value. It signals to consumers that this record has been logically deleted. It is also what Kafka's log compaction uses under the hood to eventually clean up deleted keys.
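
If you want Kafka itself to eventually clean up superseded values and tombstones, you can create a compacted topic. A minimal sketch, assuming the Docker setup from the next section (the topic name users-compacted is just an example):

docker exec -it <container_id> kafka-topics \
  --create \
  --topic users-compacted \
  --bootstrap-server localhost:9092 \
  --partitions 3 \
  --replication-factor 1 \
  --config cleanup.policy=compact

With cleanup.policy=compact, Kafka keeps at least the latest message for every key and removes older values and acknowledged tombstones in the background.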


CRUD in Kafka – Terminal & Node.js

Let’s get practical. In this section, we are going to do full CRUD operations in Kafka – first directly from the terminal so we can see exactly what is happening inside Kafka, and then with Node.js code in the same project to verify the correct final result.

Everything happens in one project folder. No switching. No confusion.


Step 1 – Install Prerequisites

Make sure you have these installed on your machine before we start:

  • Docker Desktop
  • Node.js (which includes npm)

Once Docker Desktop is installed, open it and make sure it is running in the background before you continue.

Step 2 – Create a Project Folder

Create a new folder for this project. We can do it manually or via terminal:

mkdir kafka-crud-practice
cd kafka-crud-practice

Open this folder in your code editor. I am using IntelliJ IDEA – you can use VS Code or any editor you prefer.

idea .

Now install the Node.js dependencies we will need later in this same project:

npm init -y
npm install kafkajs uuid

Step 3 – Create the Docker Compose File

Inside your project folder, create a new file and name it exactly:

docker-compose.yml

Now paste this code into it:

version: '3'
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    ports:
      - "9092:9092"   # expose the broker to your host machine
    environment:
      # Single-node KRaft mode – this one container acts as both broker and controller
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      # Replication factor 1 is fine because we only run a single broker locally
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      CLUSTER_ID: "MkU3OEVBNTcwNTJENDM2Qk"

Save the file.


Step 4 – Start Kafka with Docker

Open your terminal inside the project folder:


and run:

docker-compose up -d

The first time we run this, Docker will download the Kafka image – this takes a couple of minutes depending on your internet speed. After that it starts instantly every time.


When it is done, you will see something like this:

βœ” Network kafka-crud-practice_default    Created
βœ” Container kafka-crud-practice-kafka-1  Started

To confirm Kafka is running, run:

docker ps

You will see output like this:

CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS         PORTS                    NAMES
6f6a3bf92870   confluentinc/cp-kafka:latest   "/etc/confluent/dock…"   5 minutes ago   Up 5 minutes   0.0.0.0:9092->9092/tcp   kafka-crud-practice-kafka-1

Alternatively, open the Docker Desktop app to confirm that the container is running and to find its Container ID.

Copy that Container ID – we will need it in every command below. In my case it is 6f6a3bf92870 – yours will be different.


Step 5 – Create the users Topic

Before we can produce or consume any messages, the topic must exist. Run this command – replace <container_id> with your actual container ID:

docker exec -it <container_id> kafka-topics \
  --create \
  --topic users \
  --bootstrap-server localhost:9092 \
  --partitions 3 \
  --replication-factor 1

You should see:

Created topic users.

Now we are fully set up. Let’s run the CRUD operations.


1. CRUD in Terminal

CREATE – Write New Messages

Open a new terminal and start the Kafka console producer with the command below:

docker exec -it <container_id> kafka-console-producer \
  --bootstrap-server localhost:9092 \
  --topic users

Once the > prompt appears, type each message below and hit Enter after each one:

{"id": "1", "name": "Ali Ahmad", "email": "ali@gmail.com", "action": "created"}
{"id": "2", "name": "Ramesh Kumawat", "email": "ramesh@gmail.com", "action": "created"}
{"id": "3", "name": "Morghan Boston", "email": "morghan@gmail.com", "action": "created"}

Hit Ctrl + C to exit the producer when done.


READ – See All Messages

Open a new terminal tab (keep the same folder) and run:

docker exec -it <container_id> kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --topic users \
  --from-beginning

You will see all three messages printed in your terminal. Leave this consumer terminal open – it will keep listening and print any new messages as they arrive in real time.


UPDATE – Send a New Event with the Same Key

Kafka does not let you edit a message that has already been written. Instead, we send a new message with the same key – and the latest one always wins when consumers build a snapshot.

Run this command:

docker exec -it <container_id> kafka-console-producer \
  --bootstrap-server localhost:9092 \
  --topic users \
  --property "parse.key=true" \
  --property "key.separator=:"

Type this – the format is key:value, separated by a colon:

1:{"id": "1", "name": "Ali Khan", "email": "ali@newemail.com", "action": "updated"}

Hit Ctrl + C to exit. Now switch to your consumer terminal – you will see the updated event appear immediately as a new line.


DELETE – Send a Tombstone Message

A tombstone is a message with the same key but a null value. It tells Kafka and all consumers that this record has been logically deleted.

docker exec -it <container_id> kafka-console-producer \
  --bootstrap-server localhost:9092 \
  --topic users \
  --property "parse.key=true" \
  --property "key.separator=:" \
  --property "null.marker=NULL"

Type this:

2:NULL

Ramesh Kumawat is now logically deleted. Switch to your consumer terminal – you will see the tombstone appear as a new line for key 2.


2. Wait – Why is Old Data Still Showing? πŸ€”

Okay so we just ran all the CRUD commands. We updated Ali Ahmad to Ali Khan. We deleted Ramesh with a tombstone. And then we ran the consumer to check the final result.

And we saw something like this:

null
{"id": "1", "name": "Ali Ahmad", "email": "ali@gmail.com", "action": "created"}
{"id": "2", "name": "Ramesh Kumawat", "email": "ramesh@gmail.com", "action": "created"}
{"id": "3", "name": "Morghan Boston", "email": "morghan@gmail.com", "action": "created"}
{"id": "1", "name": "Ali Khan", "email": "ali@newemail.com", "action": "updated"}

And our first reaction was probably –

"Wait. Ramesh is still showing even after I deleted him. And Ali Ahmad is still there even after I updated him to Ali Khan. What is going on? Did my commands not work?"

I had the exact same confusion when I first ran this. And honestly it makes complete sense to be confused here – because this is the part where Kafka behaves very differently from a database.

Let me explain what is actually happening.


Kafka is Not a Database – This is the Proof

When we run kafka-console-consumer --from-beginning, it shows us the raw event log – every single message ever written to that topic, in the exact order they were written. No filtering. No latest-value logic. Just the full history from start to finish.

Think of it like a notebook written in pen. Every event is a line. We cannot erase old lines. We can only add new lines after them. The console consumer reads every line from top to bottom – old events and new events both.

So what we actually saw in the terminal was correct:

null              ← tombstone for Ramesh (key 2) – written last, shown first
Ali Ahmad created ← event 1 – raw history
Ramesh created    ← event 2 – raw history
Morghan created   ← event 3 – raw history
Ali Khan updated  ← event 4 – raw history

All 5 events are there. Nothing is wrong. This is Kafka doing exactly what it is supposed to do – store every event faithfully and completely.


So Who Applies the “Latest Value Wins” Logic?

This is the key thing to understand.

Kafka does not apply it. Our application code does.

The console consumer is a raw debugging tool – it just shows everything. But in a real application, our code reads all the events and builds a final snapshot where only the latest value for each key survives.

Here is the exact mapping of what happened with our data:

Key "1" – Ali Ahmad
β”œβ”€β”€ Offset 1 β†’ Ali Ahmad created (old event)
└── Offset 4 β†’ Ali Khan updated (latest) ← this one wins

Key "2" – Ramesh
β”œβ”€β”€ Offset 2 β†’ Ramesh created (old event)
└── Offset 0 β†’ NULL tombstone (latest, lives in a different partition) ← deleted

Key "3" – Morghan
└── Offset 3 β†’ Morghan created (only event) ← unchanged

So our actual final state after all operations is:

id "1" β†’ Ali Khan | ali@newemail.com  | updated βœ…
id "2" β†’ DELETED  | tombstone         | deleted βœ…
id "3" β†’ Morghan  | morghan@gmail.com | created βœ…

Our commands worked perfectly. The console consumer just does not show it that way – it shows the full raw history instead.
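
In fact, the whole "latest value wins" rule fits in a few lines of application code. Here is a minimal sketch in plain JavaScript – the events array is a hypothetical list of already-consumed messages, not a real Kafka API:

// events: messages a consumer has read from the topic, in arrival order.
// Each entry is { key, value }, where value === null marks a tombstone.
function buildSnapshot(events) {
    const state = {};
    for (const { key, value } of events) {
        if (value === null) {
            delete state[key];      // tombstone β†’ logically deleted
        } else {
            state[key] = value;     // a later event overwrites an earlier one
        }
    }
    return state;
}

const snapshot = buildSnapshot([
    { key: '1', value: { name: 'Ali Ahmad' } },
    { key: '2', value: { name: 'Ramesh Kumawat' } },
    { key: '3', value: { name: 'Morghan Boston' } },
    { key: '1', value: { name: 'Ali Khan' } },   // update wins for key 1
    { key: '2', value: null },                   // tombstone deletes key 2
]);
console.log(snapshot); // { '1': { name: 'Ali Khan' }, '3': { name: 'Morghan Boston' } }

That loop is exactly what the Node.js consumer in the next section does while reading the topic.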


The Simple Difference

A database shows you the current state by default. Kafka shows you the full history by default. That difference is intentional – and it is actually what makes Kafka powerful.

Tool | What It Shows
kafka-console-consumer | Every raw event ever written – the full history log
Your application code | Final snapshot – latest state per key only

Why did null show first?

You might have noticed the null tombstone appeared at the top of the consumer output, not at the bottom where we sent it.

This happens because Kafka uses keys to decide which partition a message goes to. The tombstone for key 2 landed in a different partition than some of the other messages. When you read with --from-beginning, Kafka reads across all partitions, so the order can look slightly different from the order we sent them in.

This is completely normal. Within a single partition, order is always guaranteed. Across multiple partitions, the order depends on how Kafka distributes the messages.

Nothing is broken. Everything is working exactly as designed.
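
For the curious, here is roughly how the key-to-partition decision works. This is a conceptual sketch only – Kafka's real default partitioner hashes the key bytes with murmur2, not with the toy hash below:

// Conceptual sketch of key-based partitioning (NOT Kafka's real murmur2 hash)
function pickPartition(key, numPartitions) {
    let hash = 0;
    for (const ch of key) {
        hash = (hash * 31 + ch.charCodeAt(0)) | 0;  // toy string hash
    }
    return Math.abs(hash) % numPartitions;
}

// The same key always maps to the same partition, so per-key order is
// preserved. Different keys can land on different partitions, so global
// order across the whole topic is not.
console.log(pickPartition('1', 3));
console.log(pickPartition('2', 3));

Because our users topic was created with 3 partitions, messages with different keys were spread across all three – and the console consumer interleaved them when reading.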



3. CRUD With Node.js – Verify the Final Result

Now let’s prove it with code. We are going to use the same project folder and the same users topic from the terminal section above. No new folder needed.

The Node.js code will read all the events from the users topic, apply the “latest value wins” logic, and show you the correct final snapshot.


Create kafka-crud.js

Inside your kafka-crud-practice folder, create a new file called kafka-crud.js and paste this code:

const { Kafka, Partitioners } = require('kafkajs');
const { v4: uuidv4 } = require('uuid');

const kafka = new Kafka({
    clientId: 'crud-app',
    brokers: ['localhost:9092'],
    retry: {
        initialRetryTime: 300,
        retries: 5
    }
});

// ── Single producer instance β€” connect once, use everywhere ──
const producer = kafka.producer({
    createPartitioner: Partitioners.LegacyPartitioner
});

async function createUser(userData) {
    const user = {
        id: uuidv4(),
        ...userData,
        action: 'USER_CREATED',
        timestamp: new Date().toISOString()
    };
    await producer.send({
        topic: 'users',
        messages: [{ key: user.id, value: JSON.stringify(user) }]
    });
    console.log(`βœ… CREATE β†’ ${user.name} | ${user.email} | ID: ${user.id.slice(0, 8)}...`);
    return user;
}

async function updateUser(userId, updatedData) {
    const update = { id: userId, ...updatedData, action: 'USER_UPDATED', timestamp: new Date().toISOString() };
    await producer.send({
        topic: 'users',
        messages: [{ key: userId, value: JSON.stringify(update) }]
    });
    console.log(`✏️  UPDATE β†’ ID: ${userId.slice(0, 8)}... | New Email: ${updatedData.email}`);
}

async function deleteUser(userId) {
    await producer.send({
        topic: 'users',
        messages: [{ key: userId, value: null }]
    });
    console.log(`πŸ—‘οΈ  DELETE β†’ ID: ${userId.slice(0, 8)}... tombstone sent`);
}

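// ── Read the whole topic and rebuild the "latest value per key" snapshot ──
// Kafka has no query API, so we replay every event from the beginning,
// let newer values overwrite older ones, and drop keys whose latest event
// is a tombstone. Two seconds of silence is treated as "end of topic".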
async function readAllUsers() {
    const consumer = kafka.consumer({
        groupId: `read-group-${Date.now()}`,
        sessionTimeout: 30000,
        heartbeatInterval: 3000,
        maxWaitTimeInMs: 500,
    });

    await consumer.connect();
    await consumer.subscribe({ topic: 'users', fromBeginning: true });

    const users = {};
    let hasJoined = false;
    let lastMsgTime = null;
    let resolved = false;

    consumer.on(consumer.events.GROUP_JOIN, () => {
        hasJoined = true;
        lastMsgTime = Date.now();
    });

    await new Promise((resolve, reject) => {
        consumer.run({
            eachMessage: async ({ message }) => {
                const key = message.key?.toString();
                const value = message.value?.toString();
                if (!value || value === 'null') {
                    delete users[key];
                } else {
                    users[key] = JSON.parse(value);
                }
                lastMsgTime = Date.now();
            }
        }).catch(reject);

        const interval = setInterval(() => {
            if (resolved) return;
            if (!hasJoined) return;
            if (Date.now() - lastMsgTime > 2000) {
                resolved = true;
                clearInterval(interval);
                resolve();
            }
        }, 300);

        setTimeout(() => {
            if (!resolved) {
                resolved = true;
                clearInterval(interval);
                resolve();
            }
        }, 30000);
    });

    await consumer.disconnect();

    const allUsers = Object.values(users);
    console.log(`\nπŸ“‹ READ β€” Final Snapshot (${allUsers.length} user${allUsers.length !== 1 ? 's' : ''}):`);
    console.log('─'.repeat(55));
    if (allUsers.length === 0) {
        console.log('   No users found.');
    } else {
        allUsers.forEach((u, i) => {
            console.log(`   ${i + 1}. Name:   ${u.name}`);
            console.log(`      Email:  ${u.email}`);
            console.log(`      Action: ${u.action}`);
            if (i < allUsers.length - 1) console.log('');
        });
    }
    console.log('─'.repeat(55));
    return allUsers;
}

async function main() {
    console.log('\n=== Kafka CRUD β€” Node.js ===\n');

    await producer.connect();

    console.log('--- CREATE ---');
    const user1 = await createUser({ name: 'Ali Ahmad',  email: 'ali@gmail.com'  });
    const user2 = await createUser({ name: 'Sara Khan',  email: 'sara@gmail.com' });
    await createUser({ name: 'Ahmed Raza', email: 'ahmed@gmail.com' });

    await new Promise(r => setTimeout(r, 1000));

    console.log('\n--- UPDATE ---');
    await updateUser(user1.id, { name: 'Ali Khan', email: 'ali@newemail.com' });

    console.log('\n--- DELETE ---');
    await deleteUser(user2.id);

    await producer.disconnect();

    console.log('\n⏳ Waiting for Kafka to settle...');
    await new Promise(r => setTimeout(r, 2000));

    console.log('\n--- READ ---');
    await readAllUsers();

    console.log('\nβœ… Done!\n');
}

main().catch(console.error);

Run It

Make sure Kafka is still running (docker ps to check), then:

node kafka-crud.js

What You Will See

=== Kafka CRUD β€” Node.js ===

--- CREATE ---
βœ… CREATE β†’ Ali Ahmad | ali@gmail.com | ID: abc123...
βœ… CREATE β†’ Sara Khan | sara@gmail.com | ID: xyz456...
βœ… CREATE β†’ Ahmed Raza | ahmed@gmail.com | ID: pqr789...

--- UPDATE ---
✏️ UPDATE β†’ ID: abc123... | New Email: ali@newemail.com

--- DELETE ---
πŸ—‘οΈ DELETE β†’ ID: xyz456... tombstone sent

⏳ Waiting for Kafka to settle...

--- READ ---
πŸ“‹ READ β€” Final Snapshot (2 users):
───────────────────────────────────────────────────────
1. Name: Ali Khan
Email: ali@newemail.com
Action: USER_UPDATED

2. Name: Ahmed Raza
Email: ahmed@gmail.com
Action: USER_CREATED
───────────────────────────────────────────────────────

βœ… Done!

Sara Khan is gone – the tombstone worked βœ…
Ali shows the updated name and new email βœ…
Ahmed Raza is untouched βœ…


This is the correct final state. This is what your application shows to users – not the raw Kafka log from the terminal.

The difference between what the terminal consumer showed us and what this code shows is the exact difference between Kafka’s raw event log and our application’s built snapshot. You have now seen both with your own eyes in the same project. That is a big deal.


The Golden Rule of Kafka

Kafka’s job – store every event faithfully. Full history. Nothing hidden. Your application’s job – read those events and build the current state.

This pattern is called Event Sourcing – used by Netflix, LinkedIn, and Uber in their production systems every single day. We just ran through the full cycle of it in our own terminal and in Node.js code.

That is not a small thing. Most developers read about this for weeks before it actually clicks. You just made it click by doing it. πŸŽ‰


Stop Kafka When Done

Run this command:

docker-compose down

Output:

βœ” Container kafka-crud-practice-kafka-1  Removed
βœ” Network kafka-crud-practice_default Removed

Your code and files stay untouched. Run docker-compose up -d anytime to start again.




When to Use What – Simple Decision Guide

I know decision tables can feel overly simplified, but this one is genuinely useful when you’re sitting in a meeting and someone asks “should we use Kafka or just hit the database?”

Scenario | Use Database | Use Kafka
Store a user profile permanently | βœ… | ❌
Query users by email or name | βœ… | ❌
Send a welcome email when a user registers | ❌ | βœ…
Save an order with all its details | βœ… | ❌
Notify the inventory service when an order is placed | ❌ | βœ…
Show real-time updates in a dashboard | ❌ | βœ…
Historical reports and analytics storage | βœ… | ❌
Sync data between two microservices | ❌ | βœ…

The pattern is clear. Database = store it. Kafka = react to it.


Kafka vs SQL vs NoSQL – One More Level Deeper

Since we are on the topic, here is a quick breakdown of where SQL and NoSQL databases fit:

Aspect | SQL (PostgreSQL, MySQL) | NoSQL (MongoDB, DynamoDB) | Kafka
Structure | Strict schema, tables | Flexible, documents | Event log
Relationships | Yes – joins, foreign keys | Limited | None
Best For | Financial data, structured records | User data, flexible schemas | Real-time events, messaging
Scaling | Vertical (mostly) | Horizontal | Horizontal (partitions)
Transactions | Full ACID support | Partial | Limited – producer transactions only

Each tool has a job. When we give every tool its right job, the system becomes clean, fast, and easy to maintain.


Key Takeaways

If you take nothing else from this blog, remember these five things:

  • Kafka is not a database replacement – it is a communication layer that sits between your services
  • Databases store current state – Kafka stores what happened and when it happened
  • CRUD in Kafka is simulated through events – create, update, and delete are all just different types of messages
  • Tombstones are how Kafka handles deletion – a null value on a key marks it as removed
  • The best architecture uses both – database for storage, Kafka for events


See you in Part 3. πŸš€

Questions about Kafka vs your database setup? Drop a comment below – I read every one and reply to most of them.
