
Flink sink in action (3): Cassandra 3

Welcome to visit my GitHub

https://github.com/zq2599/blog_demos

Content: a categorized index of all my original articles with the accompanying source code, covering Java, Docker, Kubernetes, DevOPS and more.

Overview of this article

This is the third article in the "Flink sink in action" series. It tries out Flink's official Cassandra connector. The overall flow is shown in the figure below: the job reads strings from Kafka, runs a word count, and then both prints the results and writes them to Cassandra. [figure: overall flow from Kafka source through word count to the print and Cassandra sinks]

Links to the full series

  1. Flink sink in action (1): a first look
  2. Flink sink in action (2): Kafka
  3. Flink sink in action (3): Cassandra 3
  4. Flink sink in action (4): custom sinks

Software versions

The versions used in this article are as follows:

  1. cassandra:3.11.6
  2. kafka:2.4.0(scala:2.12)
  3. jdk:1.8.0_191
  4. flink:1.9.2
  5. maven:3.6.0
  6. OS running Flink: CentOS Linux release 7.7.1908
  7. OS running Cassandra: CentOS Linux release 7.7.1908
  8. IDEA:2018.3.5 (Ultimate Edition)

About Cassandra

The Cassandra used here is a three-node cluster; for deployment details, see "Quickly deploying a Cassandra 3 cluster with ansible".
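
Before moving on, an optional sanity check (not part of the original article): running nodetool on any node should show all three nodes of the cluster as up and normal.

nodetool status
# each of the three nodes should be listed with status "UN" (Up / Normal)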

Prepare the Cassandra keyspace and table

First create the keyspace and table:

  1. Log in to Cassandra with cqlsh:
cqlsh 192.168.133.168
  2. Create the keyspace (3 replicas):
CREATE KEYSPACE IF NOT EXISTS example
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'};
  3. Create the table:
CREATE TABLE IF NOT EXISTS example.wordcount (
    word text,
    count bigint,
    PRIMARY KEY(word)
    );
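
To confirm that the keyspace and table were created as expected, an optional check in the same cqlsh session (not part of the original steps) is:

DESCRIBE KEYSPACE example;
DESCRIBE TABLE example.wordcount;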

Prepare the Kafka topic

  1. Start the Kafka service;
  2. Create a topic named test001; a reference command:
./kafka-topics.sh \
--create \
--bootstrap-server 127.0.0.1:9092 \
--replication-factor 1 \
--partitions 1 \
--topic test001
  3. Start a console producer session for sending messages; a reference command:
./kafka-console-producer.sh \
--broker-list kafka:9092 \
--topic test001
  4. In that session, type any string and press Enter; each line is sent to the broker as a message;
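
As an optional sanity check (not part of the original steps), running a console consumer in another terminal confirms that the messages actually reach the broker:

./kafka-console-consumer.sh \
--bootstrap-server 127.0.0.1:9092 \
--topic test001 \
--from-beginning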

Source download

If you prefer not to write the code yourself, the source for the whole series can be downloaded from GitHub; the addresses and links are listed in the table below (https://github.com/zq2599/blog_demos):

Name | Link | Remarks
Project home page | https://github.com/zq2599/blog_demos | The project's home page on GitHub
Git repository address (https) | https://github.com/zq2599/blog_demos.git | Repository address of the project source, https protocol
Git repository address (ssh) | git@github.com:zq2599/blog_demos.git | Repository address of the project source, ssh protocol

The git project contains several folders; the application for this article is in the flinksinkdemo folder, as shown in the red box below. [figure: project folder layout]

Two ways of writing to Cassandra

Flink's official connector supports two ways of writing to Cassandra:

  1. Tuple writes: the fields of a Tuple are bound to the parameters of a specified CQL statement;
  2. POJO writes: via the DataStax object mapper, POJO classes are mapped to tables and columns with annotations;

The sections below try each approach in turn.

Development (Tuple writes)

  1. Continue using the flinksinkdemo project created in "Flink sink in action (2): Kafka";
  2. Add the Cassandra connector dependency to pom.xml:
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-cassandra_2.11</artifactId>
  <version>1.10.0</version>
</dependency>
  3. Also add the flink-streaming-scala dependency, otherwise the line calling CassandraSink.addSink will not compile:
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
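
Both dependency snippets reference ${flink.version} and ${scala.binary.version}. The flinksinkdemo project from the previous article should already define these; if not, a properties block roughly like the following would work (a sketch: the Flink value comes from the version list above, and Scala binary version 2.11 is an assumption chosen to match the connector artifact name):

<properties>
  <flink.version>1.9.2</flink.version>
  <scala.binary.version>2.11</scala.binary.version>
</properties>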
  4. Add CassandraTuple2Sink.java. This is the job class: it reads string messages from Kafka, converts them into a Tuple2 dataset and writes it to Cassandra. The key point is matching the Tuple fields to the parameters of the specified CQL statement:
package com.bolingcavalry.addsink;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;
import java.util.Properties;


public class CassandraTuple2Sink {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Set parallelism 
        env.setParallelism(1);

        // properties for connecting to Kafka
        Properties properties = new Properties();
        // broker address
        properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
        // zookeeper address
        properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
        // consumer group id
        properties.setProperty("group.id", "flink-connector");
        // instantiate the Kafka consumer
        FlinkKafkaConsumer<String> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
                "test001",
                new SimpleStringSchema(),
                properties
        );

        // start consuming from the latest offset, i.e. skip any historical messages
        flinkKafkaConsumer.setStartFromLatest();

        // obtain the DataStream via addSource
        DataStream<String> dataStream = env.addSource(flinkKafkaConsumer);

        DataStream<Tuple2<String, Long>> result = dataStream
                .flatMap(new FlatMapFunction<String, Tuple2<String, Long>>() {
                             @Override
                             public void flatMap(String value, Collector<Tuple2<String, Long>> out) {
                                 String[] words = value.toLowerCase().split("\\s");

                                 for (String word : words) {
                                     // in the Cassandra table, word is the primary key, so it must not be empty
                                     if (!word.isEmpty()) {
                                         out.collect(new Tuple2<String, Long>(word, 1L));
                                     }
                                 }
                             }
                         }
                )
                .keyBy(0)
                .timeWindow(Time.seconds(5))
                .sum(1);

        result.addSink(new PrintSinkFunction<>())
                .name("print Sink")
                .disableChaining();

        CassandraSink.addSink(result)
                .setQuery("INSERT INTO example.wordcount(word, count) values (?, ?);")
                .setHost("192.168.133.168")
                .build()
                .name("cassandra Sink")
                .disableChaining();

        env.execute("kafka-2.4 source, cassandra-3.11.6 sink, tuple2");
    }
}
  5. In the code above, data is read from Kafka, run through a word count and then written to Cassandra. Note the chain of methods after addSink (they carry the database connection parameters); this is the usage recommended by Flink. disableChaining is called only so that the DAG is easier to read in the Flink web UI; that line can be removed in production;
  6. After coding, run mvn clean package -U -DskipTests; the build produces flinksinkdemo-1.0-SNAPSHOT.jar in the target directory;
  7. In the Flink web UI, upload flinksinkdemo-1.0-SNAPSHOT.jar and specify the entry class, as shown in the red box below. [figure: job submission page]
  8. After the job starts, the DAG looks like this: [figure: job DAG]
  9. Go back to the Kafka console producer session created earlier and send the string "aaa bbb ccc aaa aaa aaa";
  10. Check the data in Cassandra: three new rows have been added and the contents match expectations (see the example query after this list). [figure: query result in cqlsh]
  11. Check the TaskManager console output: it prints the Tuple2 records, and they are consistent with what was written to Cassandra. [figure: TaskManager output]
  12. The record counts of all SubTasks on the DAG also match expectations. [figure: SubTask record counts]
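
For reference, a query like the one below (run in cqlsh) shows the rows written by the sink. With the sample message "aaa bbb ccc aaa aaa aaa", and assuming all six words fall into the same 5-second window, the expected result is aaa with count 4, bbb with count 1 and ccc with count 1:

SELECT * FROM example.wordcount;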

Development (POJO writes)

Next, try POJO writes: instances of the business-logic data class are written directly to Cassandra, with no CQL statement to specify:

  1. POJO writes require the DataStax driver; add the following dependency to pom.xml:
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>3.1.4</version>
  <classifier>shaded</classifier>
  <!-- Because the shaded JAR uses the original POM, you still need
                 to exclude this dependency explicitly: -->
  <exclusions>
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
  2. Note the exclusions node: when depending on the shaded DataStax driver, the transitive netty dependencies are excluded, as the official guide recommends (a quick way to verify the exclusion is shown below). Official documentation: https://docs.datastax.com/en/developer/java-driver/3.1/manual/shaded_jar/
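
If you want to double-check that the netty classes really are excluded, Maven's dependency report can filter on the group id (an optional check, not part of the original steps):

mvn dependency:tree -Dincludes=io.netty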
  3. Create the entity class WordCount with the database mapping annotations:
package com.bolingcavalry.addsink;

import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace = "example", name = "wordcount")
public class WordCount {

    @Column(name = "word")
    private String word = "";

    @Column(name = "count")
    private long count = 0;

    public WordCount() {
    }

    public WordCount(String word, long count) {
        this.setWord(word);
        this.setCount(count);
    }

    public String getWord() {
        return word;
    }

    public void setWord(String word) {
        this.word = word;
    }

    public long getCount() {
        return count;
    }

    public void setCount(long count) {
        this.count = count;
    }

    @Override
    public String toString() {
        return getWord() + " : " + getCount();
    }
}
  4. Then create the job class CassandraPojoSink:
package com.bolingcavalry.addsink;

import com.datastax.driver.mapping.Mapper;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

import java.util.Properties;

public class CassandraPojoSink {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Set parallelism 
        env.setParallelism(1);

        // properties for connecting to Kafka
        Properties properties = new Properties();
        // broker address
        properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
        // zookeeper address
        properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
        // consumer group id
        properties.setProperty("group.id", "flink-connector");
        // instantiate the Kafka consumer
        FlinkKafkaConsumer<String> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
                "test001",
                new SimpleStringSchema(),
                properties
        );

        // start consuming from the latest offset, i.e. skip any historical messages
        flinkKafkaConsumer.setStartFromLatest();

        // obtain the DataStream via addSource
        DataStream<String> dataStream = env.addSource(flinkKafkaConsumer);

        DataStream<WordCount> result = dataStream
                .flatMap(new FlatMapFunction<String, WordCount>() {
                    @Override
                    public void flatMap(String s, Collector<WordCount> collector) throws Exception {
                        String[] words = s.toLowerCase().split("\\s");

                        for (String word : words) {
                            if (!word.isEmpty()) {
                            // in the Cassandra table, word is the primary key, so it must not be empty
                                collector.collect(new WordCount(word, 1L));
                            }
                        }
                    }
                })
                .keyBy("word")
                .timeWindow(Time.seconds(5))
                .reduce(new ReduceFunction<WordCount>() {
                    @Override
                    public WordCount reduce(WordCount wordCount, WordCount t1) throws Exception {
                        return new WordCount(wordCount.getWord(), wordCount.getCount() + t1.getCount());
                    }
                });

        result.addSink(new PrintSinkFunction<>())
                .name("print Sink")
                .disableChaining();

        CassandraSink.addSink(result)
                .setHost("192.168.133.168")
                .setMapperOptions(() -> new Mapper.Option[] { Mapper.Option.saveNullFields(true) })
                .build()
                .name("cassandra Sink")
                .disableChaining();

        env.execute("kafka-2.4 source, cassandra-3.11.6 sink, pojo");
    }

}
  5. As the code shows, this differs considerably from the earlier Tuple version: to produce a dataset of POJOs, the flatMap anonymous class emits WordCount objects instead of tuples, the aggregation is written as a reduce function, and setMapperOptions is called to configure the mapping options;
  6. After building, upload the jar to Flink and specify CassandraPojoSink as the entry class. [figure: job submission page]
  7. Clean up the earlier data first: in cqlsh, run TRUNCATE example.wordcount;
  8. Send a string message to Kafka. [figure: console producer session]
  9. Check the database: the results match expectations. [figure: query result in cqlsh]
  10. The DAG and SubTask record counts are as follows: [figure: job DAG and SubTask record counts]

That concludes this walkthrough of writing Flink result data to Cassandra; I hope it gives you a useful reference.

Welcome to follow my official account: Programmer Xinchen

Search for "Programmer Xinchen" on WeChat. I'm Xinchen, looking forward to exploring the Java world with you... https://github.com/zq2599/blog_demos

Copyright notice
This article was written by [Programmer Xinchen]; please include a link to the original when reposting. Thanks.
