You can see the started ProcessGroup consisting of three processors.
The first one, `InvokeHTTP`, fetches the CSV file from the Internet and puts it into the queue of the next processor.
The second processor, `SplitRecords`, takes the single FlowFile (NiFi Record) which contains all CSV records and splits it into chunks of 2000 records, which are then separately put into the queue of the next processor.
The third one, `PublishKafkaRecord`, parses the CSV chunk, converts it to JSON records and writes them out into Kafka.
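Conceptually, the split-and-publish behavior of the last two processors can be sketched in plain Python. The chunk size matches the flow above; the CSV columns and the commented producer call are illustrative assumptions, since in NiFi all of this is configured on the processors rather than coded:

```python
import csv
import io
import json

def split_and_convert(csv_text, chunk_size=2000):
    """Mimic the SplitRecords + record-conversion steps: read all CSV rows,
    split them into chunks of `chunk_size`, and render each chunk as a list
    of JSON records (one JSON string per record, roughly what
    PublishKafkaRecord hands to Kafka)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for start in range(0, len(rows), chunk_size):
        chunk = rows[start:start + chunk_size]
        yield [json.dumps(row) for row in chunk]

# In the real flow, each JSON record in a chunk would then be sent by a
# Kafka producer, e.g. (hypothetical producer object):
#   producer.send(topic, value=record.encode("utf-8"))
```

A chunk size of 2000 keeps individual Kafka batches small while avoiding the overhead of publishing each of the hundreds of thousands of records as its own FlowFile.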
Double-click on the `InvokeHTTP` processor to show the processor details.
The statistics show that Druid ingested `13279` records per second within the last minute and has ingested around 600,000 records already.
All entries have been consumed successfully, indicated by having no `processedWithError`, `thrownAway` or `unparseable` records in the output of the `View raw`
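The same check can be done programmatically. Druid exposes per-supervisor statistics via `GET /druid/indexer/v1/supervisor/<id>/stats`; the exact response nesting sketched below (task group id → task id → `totals.buildSegments`) is an assumption for illustration, so verify it against your Druid version before relying on it:

```python
import json

ERROR_FIELDS = ("processedWithError", "thrownAway", "unparseable")

def ingestion_is_clean(stats):
    """Given the parsed JSON from Druid's supervisor stats endpoint (or the
    raw JSON string), return True if no task reports any failed records
    in its `totals.buildSegments` counters."""
    if isinstance(stats, str):
        stats = json.loads(stats)
    for task_group in stats.values():          # task group id -> tasks
        for task_stats in task_group.values(): # task id -> stats object
            totals = task_stats.get("totals", {}).get("buildSegments", {})
            if any(totals.get(field, 0) > 0 for field in ERROR_FIELDS):
                return False
    return True
```

In practice you would fetch the stats with an HTTP client and pass the response body to this function; a non-clean result is a signal to inspect the supervisor's task logs for parse errors.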