
XPath parser extension to allow parsing of JSON, MessagePack and Protocol-buffers #9277

Merged: 29 commits from xpath_parser_extension into influxdata:master on Jul 1, 2021

Conversation

srebhan
Member

@srebhan srebhan commented May 19, 2021

  • Updated associated README.md.
  • Wrote appropriate unit tests.

resolves #9075
related to #9197 and PR #9246

This PR renames the xml parser to xpath and massively extends its functionality. Existing configurations using xml will keep working, while it is now also possible to parse JSON documents using XPath syntax.
Additionally, we can generically parse MessagePack data using XPath syntax without adding anything at compile time.
Furthermore, we can now also generically parse protocol-buffer data using XPath syntax. In most cases the only required step is to import the protocol buffer in the protocolbuffer_document.go file.

Some short test-cases are added to demonstrate the functionality.
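
To give an idea of the JSON side, here is a minimal configuration sketch (the option and table names follow the updated README's xpath_json naming; the input file, selectors and field names are invented for illustration):

[[inputs.file]]
  files = ["example.json"]
  data_format = "xpath_json"

  [[inputs.file.xpath_json]]
    ## Create one metric per element of a hypothetical "measurements" array
    metric_selection = "//measurements"
    metric_name = "'example'"
    timestamp = "timestamp"
    timestamp_format = "unix_ms"
    [inputs.file.xpath_json.tags]
      device = "device"
    [inputs.file.xpath_json.fields]
      value = "number(value)"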

@telegraf-tiger telegraf-tiger bot added the feat, new plugin and plugin/parser labels May 19, 2021
@srebhan
Member Author

srebhan commented May 19, 2021

Please note, the documentation is not yet updated. This is intentional and will be fixed after a first round of reviews.

@mtnguru
Contributor

mtnguru commented Jun 9, 2021

Is this extension ready to test with Sparkplug B messages? I have time available to give it a good try.

@srebhan
Member Author

srebhan commented Jun 10, 2021

@mtnguru thanks for the offer. If you point me to the protocol-buffer definition of "Sparkplug B messages", I can add it so you can give it a try...

@mtnguru
Contributor

mtnguru commented Jun 10, 2021

Here is a link to the Cirrus-Link GitHub page - https://github.com/Cirrus-Link/Sparkplug/tree/master/sparkplug_b

Sparkplug B is used in the Groov EPIC controller

@srebhan
Member Author

srebhan commented Jun 10, 2021

Hey @mtnguru. I added sparkplug_b for parsing. It was a bit more difficult than expected, as I first had to create a golang implementation of the protobuf in https://github.com/eclipse/tahu, which seems to be the new location of the project.
Please note: I still need to create a PR for DocLambda/tahu@goclient, but for testing this should be sufficient.

I used the following config for parsing a "device birth" dummy message

  [[inputs.file]]
    files = ["./tahu_msg.dat"]
    data_format = "protobuf"
    xpath_protobuf_type = "org.eclipse.tahu.protobuf.Payload"
    [[inputs.file.protobuf]]
      metric_selection = "metrics[not(template_value)]"
      metric_name = "concat('tahu_', substring-after(name, ' '))"
      timestamp = "timestamp"
      timestamp_format = "unix_ms"
      [inputs.file.protobuf.tags]
        name = "substring-after(name, ' ')"
      [inputs.file.protobuf.fields_int]
        type = "datatype"
      [inputs.file.protobuf.fields]
        value = "(int_value | long_value | float_value | double_value | boolean_value | string_value)"

    [[inputs.file.protobuf]]
      metric_selection = "metrics/template_value/metrics"
      metric_name = "concat('tahu_', name)"
      timestamp = "timestamp"
      timestamp_format = "unix_ms"
      [inputs.file.protobuf.tags]
        name = "substring-after(name, ' ')"
      [inputs.file.protobuf.fields_int]
        type = "datatype"
      [inputs.file.protobuf.fields]
        value = "(int_value | long_value | float_value | double_value | boolean_value | string_value)"

and got

tahu_Metric0,host=Hugin,name=Metric0 type=12i,value="hello device" 1623346682000000000
tahu_Metric1,host=Hugin,name=Metric1 type=11i,value="true" 1623346682000000000
tahu_Metric2,host=Hugin,name=Metric2 type=2i,value="16" 1623346682000000000
tahu_Metric3,host=Hugin,name=Metric3 type=11i,value="true" 1623346682000000000
tahu_RPMs,host=Hugin value="123",type=3i 1623346682000000000
tahu_AMPs,host=Hugin type=3i,value="456" 1623346682000000000

@srebhan
Member Author

srebhan commented Jun 11, 2021

@mtnguru you can use the artifacts published by the tigerbot in the comment above. Please let me know if this works for you or in case you find any issues.

@mtnguru
Contributor

mtnguru commented Jun 14, 2021

Hello Sven,

I'm very new to telegraf and Go. However, I can now compile telegraf and am starting to learn how to debug it. There's still a lot to learn.

A couple of questions:

I'm trying to duplicate what you did - Where did you find the ./tahu_msg.dat file?

I'm using the mqtt_consumer input plugin instead of 'file'. Can I take your configuration file, replace the word 'file' with 'mqtt_consumer', and expect it to work? I see an issue in that the topic is not encoded with protobuf, only the payload is. Therefore they have to be recombined on output.

Thanks for looking at this, it's critical for my project and most of my focus right now. This is by far the best solution I've found.

James Sorensen

@srebhan
Member Author

srebhan commented Jun 14, 2021

@mtnguru you can use the artifacts the tiger-bot creates for you by clicking on the small black triangle in the comment here. No need to build telegraf yourself...

Where did you find the ./tahu_msg.dat file?

I created some dummy data using the python protocol buffer implementation with this script placed in tahu/sparkplug_b/stand_alone_examples/python/. I used the resulting device_birth.dat as ./tahu_msg.dat.
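
For anyone who wants to reproduce this without the tahu examples, here is a minimal hypothetical sketch of such a generator (it assumes sparkplug_b_pb2 was compiled from eclipse/tahu's sparkplug_b.proto; the metric content is made up):

# Minimal sketch (hypothetical): generate a dummy Sparkplug B payload file.
# Assumes sparkplug_b_pb2 was generated from eclipse/tahu's sparkplug_b.proto,
# e.g. via: protoc --python_out=. sparkplug_b.proto
import time

import sparkplug_b_pb2

payload = sparkplug_b_pb2.Payload()
payload.timestamp = int(time.time() * 1000)  # Sparkplug timestamps are in ms
payload.seq = 0

metric = payload.metrics.add()
metric.name = "Metric0 hello"
metric.timestamp = payload.timestamp
metric.datatype = 12  # 12 = "String" in the Sparkplug B datatype enumeration
metric.string_value = "hello device"

with open("tahu_msg.dat", "wb") as f:
    f.write(payload.SerializeToString())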

I'm using the mqtt_consumer input plugin instead of 'file'. Can I take your configuration file and replace the word 'file' with 'mqtt_consumer' and it will possibly work?

Yeah, kind of. You should configure the MQTT plugin to get the raw protobuf data and then use this part for parsing:

    data_format = "protobuf"
    xpath_protobuf_type = "org.eclipse.tahu.protobuf.Payload"
    [[inputs.mqtt_consumer.protobuf]]
      metric_selection = "metrics[not(template_value)]"
      metric_name = "concat('tahu_', substring-after(name, ' '))"
      timestamp = "timestamp"
      timestamp_format = "unix_ms"
      [inputs.mqtt_consumer.protobuf.tags]
        name = "substring-after(name, ' ')"
      [inputs.mqtt_consumer.protobuf.fields_int]
        type = "datatype"
      [inputs.mqtt_consumer.protobuf.fields]
        value = "(int_value | long_value | float_value | double_value | boolean_value | string_value)"

    [[inputs.mqtt_consumer.protobuf]]
      metric_selection = "metrics/template_value/metrics"
      metric_name = "concat('tahu_', name)"
      timestamp = "timestamp"
      timestamp_format = "unix_ms"
      [inputs.mqtt_consumer.protobuf.tags]
        name = "substring-after(name, ' ')"
      [inputs.mqtt_consumer.protobuf.fields_int]
        type = "datatype"
      [inputs.mqtt_consumer.protobuf.fields]
        value = "(int_value | long_value | float_value | double_value | boolean_value | string_value)"


@mtnguru
Contributor

mtnguru commented Jun 16, 2021

Success! Almost... The protobuf parser is working with the mqtt_consumer plugin. However, the Sparkplug specification makes MQTT packets non-atomic: the DBIRTH packet lists all the metric names and associates each with a numeric alias, while subsequent DDATA packets contain only the numeric alias and not the full metric name. Therefore the input plugin has to save the names and their aliases when it receives the DBIRTH packet; upon receipt of a DDATA packet the alias is looked up and the associated name is appended to the topic. The next two comments are examples of the DBIRTH and DDATA packets.

Here is the test output I'm getting from telegraf --test --debug

2021-06-15T14:51:05Z D! [parsers.protobuf::mqtt_consumer] XML document equivalent: "1716237686651029<is_null>false</is_null>Quality3<int_value>600</int_value><float_value>5401.963</float_value>1816237686651029<is_null>false</is_null>Quality3<int_value>600</int_value><float_value>23.298462</float_value>16237686651029<is_null>false</is_null>3<int_value>600</int_value>Quality<float_value>5401.963</float_value>19311623768665102"
2021-06-15T14:51:05Z D! [parsers.protobuf::mqtt_consumer] Number of selected metric nodes: 3

tahu_,host=saphira,topic=spBv1.0/cabin/DDATA/reactor1/epiclc type=9i,value="5401.963" 1623768665102000128
tahu_,host=saphira,topic=spBv1.0/cabin/DDATA/reactor1/epiclc type=9i,value="23.298462" 1623768665102000128
tahu_,host=saphira,topic=spBv1.0/cabin/DDATA/reactor1/epiclc type=9i,value="5401.963" 1623768665102000128

@srebhan
Member Author

srebhan commented Jun 16, 2021

@mtnguru, the input plugin should not be bothered with this kind of "smart" :-/ behavior. What you can do is use the starlark processor in stateful mode to do the lookup (see this example).

If you don't mind, please contact me on Slack so we can work out the starlark part and add your use-case as an example.
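
For readers following along, here is a rough sketch of that lookup idea (this is not the script we worked out; it assumes the starlark processor's persistent state dict, a topic tag that distinguishes DBIRTH from DDATA, and that alias and name were extracted as fields, as in the packet examples below):

[[processors.starlark]]
  source = '''
# Hypothetical sketch: resolve DDATA aliases from names seen in DBIRTH.
# Assumes the processor's persistent `state` dict and that "alias" and
# "name" are extracted as fields by the parser.
def apply(metric):
    topic = metric.tags["topic"] if "topic" in metric.tags else ""
    if "alias" not in metric.fields:
        return metric
    alias = metric.fields["alias"]
    if "DBIRTH" in topic and "name" in metric.fields:
        # Remember the name announced for this alias at birth
        state[alias] = metric.fields["name"]
    elif "DDATA" in topic and alias in state:
        # Restore the name recorded from the DBIRTH packet
        metric.fields["name"] = state[alias]
    return metric
'''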

@mtnguru
Contributor

mtnguru commented Jun 16, 2021

Example of the DBIRTH packet - some unused metrics have been deleted to shorten the output.
The DBIRTH packet also contains the last known value for each metric, which needs to be loaded into InfluxDB.
{
  "timestamp": 1623700502055,
  "metrics": [
    {
      "name": "Strategy/IO/FuelLevelPot",
      "alias": 17,
      "timestamp": 1623700502054,
      "dataType": "Float",
      "properties": {
        "Quality": {
          "type": "Int32",
          "value": 600
        }
      },
      "value": 10287.76
    },
    {
      "name": "Strategy/IO/ParkingLotHeaterTemp",
      "alias": 18,
      "timestamp": 1623700502054,
      "dataType": "Float",
      "properties": {
        "Quality": {
          "type": "Int32",
          "value": 600
        }
      },
      "value": 26.994537
    },
    {
      "name": "Strategy/Numeric Variables/FuelLevelReading",
      "alias": 19,
      "timestamp": 1623700502054,
      "dataType": "Float",
      "properties": {
        "Quality": {
          "type": "Int32",
          "value": 600
        }
      },
      "value": 10287.76
    },
    {
      "name": "Device Properties/Part Number",
      "timestamp": 1623700502054,
      "dataType": "String",
      "value": "GRV-EPIC-PR1"
    },
    {
      "name": "Device Properties/Controller Firmware Version",
      "timestamp": 1623700502054,
      "dataType": "String",
      "value": "R10.4b"
    },
    {
      "name": "Device Properties/Strategy Title",
      "timestamp": 1623700502054,
      "dataType": "String",
      "value": "CStore; 03/18/21; 13:21:17"
    }
  ],
  "seq": 1
}

@mtnguru
Contributor

mtnguru commented Jun 16, 2021

Example of the DDATA packet

{
  "timestamp": 1623703193646,
  "metrics": [
    {
      "name": "",
      "alias": 17,
      "timestamp": 1623703193646,
      "dataType": "Float",
      "properties": {
        "Quality": {
          "type": "Int32",
          "value": 600
        }
      },
      "value": 10296.559
    },
    {
      "name": "",
      "alias": 18,
      "timestamp": 1623703193646,
      "dataType": "Float",
      "properties": {
        "Quality": {
          "type": "Int32",
          "value": 600
        }
      },
      "value": 27.807373
    },
    {
      "name": "",
      "alias": 19,
      "timestamp": 1623703193646,
      "dataType": "Float",
      "properties": {
        "Quality": {
          "type": "Int32",
          "value": 600
        }
      },
      "value": 10296.559
    }
  ],
  "seq": 129
}

@shantanoo-desai
Contributor

@mtnguru @srebhan Hi, I'd like to help out with the Sparkplug B stuff here. I have good knowledge of the MQTT consumer for telegraf as well as the processor plugins, and would be happy to try things out for you.

@mtnguru
Contributor

mtnguru commented Jun 17, 2021

Hello @shantanoo-desai, I welcome your help; we're very close to success.

@srebhan and I talked yesterday, and he created a Starlark program that saves the aliases from DBIRTH packets and uses them to resolve the names in DDATA packets. I'm working on getting that running today. Are you familiar with Starlark?

Where are your Sparkplug packets coming from? Do you have a device like the Groov EPIC?

James

@shantanoo-desai
Contributor

@mtnguru I was thinking more of setting up a dummy example from the eclipse-tahu repo and running it through a mosquitto broker instance on my machine. I could then probably spin up telegraf to get the data into InfluxDB. I already have a stack ready, tiguitto, where I can play around with the configuration.

As for Starlark, I haven't really had a chance to apply it so far, but it seems like it might do the trick for NBIRTH and DBIRTH messages.

@srebhan
Member Author

srebhan commented Jun 18, 2021

@shantanoo-desai, @mtnguru and I worked out a nice starlark script to convert the raw values into a proper metric. I think @mtnguru is currently polishing the script and adding some bells and whistles, but he will likely publish it as a PR or so.

@mtnguru
Contributor

mtnguru commented Jun 18, 2021

@shantanoo-desai, @srebhan - We have the code working to load data - yay!
Let's set up a meeting and I can walk you through what we have.
It works, but there is still work and testing to be done.
Can you meet on Skype or Zoom? My Skype account is mtnplasma

@srebhan
Member Author

srebhan commented Jun 22, 2021

@shantanoo-desai, @srebhan - We have the code working to load data - yay!
Let's set up a meeting and I can walk you through what we have.
[...]

@mtnguru as this is not my day-job, I currently cannot afford to spend time in meetings. Please feel free to post the code here, open a separate issue, or drop me a note on Slack and I will try to give feedback ASAP. Sorry!

@srebhan srebhan changed the title XPath parser extension to allow parsing of JSON and Protocol-buffers XPath parser extension to allow parsing of JSON, MessagePack and Protocol-buffers Jun 24, 2021
@srebhan
Member Author

srebhan commented Jun 24, 2021

While applying the changes @reimda requested, I also added support for parsing generic MessagePack messages, so arbitrary msgpack data can now be parsed without any compile-time changes.
@sjwang90 I also tried to update the README, so if you could take a look...
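
For illustration, a sketch of the MessagePack variant (analogous to the JSON example above; the data_format naming follows the README, everything else is invented):

[[inputs.file]]
  files = ["example.msg"]
  data_format = "xpath_msgpack"

  [[inputs.file.xpath_msgpack]]
    metric_selection = "//measurements"
    timestamp = "timestamp"
    timestamp_format = "unix_ms"
    [inputs.file.xpath_msgpack.fields]
      value = "number(value)"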

@helenosheaa helenosheaa merged commit 25413b2 into influxdata:master Jul 1, 2021
bhsu-ms pushed a commit to bhsu-ms/telegraf that referenced this pull request Jul 12, 2021
@linkdat

linkdat commented Jul 14, 2021

Success! Almost... The protobuf parser is working with the mqtt_consumer plugin. [...]

@mtnguru, @srebhan
Hi James, Sven,
I am trying to achieve the same setup, i.e. processing protobuf data via mqtt_consumer, with the following configuration:

[[inputs.mqtt_consumer]]
  servers = ["tcp://localhost:1883"]

  topics = [
    "spBv1.0/#",
  ]

  client_id = "telegraf"
  data_format = "xpath_protobuf"
  
    [[inputs.mqtt_consumer.xpath_protobuf]]
        metric_selection = "//metrics"
        metric_name = "metrics"
        timestamp = "timestamp"

    [inputs.mqtt_consumer.xpath_protobuf.tags]
        dataType   = "dataType"

    [inputs.mqtt_consumer.xpath_protobuf.fields_int]
        value = "value"

When I load this config with Telegraf v1.19.1, I get the following error (also when replacing xpath_protobuf with protobuf), so I guess I am missing a configuration step or perhaps a specific version. Can you please point me in the right direction?

Error running agent: Error loading config file /etc/telegraf/telegraf.conf: error parsing file, Invalid data format: xpath_protobuf

Just for reference, an example of the test data I am processing (as protobuf):

{"timestamp":1626248987736,"metrics":[{"name":"test1","dataType":"Float","value":-1.0},{"name":"test2","dataType":"Float","value":-1.0},{"name":"test3","dataType":"Boolean","value":false}],"seq":75}

Many thanks for your feedback!

@srebhan
Member Author

srebhan commented Jul 14, 2021

@linkdat this parser is only available in the "master" branch and is not yet part of a released version. You can use a nightly build or build telegraf from git.
Please also note that you need to set xpath_protobuf_file and xpath_protobuf_type as described in the docs.
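
Putting both hints together, @linkdat's configuration would look roughly like this on a master build (a sketch; the .proto path is a placeholder):

[[inputs.mqtt_consumer]]
  servers = ["tcp://localhost:1883"]
  topics = ["spBv1.0/#"]
  client_id = "telegraf"

  data_format = "xpath_protobuf"
  ## Both options are required for the protocol-buffer format:
  xpath_protobuf_file = "sparkplug_b.proto"
  xpath_protobuf_type = "org.eclipse.tahu.protobuf.Payload"

  [[inputs.mqtt_consumer.xpath_protobuf]]
    metric_selection = "//metrics"
    metric_name = "metrics"
    timestamp = "timestamp"

    [inputs.mqtt_consumer.xpath_protobuf.tags]
      dataType = "dataType"

    [inputs.mqtt_consumer.xpath_protobuf.fields_int]
      value = "value"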

@linkdat

linkdat commented Jul 14, 2021

@linkdat this parser is only available in the "master" branch and is not yet part of a released version. You can use a nightly build or build telegraf from git.
Please also note that you need to set xpath_protobuf_file and xpath_protobuf_type as described in the docs.

@srebhan, thank you for the swift reply!

@martinscheffler

martinscheffler commented Sep 8, 2021

Hi all, thanks for implementing Sparkplug B, it is very much appreciated!

I just tested the nightly build, and I get an error as soon as an MQTT message comes in:

telegraf | 2021-09-08T12:40:21Z I! [inputs.mqtt_consumer] Connected [tcp://mqtt:1883]
telegraf | panic: interface conversion: interface {} is nil, not string
telegraf |
telegraf | goroutine 43 [running]:
telegraf | github.com/influxdata/telegraf/plugins/parsers/xpath.(*Parser).parseQuery(0xc000aa0280, {0x0, 0x0, 0x7d394e0}, {0x43e7a20, 0xc0001a1310}, {0x43e7a20, 0xc0001a1ae0}, {{0xc000a8f520, 0xd}, ...})
telegraf | /go/src/github.com/influxdata/telegraf/plugins/parsers/xpath/parser.go:169 +0x1a6d
telegraf | github.com/influxdata/telegraf/plugins/parsers/xpath.(*Parser).Parse(0xc000aa0280, {0xc0004a21b0, 0x2b, 0x2b})
telegraf | /go/src/github.com/influxdata/telegraf/plugins/parsers/xpath/parser.go:107 +0x4d0
telegraf | github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer.(*MQTTConsumer).onMessage(0xc0008f04e0, {0x5428848, 0xc000f16fa8}, {0x53f5060, 0xc0001a1270})
telegraf | /go/src/github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer/mqtt_consumer.go:282 +0x74
telegraf | github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer.(*MQTTConsumer).recvMessage(0xc0008f04e0, {0x0, 0x2}, {0x53f5060, 0xc0001a1270})
telegraf | /go/src/github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer/mqtt_consumer.go:271 +0x210
telegraf | github.com/eclipse/paho%2emqtt%2egolang.(*router).matchAndDispatch(0xc000f88f90, 0x0, 0x1, 0xc000ec4600)
telegraf | /go/pkg/mod/github.com/eclipse/paho.mqtt.golang@v1.3.0/router.go:171 +0x76d
telegraf | github.com/eclipse/paho%2emqtt%2egolang.(*client).startCommsWorkers.func1()
telegraf | /go/pkg/mod/github.com/eclipse/paho.mqtt.golang@v1.3.0/client.go:504 +0x31
telegraf | created by github.com/eclipse/paho%2emqtt%2egolang.(*client).startCommsWorkers
telegraf | /go/pkg/mod/github.com/eclipse/paho.mqtt.golang@v1.3.0/client.go:503 +0x3bd
telegraf exited with code 2

My telegraf.conf:

[[outputs.influxdb_v2]]
  urls = ["http://influxdb:8086"]
  token = "93WiOaj20yp7kk8wvqL0CPkg_hI_9KfQYtasKYSKFUGosmvvTZhnD8Z1VbS5pCn0hvOPr4nwWRmBNrvlf4mm6g=="
  organization = "myorg"
  bucket = "telegraf"

[[inputs.ping]]
  urls = ["influxdb"]

[[inputs.mqtt_consumer]]
  servers = ["tcp://mqtt:1883"]
  topics = [
    "spBv1.0/#",
  ]

  xpath_protobuf_file = "/sparkplug_b.proto"
  xpath_protobuf_type = "org.eclipse.tahu.protobuf.Payload"

  client_id = "telegraf"
  data_format = "xpath_protobuf"

  [[inputs.mqtt_consumer.xpath_protobuf]]
    metric_selection = "//metrics"
    metric_name = "metrics"
    timestamp = "timestamp"

    [inputs.mqtt_consumer.xpath_protobuf.tags]
      dataType = "dataType"

    [inputs.mqtt_consumer.xpath_protobuf.fields_int]
      value = "value"

Any ideas?

@srebhan srebhan deleted the xpath_parser_extension branch September 14, 2021 08:37
@srebhan
Member Author

srebhan commented Sep 14, 2021

@martinscheffler can you please open an issue for your problem (remember to redact tokens and credentials ;-)) and report what kind of raw data you receive by setting xpath_print_document = true? Please assign me to the issue you create, so I can take a look.

@martinscheffler

@srebhan Thank you for taking a look. I opened #9756.
I can't seem to assign it to you...

@pnunn

pnunn commented Sep 22, 2021

Hi guys, I'm a latecomer to this thread and am trying to get exactly this working on my setup.

I have tried using the config from @srebhan above, but I'm getting an error loading the config saying inputs.mqtt_consumer.protobuf.fields_int on line 61 conflicts with line 49, and I have no clue why.

My config is

[global_tags]
 user = "pnunn"

[[outputs.influxdb_v2]]
 urls = ["http://10.100.2.52:8086"]

 ## Token for authentication.
 ##token = "$DOCKER_INFLUXDB_INIT_ADMIN_TOKEN"
 token = "token_here"

 ## Organization is the name of the organization you wish to write to; must exist.
 ##organization = "$DOCKER_INFLUXDB_INIT_ORG"
 organization = "Blah"

 ## Destination bucket to write into.
 ##bucket = "$DOCKER_INFLUXDB_INIT_BUCKET"
 bucket = "MQTTBucket"

[[inputs.cpu]]
  percpu=true

[[inputs.mqtt_consumer]]
  ## Broker URLs for the MQTT server or cluster.  To connect to multiple
  ## clusters or standalone servers, use a seperate plugin instance.
  ##   example: servers = ["tcp://localhost:1883"]
  ##            servers = ["ssl://localhost:1883"]
  ##            servers = ["ws://localhost:1883"]
  servers = ["tcp://10.100.0.37:1883"]

  ## Topics that will be subscribed to.
  ##topics = [
  ##  "telegraf/host01/cpu",
  ##  "telegraf/+/mem",
  ##  "telegraf/#",
  topics = [
    "spBv1.0/#",
  ]
  data_format = "protobuf"
  xpath_protobuf_type = "org.eclipse.tahu.protobuf.Payload"
  xpath_print_document = true

    [[inputs.mqtt_consumer.protobuf]]
      metric_selection = "metrics[not(template_value)]"
      metric_name = "concat('tahu_', substring-after(name, ' '))"
      timestamp = "timestamp"
      timestamp_format = "unix_ms"
      [inputs.file.protobuf.tags]
        name = "substring-after(name, ' ')"
      [inputs.mqtt_consumer.protobuf.fields_int]
        type = "datatype"
      [inputs.mqtt_consumer.protobuf.fields]
        value = "(int_value | long_value | float_value | double_value | boolean_value | string_value)"

    [[inputs.mqtt_consumer.protobuf.protobuf]]
      metric_selection = "metrics/template_value/metrics"
      metric_name = "concat('tahu_', name)"
      timestamp = "timestamp"
      timestamp_format = "unix_ms"
      [inputs.mqtt_consumer.protobuf.protobuf.tags]
        name = "substring-after(name, ' ')"
      [inputs.mqtt_consumer.protobuf.fields_int]
        type = "datatype"
      [inputs.mqtt_consumer.protobuf.fields]
        value = "(int_value | long_value | float_value | double_value | boolean_value | string_value)"

Just wondering if anyone can point me in the right direction please?

I think I already see a major problem: this is now xpath_protobuf, not protobuf, by the looks of the docs...

The docs also say:

For using the protocol-buffer format you need to specify a protocol buffer definition file (.proto) in xpath_protobuf_file. Furthermore, you need to specify which message type you want to use via xpath_protobuf_type.

Where would I find a .proto file for Sparkplug B (if one exists), and how do I use it if it does?

Has anyone managed to get this working?

Ta
Peter.
