
OWON OW18E on Python #4

Open
ullix opened this issue May 8, 2024 · 32 comments

Comments

@ullix

ullix commented May 8, 2024

I am using owon_multi_cli via subprocess in Python. It works, thx for the code! But some things create headaches.

I currently use this without sudo and have no problems. Can I rely on not needing sudo in the future, or does it depend on other settings? If I needed sudo, the use of owon_multi_cli would become impossible for me, due to the need for inclusion in a bigger project. Could you explain when and why it could be required?

As I need it in Python, I am wondering whether it may now or in the future become available as a "Pure-Python" program?

I am using subprocess in this way:

scmd = ["OWON/owon_multi_cli", '-a', 'A6:C0:80:E3:85:9D', '-t', 'ow18e', "-1"]
suboutput = subprocess.run(scmd, capture_output=True)
value = float((suboutput.stdout.decode('UTF-8').strip().split(" "))[1])

Code is ok, but it runs for 1 ... 1.5 sec! This is long. Any option for making it shorter?
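For reference, the subprocess pattern above can be wrapped in a small timed helper. A sketch, using a stand-in child process instead of the real owon_multi_cli binary (the fake command and its output line are assumptions for illustration):

```python
import subprocess
import sys
import time

def read_single_value(cmd):
    """Run a one-shot reader command, parse its 'timestamp value unit ...'
    output line, and report how long the whole round trip took."""
    t0 = time.perf_counter()
    out = subprocess.run(cmd, capture_output=True, text=True)
    duration = time.perf_counter() - t0
    value = float(out.stdout.strip().split()[1])
    return value, duration

# Stand-in child that mimics owon_multi_cli's single-line output:
fake_cmd = [sys.executable, "-c", "print('1715254147.60 2.154 DC V')"]
value, duration = read_single_value(fake_cmd)
print(f"value={value} (took {duration * 1000:.1f} ms)")
```

Timing the wrapper this way separates the parse cost (microseconds) from the process startup and BLE connection cost, which is where the 1 ... 1.5 sec goes.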

@ullix ullix changed the title OW18E on Python OWON OW18E on Python May 8, 2024
@sercona
Owner

sercona commented May 8, 2024

hi,

I did not write that part of the code; it was already working, even though it's a bit indirect the way it's designed. pros: it lets an already-stable command do all the heavy lifting; with bluetooth, that's not at all trivial. I agree that for purity a pure python solution would be better, and if one can be found that is close to what this app needs, I'm all for trying out that option. but currently there are no plans to convert to pure python. I'm not against it, I just don't have the time for that one right now. sudo is needed because the cli command that this app forks needs it. it should be possible to install it 'setuid' so that any user can invoke that command (and then this app).

you are running this 'connectionless' style, so to speak, in that you are running this whole app just to get one reading, correct? and you want less 'startup tax' to get that single value? if that's the case, I'd suggest running this in the background and having the output go to a file. your other app (the one that does your real work) would essentially follow or 'tail -F' that log file and be able to get the values immediately. in fact, you could even just run 'tail' from a shell on that live, running log file and get the last line or last few lines.
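The 'tail -F' idea can be sketched in pure Python, with the partial-line problem handled explicitly. A sketch assuming the writer eventually completes each appended line (the follow() helper is hypothetical, not part of owon_multi_cli):

```python
import os
import time

def follow(path, poll=0.2):
    """Yield complete lines appended to `path`, like `tail -F`.
    Partial lines (no trailing newline yet) are held back until they
    are completed, so the reader never sees a half-written value.
    Sketch only: the file handle stays open for the process lifetime."""
    f = open(path, "r")
    f.seek(0, os.SEEK_END)      # start at the current end of file

    def lines():
        buf = ""
        while True:
            chunk = f.read()
            if not chunk:
                time.sleep(poll)
                continue
            buf += chunk
            while "\n" in buf:
                line, buf = buf.split("\n", 1)
                yield line

    return lines()
```

Each `next()` on the generator blocks until a full line is available, which is exactly the synchronization a plain `tail -n 1` lacks.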

alternately, to be more elegant, the app could multicast to a bus, like MQTT. let it blast its heart out to mqtt, and you can connect as a client and get values connectionless style if you want, or connection-oriented if you want a bit less latency. there's no support for mqtt currently, but that's something that could be done without too much effort. if that would get you by, let me know.

@ullix
Author

ullix commented May 9, 2024

Yes, you understood what my needs are.

The tail thing is an idea. I tested it, but it does not work. To get the intended output, like 1715244520.35 2.152 DC V, I start with:
/owon_multi_cli -a A6:C0:80:E3:85:9D -t ow18e >owon.txt

Then I read with: tail -n 1 -F owon.txt
resulting in: 1715244452.58 2.1

which is an incomplete output. After a few moments this partial line is completed, and several hundred more lines are put out, again ending in an incomplete line:

1715244452.58 2.165 DC V
#hundreds of lines ...
1715244681.33 2.165 DC V
1715244681.24 2.164 DC V
1715244681.42 2.162 DC V
1715244681.63 2.159 DC V
1715244681.84 2.156 DC V
1715244682.23 2.155 DC V
1715244682.44 2.15

Looks like OS-caused text buffering before the data is saved to file. Not an option, I'd say.

MQTT could work. My code is already set up to read MQTT. Or your script could send to WiFi as a WiFi client; my code is ready for that as well.

@ullix
Author

ullix commented May 9, 2024

Here is the plot of duration of calls from Python with /owon_multi_cli -a A6:C0:80:E3:85:9D -t ow18e :

[attached image: plot of call durations]

"Ambient" is the duration in milliseconds. The range is 200 ... 1200 ms, in a clearly non-random pattern. Any idea why such a strange plot results?

@ullix
Author

ullix commented May 9, 2024

One more question: I noticed that in the response to a subprocess.run(...) the returncode is always 1, while I'd have expected it being 0 (zero).

from the Python Doc: returncode:
Exit status of the child process. Typically, an exit status of 0 indicates that it ran successfully.

response:
args=['gtools/owon_multi_cli', '-a', 'A6:C0:80:E3:85:9D', '-t', 'ow18e', '-1'], returncode=1, stdout=b'1715254147.60 2.154 DC V\n', stderr=b''

Should it not be zero? Is it an indication of something being wrong though all seems fine?

@sercona
Owner

sercona commented May 9, 2024

you are probably running into python line buffering. I wrote about it in the README. python aggressively buffers. you can disable it by using -u on the command line (not in the shebang, it does not work there).

but mqtt is really the better way, since you won't have any disk buffering to worry about and the response time should be acceptable.
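The buffering effect is easy to demonstrate with a stand-in child process (the child script and its fake output lines are assumptions for illustration; for a Python child, -u is the switch that disables output buffering, so each line reaches the pipe as soon as it is printed):

```python
import subprocess
import sys
import time

# Child that emits one line per 100 ms.
child_src = (
    "import time\n"
    "for i in range(3):\n"
    "    print(f'1715254147.{i} 2.15{i} DC V')\n"
    "    time.sleep(0.1)\n"
)

# '-u' keeps the child unbuffered; without it the lines would arrive
# in one burst when the 4k stdio buffer fills or the process exits.
proc = subprocess.Popen([sys.executable, "-u", "-c", child_src],
                        stdout=subprocess.PIPE, text=True)

arrivals = []
for line in proc.stdout:
    arrivals.append((time.perf_counter(), line.strip()))
proc.wait()

for t, line in arrivals:
    print(line)
```

The arrival timestamps are spread out over the child's runtime, which shows the lines are reaching the parent one by one rather than all at exit.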

@sercona
Owner

sercona commented May 9, 2024

One more question: I noticed that in the response to a subprocess.run(...) the returncode is always 1, while I'd have expected it being 0 (zero).

from the Python Doc: returncode:
Exit status of the child process. Typically, an exit status of 0 indicates that it ran successfully.

response: args=['gtools/owon_multi_cli', '-a', 'A6:C0:80:E3:85:9D', '-t', 'ow18e', '-1'], returncode=1, stdout=b'1715254147.60 2.154 DC V\n', stderr=b''

Should it not be zero? Is it an indication of something being wrong though all seems fine?

I found where exit(1) was called. it should be fixed in the main branch now (thanks).

@sercona
Owner

sercona commented May 9, 2024

Here is the plot of duration of calls from Python with /owon_multi_cli -a A6:C0:80:E3:85:9D -t ow18e :

[attached image: plot of call durations]

"Ambient" is the duration in milliseconds. The range is 200 ... 1200 ms, in a clearly non-random pattern. Any idea why such a strange plot results?

so that I understand this: you are timing how long it takes for the subprocess call to complete? that graph is about how long each api call takes, is that correct?

again, disk buffering could easily explain latency in syncing to disk.

see if the mqtt idea works for you. I might even look into adding it to the mainline since it would be useful to have as a feature.

understand that the meter is not a fast-reading meter and this really would not be ideal as a DAQ for something that needed fast, drop-free data. there are better solutions for data logging. (another project I'm working on uses the ut61e from the uni-t brand; that has an optical link like many dmm's do, and I have some code and a 3d-printed dongle for the meter that will read its values without going through bluetooth. one advantage of that older uni-t meter is that it never turns off on its own, so you can run it for as long as you give the dmm battery power. most meters turn off after a while, so they're not great for longer-term data logging.)

@ullix
Author

ullix commented May 10, 2024

so that I understand this: you are timing how long it takes for the subprocess call to complete? that graph is about how long each api call takes, is that correct?

Actually, it is the complete call, including the return with a value like "3.123" (Volt), but the subprocess call itself takes 99% of that time.

My program GeigerLog (https://sourceforge.net/projects/geigerlog/ ) typically uses a cycle time of 1 sec. Ideally, anything called should be well under 1 sec in duration. But while owon_multi_cli manages a call in as little as 0.2 sec, the high end can take 1.5 sec! This spread is more of a nuisance than the absolute time :-(

again, disk buffering could easily explain latency in syncing to disk.

What syncing? I am not redirecting stdout to anything. Is there some disk writing inside of owon_multi_cli? If so, can this be eliminated?

As said, GeigerLog can already handle MQTT. But if I am not mistaken, this is not (yet) present in your code?

@ullix
Author

ullix commented May 10, 2024

you are probably running into python line buffering. I wrote about it in the README. python aggressively buffers. you can disable it by using -u on the command line (not in the shebang, it does not work there).

Just to note: this buffering issue arose on the shell, NOT under Python!

And I saw you have a '-l' logging option. I tried this too, but got the same result with tail as with redirection.

@ullix
Author

ullix commented May 10, 2024

I found some (simple) Python code to read Bluetooth. After enriching it with your C code, I get at least the voltage reading right. Here is the Python code:

#! /usr/bin/env python3
# -*- coding: UTF-8 -*-

"""
test Bluetooth @ OW18E
"""

import time
import asyncio
from bleak import BleakClient

ADDRESS             = "A6:C0:80:E3:85:9D"
CHARACTERISTIC_UUID = "0000fff4-0000-1000-8000-00805f9b34fb"        # Output like: 23 f0 04 00 5b 0f

def notification_handler(sender, data):

    global lasttime

    nowtime = time.time()
    print(f"{nowtime - lasttime:0.3f} sec   Data OW18E: {data.hex(' ')}", end = "  ")
    print("value: ", data[5] << 8 | data[4], end="   ")
    lasttime = nowtime

    # Owon meter logic (from: https://github.com/sercona/Owon-Multimeters/blob/main/owon_multi_cli.c)
    reading =  [0, 0, 0]

    # convert bytes to 'reading' array
    reading[0] = data[1] << 8 | data[0]
    reading[1] = data[3] << 8 | data[2]
    reading[2] = data[5] << 8 | data[4]

    # Extract data items from first field
    function = (reading[0] >> 6) & 0x0f
    scale    = (reading[0] >> 3) & 0x07
    decimal  = reading[0] & 0x07

    # Extract and convert measurement value (sign)
    if (reading[2] < 0x7fff): measurement = reading[2]
    else:                     measurement = -1 * (reading[2] & 0x7fff)
    print(f"measurement: {measurement / 10**decimal:0.3f}")


async def run(address):

    global lasttime

    async with BleakClient(address) as client:
        lasttime = time.time()
        await client.start_notify(CHARACTERISTIC_UUID, notification_handler)
        await asyncio.sleep(60.0)  #show data for a minute
        await client.stop_notify(CHARACTERISTIC_UUID)


if __name__ == "__main__":
    try:
        # asyncio.run() replaces the deprecated get_event_loop() pattern
        asyncio.run(run(ADDRESS))
    except KeyboardInterrupt:
        pass

and here output from this code:

0.060 sec   Data OW18E: 23 f0 04 00 c0 08  value:  2240   measurement: 2.240
0.392 sec   Data OW18E: 23 f0 04 00 bf 08  value:  2239   measurement: 2.239
0.508 sec   Data OW18E: 23 f0 04 00 bf 08  value:  2239   measurement: 2.239
0.512 sec   Data OW18E: 23 f0 04 00 c0 08  value:  2240   measurement: 2.240
0.180 sec   Data OW18E: 23 f0 04 00 c1 08  value:  2241   measurement: 2.241
0.208 sec   Data OW18E: 23 f0 04 00 c2 08  value:  2242   measurement: 2.242
0.420 sec   Data OW18E: 23 f0 04 00 c3 08  value:  2243   measurement: 2.243
0.512 sec   Data OW18E: 23 f0 04 00 c3 08  value:  2243   measurement: 2.243
0.300 sec   Data OW18E: 23 f0 04 00 c2 08  value:  2242   measurement: 2.242
0.388 sec   Data OW18E: 23 f0 04 00 c1 08  value:  2241   measurement: 2.241
0.212 sec   Data OW18E: 23 f0 04 00 c0 08  value:  2240   measurement: 2.240
0.508 sec   Data OW18E: 23 f0 04 00 c0 08  value:  2240   measurement: 2.240

Faster, but also not a speed champion, and with quite a variation in speed, from 60 ... 512 ms. Well, ok.

But now I am facing a different problem: my program GeigerLog is using PyQt5, which does not cooperate with asyncio :-((.
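One common workaround for the PyQt5/asyncio clash is to run the asyncio loop in a worker thread and hand coroutines over to it. A minimal sketch, with a stub coroutine standing in for the bleak notification code (AsyncWorker and fake_ble_read are hypothetical names, not from the thread's code):

```python
import asyncio
import threading

class AsyncWorker:
    """Run an asyncio event loop in a daemon thread so a Qt (or any
    other) main loop stays free. Coroutines are submitted from the
    main thread and their results fetched via concurrent futures."""

    def __init__(self):
        self.loop = asyncio.new_event_loop()
        self.thread = threading.Thread(target=self.loop.run_forever, daemon=True)
        self.thread.start()

    def submit(self, coro):
        """Schedule a coroutine on the worker loop; returns a
        concurrent.futures.Future usable from the calling thread."""
        return asyncio.run_coroutine_threadsafe(coro, self.loop)

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)
        self.thread.join()

async def fake_ble_read():
    await asyncio.sleep(0.05)   # stands in for await client.start_notify(...)
    return 2.240

worker = AsyncWorker()
value = worker.submit(fake_ble_read()).result(timeout=5)
print("value:", value)
worker.stop()
```

In a real GUI the `start_notify` callback would push values onto a queue that the Qt side polls, so neither loop ever blocks the other.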

@sercona
Owner

sercona commented May 11, 2024

I wonder how much the meter's update rate (and bluetooth overhead) is slowing things down.

I think the sample rate of the meter is twice a second, so that would come close to your 512ms value.

remember, this is a low-end hobby-grade meter. if you need better data acquisition, I do suggest avoiding bluetooth and going direct serial.

a full native python app that does not have to keep forking another command will surely be faster, but it also adds a lot more complexity and failure modes. I have to admit I'm not a huge fan of BLE and I'd never use it for mission-critical work.

if you can get your native python app to just keep sending data over mqtt, that level of client/server abstraction will probably work well for you. or block on a socket (regular tcp sockets) or modern websockets; something network-based. that would be my choice if I had to use this meter and had to make it as fast as possible.
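The plain-TCP-socket idea can be sketched with the standard library: a tiny server that answers every connection with the last cached reading, which a client can fetch 'connectionless style' in one round trip (LAST and the sample line are assumptions for illustration):

```python
import socket
import socketserver
import threading

# Updated continuously by the BLE reader loop in a real setup.
LAST = {"line": b"1716385673.65 2.946 DC V\n"}

class LastValueHandler(socketserver.BaseRequestHandler):
    """Send the most recent reading and close the connection."""
    def handle(self):
        self.request.sendall(LAST["line"])

# Port 0 lets the OS pick a free port; a real deployment would fix one.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), LastValueHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: one connect, one read, done.
with socket.create_connection(("127.0.0.1", port)) as s:
    print(s.recv(1024).decode().strip())
```

Because the server only ever hands out the cached value, the client's latency is one TCP round trip, independent of the meter's sample rate.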

@sercona
Owner

sercona commented May 11, 2024

how about this as an intermediate step? you could get the BLE data via your method and then call the C routine to parse it and expand it to human-printable format.

it would mean converting the existing c code into more of an api call, or a standalone app that JUST converts the bytes you'd get from BLE into human-printable output.

then you can use the existing logic in this program but skip its method of doing BLE.

another idea: have YOUR app send data to mqtt, and this app, as a C program, could just receive the mqtt BLE bytes (from that characteristic) and convert them to printable format. a network service that converts the ble bytes to printable output.

lots of ways to make this work.

@ullix
Author

ullix commented May 11, 2024

The OW18E specs say it does 3 samples per sec. In the Bluetooth logs I quite often find 5 measurements per second, each one with a different voltage, i.e. these are new values. Over several thousand measurements I found a range of 19 ... 608 ms between log values. Quite a spread.

I do have an old DVM sending data via RS232, which is a lot more consistent with respect to sampling duration. But it is clumsy with its cables and USB converters. Bluetooth, though, really is for masochists ;-).

I prefer Python for anything on the desktop, because the same code works on multiple systems, including Linux, Windows, or Mac. On microchips, of course I use C/C++.

You may have noticed that I have already taken from your code to implement the data conversion.

In my search I also found a Python offering for the GATT mechanism: https://pypi.org/project/pygatt/. I tried it out, and it works. However, it also needs sudo, which makes it impossible to implement in my main code.

Finally I settled for the bleak and asyncio code I posted above and have it run as a separate program which forwards the OW18E data to my main program GeigerLog as a WiFi client (transferring the data in a GET call). For anyone interested, it will be in the next (pre-)release here: https://sourceforge.net/p/geigerlog/discussion/devel/

lots of ways to make this work.

Indeed! And thanks again for your code and comments, which finally made this work for me!

@ullix
Author

ullix commented May 12, 2024

Yet another Python solution: SimplePyBLE https://pypi.org/project/simplepyble/
And it needs neither async nor sudo. Here is the code, running fast and well, with no failure observed so far:

import simplepyble
import time
import datetime         as dt

peripheral = None
lasttime   = None

def longstime():
    """Return current time as YYYY-MM-DD HH:MM:SS.mmm, (mmm = millisec)"""
    return dt.datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")[:-3] # ms resolution


def main():
    global peripheral, lasttime

    adapters = simplepyble.Adapter.get_adapters()
    adapter  = adapters[0] # I have only 1
    print(f"Selected adapter: {adapter.identifier()} [{adapter.address()}]")

    adapter.set_callback_on_scan_start(lambda: print("Scan started."))
    adapter.set_callback_on_scan_stop (lambda: print("Scan complete."))
    adapter.set_callback_on_scan_found(lambda peripheral: print(f"Found {peripheral.identifier()} [{peripheral.address()}]"))

    adapter.scan_for(1000)                      # Scan for N milliseconds
    peripherals = adapter.scan_get_results()

    BDMfound = False
    for peripheral in peripherals:
        if peripheral.identifier() == "BDM":
            BDMfound = True
            break

    if not BDMfound:
        print( "Peripheral BDM not found - exiting")
        return

    print(f"Connecting to: {peripheral.identifier()} [{peripheral.address()}]")
    peripheral.connect()

    services = peripheral.services()
    service_characteristic_pair = []
    for service in services:
        for characteristic in service.characteristics():
            service_characteristic_pair.append((service.uuid(), characteristic.uuid()))

    lasttime = time.time()
    try:
        choice = 7    # index of the OW18E notify characteristic in the pair list (device-specific)
        service_uuid, characteristic_uuid = service_characteristic_pair[choice]
        # contents = peripheral.notify(service_uuid, characteristic_uuid, lambda data: print(f"{time.time()} Notification: {data}"))
        contents = peripheral.notify(service_uuid, characteristic_uuid, printStuff)
        print("contents: ", contents)
    except Exception as e:
        print(e)

    while True: time.sleep(1)     # stop with CTRL C


def printStuff(data):
    # print(f"{time.time():0.8f} Notification: {data}", end="   ")

    global lasttime

    defname = "printStuff: "

    nowtime   = time.time()
    durtime = 1000 * (nowtime - lasttime)
    print(f"{longstime()} {defname}Dur:{durtime:3.0f} ms  OW18E:'{data.hex(' ')}'", end = "  ")
    lasttime = nowtime

    # Owon meter logic (from: https://github.com/sercona/Owon-Multimeters/blob/main/owon_multi_cli.c)
    # convert bytes to 'reading' array
    reading =  [0, 0, 0]
    reading[0] = data[1] << 8 | data[0]
    reading[1] = data[3] << 8 | data[2]
    reading[2] = data[5] << 8 | data[4]

    # Extract data items from first field
    function = (reading[0] >> 6) & 0x0f
    scale    = (reading[0] >> 3) & 0x07
    decimal  = reading[0]        & 0x07

    # Extract and convert measurement value (sign)
    if (reading[2] < 0x7fff):     measurement = reading[2]
    else:                         measurement = -1 * (reading[2] & 0x7fff)

    print(f"meas: {measurement}  fon: {function}  scl: {scale}  dec: {decimal}", end= "  ")
    voltage = measurement / 10**decimal
    print(f"Volt: {voltage:0.3f}")


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt as e:
        print()

    if peripheral is not None:
        peripheral.disconnect()
        print("Disconnected")
    print()

giving such output:

2024-05-12 09:52:00.909 printStuff: Dur:388 ms  OW18E:'23 f0 0c 00 97 08'  meas: 2199  fon: 0  scl: 4  dec: 3  Volt: 2.199
2024-05-12 09:52:01.089 printStuff: Dur:180 ms  OW18E:'23 f0 0c 00 95 08'  meas: 2197  fon: 0  scl: 4  dec: 3  Volt: 2.197
2024-05-12 09:52:01.305 printStuff: Dur:216 ms  OW18E:'23 f0 0c 00 94 08'  meas: 2196  fon: 0  scl: 4  dec: 3  Volt: 2.196
2024-05-12 09:52:01.509 printStuff: Dur:204 ms  OW18E:'23 f0 0c 00 92 08'  meas: 2194  fon: 0  scl: 4  dec: 3  Volt: 2.194
2024-05-12 09:52:01.689 printStuff: Dur:180 ms  OW18E:'23 f0 0c 00 90 08'  meas: 2192  fon: 0  scl: 4  dec: 3  Volt: 2.192
2024-05-12 09:52:01.869 printStuff: Dur:180 ms  OW18E:'23 f0 0c 00 8c 08'  meas: 2188  fon: 0  scl: 4  dec: 3  Volt: 2.188
2024-05-12 09:52:02.109 printStuff: Dur:240 ms  OW18E:'23 f0 0c 00 89 08'  meas: 2185  fon: 0  scl: 4  dec: 3  Volt: 2.185
2024-05-12 09:52:02.321 printStuff: Dur:212 ms  OW18E:'23 f0 0c 00 88 08'  meas: 2184  fon: 0  scl: 4  dec: 3  Volt: 2.184
2024-05-12 09:52:02.741 printStuff: Dur:420 ms  OW18E:'23 f0 0c 00 89 08'  meas: 2185  fon: 0  scl: 4  dec: 3  Volt: 2.185
2024-05-12 09:52:02.921 printStuff: Dur:180 ms  OW18E:'23 f0 0c 00 8b 08'  meas: 2187  fon: 0  scl: 4  dec: 3  Volt: 2.187
2024-05-12 09:52:03.101 printStuff: Dur:180 ms  OW18E:'23 f0 0c 00 8a 08'  meas: 2186  fon: 0  scl: 4  dec: 3  Volt: 2.186
2024-05-12 09:52:03.309 printStuff: Dur:208 ms  OW18E:'23 f0 0c 00 89 08'  meas: 2185  fon: 0  scl: 4  dec: 3  Volt: 2.185

Just one problem: once it runs integrated into my big code GeigerLog, it lasts for only 1 ... 100 sec before it crashes with a Segmentation fault! No idea what is going on :-((

@sercona
Owner

sercona commented May 12, 2024

Just one problem: once it runs integrated into my big code GeigerLog, it lasts for only 1 ... 100 sec before it crashes with a Segmentation fault! No idea what is going on :-((

I think that might have been the reason why I picked a very detached (or loosely coupled) method of getting the BLE data. between a desktop (amd64) and a rasp pi, the desktop seems to be very reliable using its motherboard BLE, but the pi often times out and is problematic. I think I tried a usb dongle with the pi as well as onboard BLE, and neither was reliable. like you said, a long-ish buffer would run and then eventually something would overflow or underflow and the process would end. it's why I dislike dealing directly with BLE.

if you are up for it, have you done any ESP32 BLE work? my goal, longer term, is to move all of this to the ESP chip and that would be running as a REST server over wifi. user would query the ESP32 via web/wifi and it would be in constant comms with the DMM via BLE, as a background ESP task. that would remove the pc and python and all that from the equation. I have the ESP and webserver stuff worked out but I have not yet tried to listen and decode BLE packets on the ESP.

@sercona
Owner

sercona commented May 12, 2024

if you need to depend on data collection, my view is that a data-supplier process would just handle fetching data at regular intervals over the messy BLE protocol and let the user query it via a very interoperable and simple web REST interface. if you can 'curl -s 192.168.1.123/get_value' and get the value, that's pretty much the most interoperable interface I can think of, and being connectionless it should be fast enough to fetch single values ('last value cached') from the ESP webserver side.

I have more experience with the ESP8266 and that does not have BLE. in such a case, I'd even go so far as to have some BLE device that would listen and decode that BLE stuff and send serial (115200 or 9600 speed) over ttl serial. again, very easy to interoperate with that. but the ESP32 is a single chip that should be able to be an effective 'gateway'.
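The 'curl -s .../get_value' idea maps directly onto the standard-library HTTP server. A desktop-side sketch of a last-value-cached endpoint (the handler class, route name, and sample reading are assumptions for illustration; on an ESP32 the same interface would be served by its own webserver):

```python
import http.server
import threading
import urllib.request

# Refreshed by the BLE reader task in a real setup.
LAST_VALUE = "1716385673.65 2.946 DC V"

class ValueHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/get_value":
            body = LAST_VALUE.encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):   # keep stdout quiet
        pass

# Port 0 lets the OS pick a free port for the demo.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), ValueHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

print(urllib.request.urlopen(f"http://127.0.0.1:{port}/get_value").read().decode())
```

Any client that can speak HTTP, from curl to GeigerLog, can then fetch the cached value without knowing anything about BLE.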

@ullix
Author

ullix commented May 13, 2024

I can well relate to what you are saying. I have used the ESP32 a lot, and really do like it. But after a first look into BLE, I stayed away from it! I used WiFi instead, which works really well and is fast on the ESP32.

I have now put my simplepyble code into a separate script (without any PyQt5) and am sending the data by WiFi. I could just as well have used MQTT, but with WiFi it is all under my control.

@ullix
Author

ullix commented May 22, 2024

Well, I am back to your code, but before making a suggestion I'd like to point out a bug in your code:

# Python time stamp          Your time stamp 
# 1716381338.584996    out:  1716381338.58 2.941 DC V
# 1716381338.9769561   out:  1716381338.97 2.940 DC V
# 1716381339.364931    out:  1716381339.36 2.941 DC V
# 1716381339.604926    out:  1716381339.60 2.942 DC V
# 1716381339.7849112   out:  1716381339.78 2.944 DC V
# 1716381340.0009139   out:  1716381340.85 2.945 DC V   ???
# 1716381340.1769168   out:  1716381340.17 2.943 DC V
# 1716381340.3849323   out:  1716381340.38 2.938 DC V
# 1716381340.6009152   out:  1716381340.60 2.935 DC V
# 1716381340.7769148   out:  1716381340.77 2.932 DC V
# 1716381340.9849427   out:  1716381340.98 2.928 DC V
# 1716381341.2249417   out:  1716381341.22 2.923 DC V
# 1716381341.3769596   out:  1716381341.37 2.918 DC V
# 1716381341.6169314   out:  1716381341.61 2.915 DC V
# 1716381341.7649314   out:  1716381341.76 2.916 DC V
# 1716381342.0049653   out:  1716381342.48 2.917 DC V   ???
# 1716381342.2169118   out:  1716381342.21 2.916 DC V

There is a jump in your timestamp in the lines marked with ???, which reverses in the next line. It looks like you are forgetting to account for the leading zeros in the decimal part of the time stamp?
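The leading-zero failure can be reproduced in a few lines. A sketch of the suspected mechanism (the exact truncation inside the C code is an assumption; the point is the missing zero-padding when gluing seconds and microseconds into one string):

```python
def naive_ts(sec, usec):
    """Mimics sprintf("%ld", 1000*usec) followed by truncation:
    leading zeros of the fraction are silently dropped."""
    return f"{sec}.{str(usec * 1000)[:2]}"

def padded_ts(sec, usec):
    """Zero-padded hundredths of a second; the fraction keeps its zeros."""
    return f"{sec}.{usec // 10000:02d}"

# One of the '???' samples: only 0.004965 s past the full second.
sec, usec = 1716381340, 4965
print(naive_ts(sec, usec))    # 1716381340.49  (wrong: zeros dropped)
print(padded_ts(sec, usec))   # 1716381340.00  (right)
```

The bug only shows when usec is small enough to have leading zeros, which is why the jump appears sporadically and 'reverses' on the next normal sample.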

@ullix
Author

ullix commented May 22, 2024

As already said in the initial post, your code works well, is easy to use, and easy to share. However, using it with the single-value flag makes it awfully slow; too slow for me.

The Python subprocess module allows running a shell command and piping its stdout output back into Python for further processing. So the idea was to run your code freely, i.e. at max speed, and only pick up the output when I needed it.

Works in principle, except for these dreaded buffers. Python allows switching buffers off, or setting their size to any desired value, but your code did not care. It always insisted on filling a buffer of some 4k bytes, which took way too long.

Eventually I resorted to modifying your code:

// finally, print the value and any flags (if not in quiet-mode)
if (!g.quiet) {
  if (!g.json) {
    if (!strcmp(fixed_val_buf, INF_OPEN_S)) {
      printf("%s %s\n", ts, funct_s);  // no value, just a 'flag'
    } else {
      //~ modified by ullix
      fprintf(stdout, "%s %s\r\n", ts, funct_s);
      fflush(stdout);
    }
  } else {
    if (!g.debug) { printf("{ "); }  // if we are not doing a debug run, we need an open brace for proper json
    printf("\"Timestamp\": \"%s\", \"Display_Value\": \"%s\" }\n", ts, funct_s);
  }
}

I believe the flush is the key element, but I am still wondering why your printf output wasn't pipeable. Anyway, this works, and I get results in 0.1(!) ... 200 ms, despite having to wade through the whole stdout pipeline. This example took 5 ms. I treat a line as valid if the time stamps agree to a precision of 2 decimals (perhaps strange, but so far I have always gotten a result):

1716385673.6478612   out: 1716385672.24 2.916 DC V   data:  ['1716385672.24', '2.916', 'DC', 'V']
1716385673.6478784   out: 1716385672.46 2.918 DC V   data:  ['1716385672.46', '2.918', 'DC', 'V']
1716385673.6479053   out: 1716385672.66 2.923 DC V   data:  ['1716385672.66', '2.923', 'DC', 'V']
1716385673.6479125   out: 1716385672.87 2.928 DC V   data:  ['1716385672.87', '2.928', 'DC', 'V']
1716385673.647918    out: 1716385673.52 2.933 DC V   data:  ['1716385673.52', '2.933', 'DC', 'V']
1716385673.6479237   out: 1716385673.26 2.937 DC V   data:  ['1716385673.26', '2.937', 'DC', 'V']
1716385673.6479294   out: 1716385673.50 2.941 DC V   data:  ['1716385673.50', '2.941', 'DC', 'V']
1716385673.6528633   out: 1716385673.65 2.946 DC V   data:  ['1716385673.65', '2.946', 'DC', 'V']
22 15:47:53.652 DEVEL  : .28047    getOW18EVoltage: Dur:5.0 ms  CmplP:1716385673.65 2.946 DC V

I am wondering if you could modify the -1 flag to make your code look for a char on stdin and only then print a single last-measured-value to stdout?

@sercona
Owner

sercona commented May 22, 2024

There is a jump in your timestamp in the lines marked with ???, which reverses in the next line. It looks like you are forgetting to account for the leading zeros in decimal part of the time stamp?

I think that's just due to (again) buffering lag in stdout. I'm getting the timestamps from the unix system and I'm not doing anything to create the fractions; the numbers are all monotonic, as you'd expect (I have worked on car systems that actually really did go backwards in some timestamps, but this isn't that!).

the fflush() call is useful and I will add it to all the printf's, since it helps keep the output stream more 'realtime' and less buffered.

I have not checked; I don't know if calls to printf are variable in duration based on the state of the current buffer. I suppose you could background-call the printf function so that it returns immediately and does not slow the caller down, but I think that's trying to solve a problem that is better solved another way.

mqtt or some other networked bus is the better solution. it avoids all the stdout nonsense and has the additional benefit of being networked and distributed.

trying to fight with stdout is just not the best way to get 'fast realtime data'. depending on the speed of the network interconnect, it's very possible that writing to a network socket is faster than writing to a buffered sequential data file on the local system. writing to mqtt should be fast enough, and it's an easy drop-in replacement for the printf.

@sercona
Owner

sercona commented May 22, 2024

//~ modified by ullix
fprintf(stdout, "%s %s\r\n", ts, funct_s);
fflush(stdout);

I do tend to use fflush when I need to override the system buffering. so what you did was good and I'll add that to the C code.

but writing to stdout or a file and having something else 'tail -F' it is really inefficient. let's try mqtt and see how that works out.

@sercona
Owner

sercona commented May 23, 2024

I am wondering if you could modify the -1 flag to make your code look for a char on stdin and only then print a single last-measured-value to stdout?

I think that makes a lot of sense. the question is: where is the best place to put the 'get me the timestamp' call? you can grab the timestamp before the call, and then the duration of the call won't affect the TS. or you can make the api call and then, as you say, block the TS call until the first char comes in; in a way, though, that's going to give you less accuracy in the 'true' timestamp.

my view is that when the data comes out is less important; what's more important is that the timestamp be as close to the data-fetch as you can get it.

@ullix
Author

ullix commented May 23, 2024

I think that's just due to (again) buffering lag in stdout.

Nope. It is a bug in the code. Look at this line:
# 1716381342.0049653 out: 1716381342.48 2.917 DC V ???
Your timestamp is basically the same as the true one, were it not for the missing two zeros in the decimal part. The culprit is this piece of code:
sprintf(fract_part_s, "%ld", (1000*tv.tv_usec));
This discards any leading zeros in the decimal part of the timestamp!

Can be fixed with "%02ld", but why not simply use the complete timestamp?

but writing to stdout or a file and having something else 'tail -F' it is really inefficient. let's try mqtt and see how that works out.

I am not tailing files, I am piping. In Python:

scmd = ["./owon_multi_cli", '-a', 'A6:C0:80:E3:85:9D', '-t', 'ow18e']
proc = subprocess.Popen(scmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True, shell=False)
data = proc.stdout.readline()

I believe this is just as efficient as using MQTT or WiFi, but with less configuration needed?

my view is that when the data comes out is less important; what's more important is that the timestamp be as close to the data-fetch as you can get it.

I fully agree. The OW18E can do 3 ... 5 measurements per sec, so this gives a timing uncertainty of several hundred milliseconds. For anything more precise time-wise, a different instrument is needed.

My idea on the OW18E: if the -1 flag is set, nothing at all is ever put out to stdout. Only upon seeing a char on stdin do you print the last available value to stdout (like: "2.935 DC V"). This way it is never older than 0.5 sec. A timestamp is not even needed, as the requesting program keeps track anyway.

@ullix
Author

ullix commented May 23, 2024

Even easier:
create a (new) flag which lets your code write a single line of data, like "2024-05-23 12:09:03.225, 2.929", and close the file. Overwrite this when a new reading is available, so there will never be more than 1 reading in the file.

The consuming program simply reads from this file. Here I would use a human-readable time stamp as there might be stale data.

I am using this now with Python code based on simplepyble, supposedly running on Linux, Windows, Mac.

@sercona (Owner) commented May 23, 2024

the problem with constantly over-writing the single data line in the file is that there's no sync between the writer and the reader. otoh, if the writer always appends and the reader blocks for any new data, you never miss anything, nothing is ever over-written and each new data item has its own unique timestamp. the client has to check for timestamp changes in the log file (a stat call can do that). monitor for time changes of the logfile and also size changes. you can leave the file open for read and just bump up your filepointer when you notice the file has been appended to by the writer.
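The append-and-block reader described above could look roughly like this (a polling sketch; an inotify-based version would avoid the sleep). Every yielded line keeps its own timestamp, so nothing is overwritten or lost.

```python
import os
import time

def follow(path, from_start=False, poll=0.2):
    # Yield complete lines as the writer appends them to the log file.
    buf = ""
    with open(path) as f:
        if not from_start:
            f.seek(0, os.SEEK_END)   # skip history, wait for new data
        while True:
            chunk = f.read()
            if chunk:
                buf += chunk
                while "\n" in buf:
                    line, buf = buf.split("\n", 1)
                    yield line
            else:
                time.sleep(poll)     # nothing new yet; poll again
```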

if you really want single line status, I still suggest a network socket kind of thing, and mqtt is an easier way of dealing with network endpoints. you can register with mqtt and get notification of when new data arrives. it's message passing, which is the correct way (imho) to implement a loose coupling of data supplier and consumer.

@sercona (Owner) commented May 23, 2024

> I think that's just due to (again) buffering lag in stdout.
>
> Nope. It is a bug in the code. Look at this line: # 1716381342.0049653 out: 1716381342.48 2.917 DC V ??? Your timestamp is basically the same as the true one, were it not for the missing two zeros in the decimal part. The culprit is this piece of code: sprintf(fract_part_s, "%ld", (1000*tv.tv_usec)); This discards any leading zeros in the decimal part of the timestamp!
>
> Can be fixed with "%02ld", but why not simply use the complete timestamp?

thanks, I updated the code to use the %02 prefix. the reason we have to combine them is that the struct does not have both in one field; you have to take the seconds and then add in the microseconds, but also format with the decimal point. 2 digits of fractional seconds is more than enough and arguably even just a tenth is enough.

> I believe this is just as efficient as MQTTing or WIFIing? But less configuration needed.
>
> My idea on the OW18E: if the -1 flag is set, nothing at all is ever put out to stdout. Only upon seeing the char at stdin you print to stdout the last available value (like: "2.935 DC V"). So this is never older than 0.5 sec. Timestamp not even needed, as the sending program keeps track anyway.

I've found, just from experience over the years, that keeping timestamps coupled with the measured data is so useful, you never want to remove or ignore the TS if you have it. let the app decide if it wants to ignore it, but you can always use it as a 'unique id' of sorts to realize when new data is there. and if your app is doing catch-ups in batches, it can find the last timestamp it saw, then read all the lines up to 'current' and then go back to sleep for the next batch. you never lose data and every data is timestamped, regardless of where or when the client finally gets around to reading it.
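That batch catch-up can be as simple as filtering on the last timestamp the client processed; the `epoch value unit` line format is assumed from the sample output quoted earlier.

```python
def newer_than(lines, last_ts):
    # Keep only rows stamped after the last one the client processed.
    fresh = []
    for line in lines:
        ts = float(line.split()[0])   # first field is the epoch timestamp
        if ts > last_ts:
            fresh.append(line)
    return fresh
```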

@sercona (Owner) commented May 23, 2024

another comment on timestamps; there are usually many sub-steps that are done in order to complete a full data 'row' fetch from a device. you get the request from the user (or a timer) and that's a timestamp. you send a request to the remote agent. it receives it. it queues it and asks its 'bus' for a hardware reply. the reply finally arrives in the agent. it sends it over the network. we finally get it. etc etc.

the point is that its useful to know as many timestamps as there are steps. this can help with debugging and performance tuning. you can always discard data you dont care about, later; but you can't recreate data you NOW want, that you didn't previously store ;)

and so ideally, it would be great to timestamp the very source of the data (the meter). sadly, the meter never gives tuples of (time, value) so we have to timestamp it. the BLE stack is the first thing that touches the data that comes back from the meter. that would be the best place to grab the current time and send that along with the measurement to any client layers.

if you cant (or dont want to) intercept things early in the BLE stack, then getting the timestamp from the caller of the BLE api is the next best thing. if you want to, you could try to measure the latency of the BLE stack and then subtract that out.

but again, this is a hobbyist meter and we are likely trying to make it into something it really wont ever be.

I have code that parses SCPI data from flukes and agilents and again, the older ones dont have timestamps so the best you can do is either first char that comes in from rs232 or the last char. the last char is easier since its just the last '\n' in the scpi packet and you know the whole data line is there and ready to 'action'.
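That "stamp on the last char" approach might look like this in Python; `read_byte` is a stand-in for a one-byte serial read (not a real pyserial call), and the reply is assumed to be newline-terminated as with SCPI.

```python
import time

def read_scpi_reply(read_byte):
    # Accumulate bytes until the terminating '\n', then timestamp:
    # at that instant the whole reply is known to be complete.
    buf = bytearray()
    while True:
        b = read_byte()
        buf += b
        if b == b"\n":
            return time.time(), bytes(buf[:-1]).decode()
```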

@sercona (Owner) commented May 24, 2024

I'm adding some code (testing it now) to support both sender and listener sides, using mqtt. the plan is to run one instance of this app in background and it would relay all received DMM events as mqtt messages. the user can then run the app, this time in 'give me the last good data line' mode and you call that when you need that 'tail -f' kind of feature to just grab the most recent valid reading.

since this method would put the sender into a broadcast mqtt mode, you can have multiple listeners. one can consume all the events and log them to disk; or graph them, or just 'give me the last value'. and only 1 sender process that just keeps up with all received BLE events and sends them 'hot potato' to mqtt. that should be an acceptable transfer latency and no banging on the local disk just to get the last valid measurement.
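The sender side of that design could be sketched as below. The topic name and JSON payload shape are assumptions, not what the app actually uses; the transport would be paho-mqtt if it is installed, and the point of `retain=True` is that a late-joining listener immediately receives the last good value.

```python
import json
import time

try:
    import paho.mqtt.client as mqtt   # assumption: paho-mqtt is installed
except ImportError:
    mqtt = None

TOPIC = "owon/ow18e/reading"          # hypothetical topic name

def make_payload(value, unit):
    # Keep the timestamp coupled with the measurement, as argued above.
    return json.dumps({"ts": time.time(), "value": value, "unit": unit})

def publish_reading(client, value, unit):
    # retain=True: the broker hands the last retained message to new subscribers
    client.publish(TOPIC, make_payload(value, unit), retain=True)
```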

@cgm999 commented Oct 18, 2024

On the topic of reading notifications with another Python module than those listed above, this time with gattlib, which is pure Python using only dbus (in the latest commit):

import time
import sys
import os
from traceback import format_exc

from gattlib import GATTRequester
from gattlib import DiscoveryService
from gattlib import AdapterNotFound,DeviceNotFound,CharacteristicNotFound


exit_asap = False

class MyRequester(GATTRequester):

    def on_connect_failed(self, msg):
        print(f"Could not connect: {msg}")
        global exit_asap
        exit_asap = True

    def on_disconnect(self):
        print("disconnected")
        global exit_asap
        exit_asap = True


def on_notification(value):
    global last_notification
    last_notification = time.time()
    print(f"notification {last_notification} {value}")


if len(sys.argv) != 3:
    print("Usage: {} <addr> <characteristic-UUID>".format(sys.argv[0]))
    sys.exit(0)


last_notification = time.time()


try:
    service = DiscoveryService("hci0")
    devices = service.discover(timeout=1)
    requester = MyRequester(sys.argv[1], auto_connect=True)
    requester.enable_notifications(sys.argv[2], on_notification )
except (AdapterNotFound,DeviceNotFound,CharacteristicNotFound) as ex:
    print(f"{str(ex)}")
    sys.exit(1)

if exit_asap:
    sys.exit(1)

print("Connected, waiting events...")

try:
    while True:
        if time.time() - last_notification > 6:
            print("no notifications for 6 seconds")
            exit_asap = True
        if exit_asap:
            break
        time.sleep(1)
except KeyboardInterrupt:
    sys.exit(0)


sys.exit(1)

@sercona (Owner) commented Oct 18, 2024

I'll have to give that a try.

I'm undecided about converting the main decode code to python. reason is that I really want this to be running on esp32.

if it runs on esp32 then you just need wifi and you can get values over the web REST interface. that totally removes any 'host' requirements.

but I will try the python code. maybe one way to use it with the existing code is to kind of mimic the current cli command (that people seem to dislike).

better yet: run the python in the background and use mqtt messages from the ble decoder (python side). on the decode side, it just has to register as an mqtt receiver and get all the messages the sender puts out. that would be a nice distributed architecture.

@cgm999 commented Oct 19, 2024

I think it makes sense to use a separate thread that is Python and just print the values in JSON, and keep the decoder in Python as well.

@sercona (Owner) commented Oct 20, 2024

I just added some early beta code that will run ENTIRELY on esp32 !

Feel free to give it a try. I'm going to make it web/REST style later; for now it outputs to the serial console with a millis timestamp and the ASCII meter value.
