OWON OW18E on Python #4
hi, I did not write that part of the code; it was already working, even though it's a bit indirect, the way it's designed. pros: it lets an already-stable command do all the heavy lifting. with bluetooth, that's not at all trivial. I agree that for purity it would be better to have a pure python solution, and if one can be found that is close to what this app needs, I'm all for trying out that option. but currently there are no plans to convert to pure python. I'm not against it, I just don't have the time for that one right now. sudo is needed because the cli command that the app forks needs it. it should be possible to install 'setuid' so that any user can invoke that (and then this app).

you are running this 'connectionless' style, so to speak, in that you are running this whole app just to get one reading, correct? and you want less 'startup tax' to get that single value? if that's the case, I'd suggest running this in the background and having the output go to a file. your other app (that does your real work) would essentially follow or 'tail -F' that log file and be able to get the values immediately. in fact, you could even just do a 'tail' from shell on that live, running log file and get the last line or last few lines.

alternately, to be more elegant, the app could multicast to a bus, like MQTT. let it blast its heart out to mqtt and you can connect as a client and get values connectionless style if you want, or connection-oriented if you want a bit less latency. there's no support for mqtt currently, but that's something that could be done without too much effort. if that would get you by, let me know.
Yes, you understood what my needs are. The tail thing is an idea. I tested, but it does not work. To get the intended output like Then I read with: which is an incomplete output. After a few moments this partial line is completed, and several hundred lines are also being put out, again ending in an incomplete line:
Looks like the OS caused text buffering before saving to the file. Not an option, I'd say. MQTT could work. My code is already set up to read MQTT. Or your script could send over WiFi as a WiFi client; my code is ready for that too.
One more question: I noticed that in the response to a
response: Should it not be zero? Is it an indication of something being wrong, even though all seems fine?
you are probably running into python line buffering. I wrote about it in the README. python aggressively buffers. you can disable it by using -u on the command line (not in the shebang; it does not work there). but mqtt is really the better way, since you won't have any disk buffering to worry about and the response time should be acceptable.
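To make the buffering effect concrete, here is a small self-contained sketch. It uses a `python -c` stand-in child instead of the real CLI (an assumption for illustration only) and times how long the first `readline()` on the pipe blocks, with and without `-u`:

```python
import subprocess
import sys
import time

# Stand-in child: prints one reading, then stays alive for a second.
# Without -u, the print sits in python's block buffer (stdout is a
# pipe, not a tty) until the process exits.
child = "import time; print('3.123 DC V'); time.sleep(1.0)"

def first_line_latency(extra_args):
    """Spawn the child and measure how long the first readline() blocks."""
    p = subprocess.Popen([sys.executable, *extra_args, "-c", child],
                         stdout=subprocess.PIPE, text=True)
    t0 = time.perf_counter()
    line = p.stdout.readline()
    dt = time.perf_counter() - t0
    p.wait()
    return line.strip(), dt

# Buffered: the line only arrives when the child exits, ~1 s later.
line, slow = first_line_latency([])
# Unbuffered: -u makes the line arrive as soon as it is printed.
line, fast = first_line_latency(["-u"])
print(f"buffered: {slow:.2f}s  unbuffered: {fast:.2f}s")
```

The same trick applies when the producer is a C program: an `fflush(stdout)` after each `printf` plays the role `-u` plays here.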
I found where exit(1) was called. it should be fixed in the main branch now (thanks).
so that I understand this: you are timing how long it takes for the subprocess call to complete? that graph is about how long each api call takes, is that correct? again, disk buffering could easily explain latency in syncing to disk. see if the mqtt idea works for you. I might even look into adding it to the mainline since it would be useful to have as a feature.

understand that the meter is not a fast-reading meter and this really would not be ideal as a DAQ for something that needed to ensure fast, drop-free data. there are better solutions for data logging. (another project I'm working on uses the ut61e from uni-t and that has an optical link like many DMMs do, and I have some code and a 3d printed dongle for the meter that will read its values and not go through bluetooth. one advantage of that older uni-t meter is that it never turns off on its own, so you can run it for as long as you give the dmm battery power. most meters turn off after a while, so they're not great for longer term data logging.)
Actually, it is the complete call including the return with a value, like "3.123" (Volt), but the subprocess call itself takes 99% of that time. My program GeigerLog (https://sourceforge.net/projects/geigerlog/) typically uses a cycle time of 1 sec. Ideally, anything called should be well under 1 sec duration. But while the
What syncing? I am not redirecting stdout to anything. Is there some disk writing inside of As said, GeigerLog can already handle MQTT. But if I am not mistaken, this is not (yet) present in your code?
Just to note: this buffering issue did arise on the shell, NOT under Python! And I saw you have a '-l' logging command. Tried this also, but same result with tail as with redirection.
I found some (simple) Python code to read Bluetooth. After enriching it with your C code, I get at least the voltage reading right. Here is the Python code:
and here is the output from this code:
Faster, but also not a speed champion, and with quite a variation in the speed from 60 ... 512 ms. Well, ok. But now I am facing a different problem: my program GeigerLog is using PyQt5, which does not cooperate with asyncio :-((. |
I wonder how much the meter's update rate (and bluetooth overhead) is slowing things down. I think the sample rate of the meter is twice a second, so that would come close to your 512 ms value. remember, this is a low-end hobby-grade meter. if you need better data acquisition, I do suggest avoiding bluetooth and going direct serial. a full native python app that does not have to keep forking another command will surely be faster, but it also adds a lot more complexity and failure modes. I have to admit I'm not a huge fan of BLE and I'd never use it for mission-critical work. if you can get your native python app to just keep sending data over mqtt, that level of client/server abstraction will probably work well for you. or depend on a socket (regular tcp sockets) or modern websockets. something network based. that would be my choice if I had to use this meter and had to make it as fast as possible.
how about this as an intermediate step? you can get the BLE data via your method and then call the C routine to parse it and expand to human-printable format. it would mean converting this existing c code to be more like an api call or a standalone app that JUST converts the bytes you'd get from BLE into human-printable output. then you can use the existing logic in this program but skip its method of doing BLE. another idea: have YOUR app send data to mqtt and this app, as a C program, could just receive the mqtt BLE bytes (from that characteristic) and convert to printable format. a network service that converts the ble bytes to printable. lots of ways to make this work.
The OW18E specs say it does 3 samples per sec. In the Bluetooth logs I quite often find 5 measurements per second, each one having a different Voltage, i.e. these are new values. Over several thousand measurements I found a range of 19 ... 608 ms between log values. Quite a spread. I do have an old DVM sending data via RS232, which is a lot more consistent with respect to sampling duration. Though, this is clumsy with its cables and USB converters. But Bluetooth really is for masochists ;-). I prefer Python for anything on the desktop, because the same code works on multiple systems, including Linux, Windows, or Mac. On microchips, of course, I use C/C++. You may have noticed that I have already taken from your code to implement the data conversion. In my search I also found a Python offering for the GATT mechanism: https://pypi.org/project/pygatt/. I tried it out, and it works. However, it also needs sudo, which makes it impossible to implement in my main code. Finally I settled for the
Indeed! And thanks again for your code and comments, which finally made this work for me!
Yet another Python solution: SimplePyBLE https://pypi.org/project/simplepyble/
giving such output:
Just one problem: once it runs integrated into my big code GeigerLog, it lasts for only 1 ... 100 sec before it crashes with a Segmentation fault! No idea what is going on :-((
I think that might have been the reason why I picked a very detached (or loosely coupled) method of getting the BLE data. on a desktop (amd64) and a rasp pi, the desktop seems to be very reliable using its motherboard BLE, but the pi often times out and is problematic. I think I tried a usb dongle with the pi as well as onboard BLE and neither was reliable. like you said, a long-ish buffer would run and then eventually something would overflow or underflow and the process would end. it's why I dislike dealing directly with BLE. if you are up for it, have you done any ESP32 BLE work? my goal, longer term, is to move all of this to the ESP chip, and that would be running as a REST server over wifi. the user would query the ESP32 via web/wifi and it would be in constant comms with the DMM via BLE, as a background ESP task. that would remove the pc and python and all that from the equation. I have the ESP and webserver stuff worked out but I have not yet tried to listen and decode BLE packets on the ESP.
if you need to depend on data collection, my view is that a data-supplier process would just handle fetching data at regular intervals using the messy BLE protocol and let the user query that via a very interoperable and simple web REST interface. if you can 'curl -s 192.168.1.123/get_value' and get the value, that's pretty much the most interoperable interface I can think of, and being connectionless it should be fast enough to fetch single values ('last value cached') from the ESP webserver side. I have more experience with the ESP8266 and that does not have BLE. in such a case, I'd even go so far as to have some BLE device that would listen and decode that BLE stuff and send it over ttl serial (115200 or 9600 speed). again, very easy to interoperate with that. but the ESP32 is a single chip that should be able to be an effective 'gateway'.
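As a rough sketch of that gateway idea, here is a stand-in in python: a tiny local HTTP server caches a "last reading" and a client fetches it with one connectionless GET, the way `curl -s .../get_value` would. The `/get_value` path and the JSON fields are made up for illustration; they are not an existing API of this project or of the planned ESP32 firmware.

```python
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical gateway state: the most recent decoded meter reading,
# which a real gateway would keep refreshing from BLE in the background.
LAST_READING = {"value": "3.123", "unit": "DC V", "ts": time.time()}

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/get_value":
            body = json.dumps(LAST_READING).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), GatewayHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: one short-lived GET per sample, no persistent connection.
url = f"http://127.0.0.1:{server.server_port}/get_value"
with urllib.request.urlopen(url) as resp:
    reading = json.loads(resp.read())
print(reading["value"], reading["unit"])
server.shutdown()
```

The appeal of this shape is exactly what is argued above: any client that can speak HTTP can consume the data, with no BLE stack on the consuming side.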
I can well relate to what you are saying. I have used the ESP32 a lot, and really do like it. But after a first look into BLE, I stayed away from it! I used WiFi instead, which works really well and is fast on the ESP32. I have now my
Well, I am back to your code, but before making a suggestion I'd like to point out a bug in your code:
There is a jump in your timestamp in the lines marked with |
As already said in the initial post, your code works well, is easy to use, and easy to share. However, using it with the single-value flag makes it awfully slow; too slow for me. The Python Works in principle. Except for these dreaded buffers. Python allows switching buffers off, or setting their size to any desired value - but your code did not care. It always insisted on filling a buffer of some 4k bytes, which took way too long. Eventually I resorted to modifying your code:
I believe the flush is the key element, but I am still wondering why your printf output wasn't pipeable? Anyway, this works, and I get results in 0.1(!) ... 200 ms, despite having to wade through the whole stdout pipeline. This example took 5 ms. I use a line as valid if the time stamps agree to a precision of 2 decimals (perhaps strange, but so far I always got a result):
I am wondering if you could modify the
I think that's just due to (again) buffering lag in stdout. I'm getting the timestamps from the unix system and I'm not doing anything to create the fractions; and the numbers are all monotonic as you'd expect (I have worked on car systems that actually really did go backwards in some timestamps, but this isn't that!). the fflush() call is useful and I will add it to all the printf's, since it helps keep the output stream more 'realtime' and less buffered. I have not checked; I don't know if calls to printf are variable in duration based on the state of the current buffer. I suppose you could background-call the printf function so that it returns immediately and does not slow the caller down. but I think that's trying to solve a problem that is better solved another way. mqtt or some other networked bus is the better solution. it avoids all the stdout nonsense and has the additional benefit of being networked and distributed. trying to fight with stdout is just not the best way to get 'fast realtime data'. depending on the speed of the network interconnect, it's very possible that writing to a network socket is going to have a faster response than writing to a buffered sequential data file on the local system. writing to mqtt should be fast enough and it's an easy drop-in for the printf replacement.
I do tend to use fflush when I need to override the system buffering. so what you did was good and I'll add that to the C code. but writing to stdout or a file and having something else 'tail -F' it is really inefficient. let's try mqtt and see how that works out.
I think that makes a lot of sense. the question is, where is the best place to put the 'get me the timestamp' call? you can grab the timestamp before the call, and then the duration of the call won't affect the TS. you can make the api call and then do as you say and block the TS call until the first char comes in. in a way, though, that's going to give you less accuracy in the 'true' timestamp. my view is that when the data comes out is less important; what's more important is that the timestamp be as close to the data fetch as you can get it.
Nope. It is a bug in the code. Look at this line: Can be fixed with
I am not tailing files, I am piping. In Python:
I believe this is just as efficient as MQTTing or WiFi-ing? But with less configuration needed.
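For illustration, a hedged sketch of that piping approach: the producer here is a `python -c` stand-in that mimics the CLI's flushed line-per-reading output (the real invocation is not shown above, so the command and the sample values are assumptions):

```python
import subprocess
import sys

# Stand-in producer that mimics the CLI printing one flushed line per
# reading. In real use this list would be the owon_multi_cli command.
producer = (
    "import time\n"
    "for v in ('2.935 DC V', '2.936 DC V', '2.937 DC V'):\n"
    "    print(v, flush=True)\n"
    "    time.sleep(0.05)\n"
)

readings = []
# bufsize=1 requests line buffering on our side of the pipe; the
# producer must still flush each line (fflush in C, flush=True in
# python), otherwise lines arrive in ~4 KiB chunks.
with subprocess.Popen([sys.executable, "-c", producer],
                      stdout=subprocess.PIPE, text=True, bufsize=1) as p:
    for line in p.stdout:
        readings.append(line.strip())

print(readings[-1])  # most recent value
```

This is indeed configuration-free compared to MQTT, at the cost of tying the consumer's lifetime to the producer process.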
I fully agree. The OW18E can do 3 ... 5 measurements per sec, so this gives a timing uncertainty of several 100 millisec. For anything time-wise more precise, a different instrument is needed. My idea on the OW18E: if the -1 flag is set, nothing at all is ever put out to stdout. Only upon seeing the char at stdin do you print the last available value to stdout (like: "2.935 DC V"). So this is never older than 0.5 sec. A timestamp is not even needed, as the sending program keeps track anyway.
Even easier: The consuming program simply reads from this file. Here I would use a human-readable time stamp as there might be stale data. I am using this now with Python code based on simplepyble, supposedly running on Linux, Windows, Mac. |
the problem with constantly over-writing the single data line in the file is that there's no sync between the writer and the reader. otoh, if the writer always appends and the reader blocks for any new data, you never miss anything, nothing is ever over-written, and each new data item has its own unique timestamp. the client has to check for timestamp changes in the log file (a stat call can do that). monitor for time changes of the logfile and also size changes. you can leave the file open for read and just bump up your filepointer when you notice the file has been appended to by the writer. if you really want single-line status, I still suggest a network-socket kind of thing, and mqtt is an easier way of dealing with network endpoints. you can register with mqtt and get notification of when new data arrives. it's message passing, which is the correct way (imho) to implement a loose coupling of data supplier and consumer.
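A minimal sketch of that append-and-follow reader, assuming a plain text log with one reading per line (the helper name and the log format are illustrative, not part of this project):

```python
import os
import tempfile

def read_new_lines(path, state):
    """Return lines appended to `path` since the last call.
    `state['offset']` holds the byte offset already consumed; a cheap
    stat() size check avoids reading when nothing has changed."""
    size = os.stat(path).st_size
    if size <= state["offset"]:
        return []
    with open(path, "r") as f:
        f.seek(state["offset"])
        chunk = f.read()
        state["offset"] = f.tell()
    return chunk.splitlines()

# Demo: the writer appends, the reader catches up without losing lines.
log = tempfile.NamedTemporaryFile("a", delete=False, suffix=".log")
state = {"offset": 0}

log.write("12:00:00.01 2.935 DC V\n"); log.flush()
print(read_new_lines(log.name, state))   # first batch: one line

log.write("12:00:00.34 2.936 DC V\n")
log.write("12:00:00.67 2.937 DC V\n"); log.flush()
print(read_new_lines(log.name, state))   # only the two new lines
```

Because the writer only appends and the reader only advances its offset, no reading is ever overwritten or consumed twice, which is exactly the sync argument made above.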
thanks, I updated the code to use the %02 prefix. the reason we have to combine them is that the struct does not have both in one field; you have to take the seconds and then add in the microseconds, but also format with the decimal point. 2 digits of fractional seconds is more than enough, and arguably even just a tenth is enough.
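The pitfall can be illustrated in python terms as well: the seconds and sub-second parts live in separate fields (like `tv_sec`/`tv_usec`), and joining them without zero padding turns .07 into .7, which is exactly the kind of backwards "jump" in the timestamps discussed above:

```python
# Two fields, as in struct timeval: whole seconds and microseconds.
sec, usec = 1700000005, 70000

# Keep two fractional digits (hundredths of a second).
hundredths = usec // 10000          # 70000 us -> 7 hundredths

buggy = f"{sec}.{hundredths}"       # ends in '.7'  (reads as 0.7 s!)
fixed = f"{sec}.{hundredths:02d}"   # ends in '.07' (correct 0.07 s)
print(buggy, fixed)
```

The C-side fix is the same idea: `%02ld` instead of `%2ld` (or `%ld`) for the fractional field.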
I've found, just from experience over the years, that keeping timestamps coupled with the measured data is so useful, you never want to remove or ignore the TS if you have it. let the app decide if it wants to ignore it, but you can always use it as a 'unique id' of sorts to realize when new data is there. and if your app is doing catch-ups in batches, it can find the last timestamp it saw, then read all the lines up to 'current' and then go back to sleep for the next batch. you never lose data and every datum is timestamped, regardless of where or when the client finally gets around to reading it.
another comment on timestamps; there are usually many sub-steps that are done in order to complete a full data 'row' fetch from a device. you get the request from the user (or a timer), and that's a timestamp. you send a request to the remote agent. it receives it. it queues it and asks its 'bus' for a hardware reply. the reply finally arrives in the agent. it sends it over the network. we finally get it. etc etc. the point is that it's useful to know as many timestamps as there are steps. this can help with debugging and performance tuning. you can always discard data you don't care about, later; but you can't recreate data you NOW want that you didn't previously store ;)

and so ideally, it would be great to timestamp the very source of the data (the meter). sadly, the meter never gives tuples of (time, value), so we have to timestamp it. the BLE stack is the first thing that touches the data that comes back from the meter. that would be the best place to grab the current time and send that along with the measurement to any client layers. if you can't (or don't want to) intercept things early in the BLE stack, then getting the timestamp from the caller of the BLE api is the next best thing. if you want to, you could try to measure the latency of the BLE stack and then subtract that out. but again, this is a hobbyist meter and we are likely trying to make it into something it really won't ever be. I have code that parses SCPI data from flukes and agilents and, again, the older ones don't have timestamps, so the best you can do is either the first char that comes in from rs232 or the last char. the last char is easier since it's just the last '\n' in the scpi packet and you know the whole data line is there and ready to 'action'.
I'm adding some code (testing it now) to support both sender and listener sides, using mqtt. the plan is to run one instance of this app in the background and it would relay all received DMM events as mqtt messages. the user can then run the app, this time in 'give me the last good data line' mode, and you call that when you need that 'tail -f' kind of feature to just grab the most recent valid reading. since this method would put the sender into a broadcast mqtt mode, you can have multiple listeners. one can consume all the events and log them to disk; or graph them, or just 'give me the last value'. and only 1 sender process that just keeps up with all received BLE events and sends them 'hot potato' to mqtt. that should be an acceptable transfer latency and no banging on the local disk just to get the last valid measurement.
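Since that mqtt mode is still being written, here is only a hedged sketch of what the per-reading message could look like; the topic name and the JSON fields are assumptions for illustration, not the app's actual format:

```python
import json
import time

# Assumed topic layout for the sender side; purely illustrative.
TOPIC = "owon/ow18e/reading"

def make_message(value, unit, ts=None):
    """Bundle one decoded reading into a JSON payload for publishing."""
    return json.dumps({
        "ts": ts if ts is not None else time.time(),  # sender-side timestamp
        "value": value,
        "unit": unit,
    })

payload = make_message("2.935", "DC V", ts=1700000000.07)
print(TOPIC, payload)
# A listener (e.g. via paho-mqtt's on_message callback) would just
# json.loads() each payload and keep the newest one as 'last good value'.
```

Keeping the timestamp inside the payload follows the point made earlier in the thread: every reading stays self-describing no matter when a listener gets around to it.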
On the topic of reading notifications with another python module than those listed above, this time with
I'll have to give that a try. I'm undecided about converting the main decode code to python. the reason is that I really want this to be running on esp32. if it runs on esp32 then you just need wifi and you can get values over the web REST interface. that totally removes any 'host' requirements. but I will try the python code. maybe one way to use it with the existing code is to kind of mimic the current cli command (that people seem to dislike). better yet: run the python in the background and use mqtt messages from the ble decoder (python side). on the decode side, it just has to register as an mqtt receiver and get all the messages the sender puts out. that would be a nice distributed architecture.
I think it makes sense to use a separate thread that is python and just prints the values in json, and to keep the decoder in PY as well.
I just added some early beta code that will run ENTIRELY on esp32! Feel free to give it a try. I'm going to make it web/rest style later; but for now it outputs to the serial console with a millis timestamp and ascii meter value.
I am using owon_multi_cli via subprocess in Python. It works, thx for the code! But some things create headaches.
I currently use this without sudo and have no problems. Can I rely on not needing sudo in the future, or does it depend on other settings? If I needed sudo, the use of owon_multi_cli would become impossible for me due to the need for inclusion in a bigger project. Could you explain when and why it could be required?
As I need it in Python, I am wondering whether it may now or in the future become available as a "Pure-Python" program?
I am using subprocess in this way:
Code is ok, but it runs for 1 ... 1.5 sec! This is long. Any option for making it shorter?
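As a way of seeing where the 1 ... 1.5 sec goes, one can time the full subprocess round trip; the command below is a `python -c` stand-in (an assumption for illustration), to be replaced with the actual owon_multi_cli invocation:

```python
import subprocess
import sys
import time

# Stand-in command that just emits one reading; substitute the real
# owon_multi_cli call here to profile the actual startup + BLE cost.
cmd = [sys.executable, "-c", "print('3.123 DC V')"]

t0 = time.perf_counter()
result = subprocess.run(cmd, capture_output=True, text=True, timeout=5)
elapsed = time.perf_counter() - t0

print(f"{result.stdout.strip()!r} in {elapsed * 1000:.0f} ms")
```

Timing around `subprocess.run` captures process startup, the device work, and the pipe read together, which is the full per-cycle cost a 1 sec polling loop has to absorb.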