[IDEA] Display last sent datetime on dashboard #6022
Comments
This is a good feature idea. While you wait on the feature to be implemented, I would like to suggest a workaround: you can use dynamic SQL to query all Mirth channels at once. You want to query every channel's message table, and the dynamic SQL lets you do this in one hit. You still have a load, but it's all in the DB. An example of how to do this in Postgres is https://gist.github.com/jonbartels/b961574b2043b628f1b0fd96f440179b and a reference for the DB schema is at http://mirthschema.diridium.com/
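The gist's exact code is linked above; as a hedged sketch of the general technique, the dynamic SQL approach can look something like the following, assuming a Postgres backend and the standard Mirth schema where `d_channels` maps `channel_id` to `local_channel_id` and each channel's messages live in a `d_m<local_channel_id>` table:

```sql
-- Hedged sketch (not the gist's actual code): build one UNION ALL query
-- covering every channel's message table. Table and column names assume
-- the standard Mirth Postgres schema.
SELECT string_agg(
         format(
           'SELECT %L AS channel_id, max(received_date) AS last_received FROM d_m%s',
           channel_id, local_channel_id
         ),
         ' UNION ALL '
       ) AS generated_sql
FROM d_channels;
-- Run the generated_sql text (via EXECUTE in a DO block, or from the calling
-- application) to get one "last received" row per channel in a single pass.
```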
I have something like that. If there is interest I can share the code. However, there is not yet a fancy configuration frontend, and it will take some time: it was quite a while ago that I created it, and I would have to find time to bring it into a shareable state.
@odoodo I would be interested. The system we are using we built about 20 years ago (we've upgraded parts over the years, but it's still very old). Add to that that the original developer, of course, is no longer here. We are going to rebuild it to be more modern, cohesive, and Mirth-specific once we are finished migrating from Cloverleaf to Mirth. So we would very much be interested in seeing how someone else does it before we start our effort.
@jonbartels very cool. I tried this in our test environment, and it was nice and quick. However, it ran for over a minute and a half in production, so I cancelled it. Do I need to do something to the DB in prod to make it more performant? (This could be why we are having other issues in production.)
I run this in PROD routinely; it's fast enough and has no impact for my deployments. A challenge with reporting on MC is that the tables are lightly indexed. This helps Mirth because it is INSERT/UPDATE heavy, but it hurts reporting. It could also be slow because of general DB tuning, size, I/O, etc. I can't really advise on that. Do you understand the strategy the query uses, so you can adapt it to get your "last activity time" report? It might be fast, since your use case has a lot less data to chew through.
I commented on your gist with my code. Both our queries are slow enough that I cancelled them without seeing how long they would actually take, but it was over a minute and a half.
https://gist.github.com/jonbartels/b961574b2043b628f1b0fd96f440179b#gistcomment-4794607 Ooh yeah, I see your update. Performance on that is going to be tough because PG will have to do a table scan. Since the primary key is sequential and indexed, change your query to order by message_id instead.
I think it's OK to be close enough; most of our channels are pretty active. I will look to tweak that Monday, but I have a feeling it'll still be slow, given that yours never completed either.
How many channels do you have, and what's your message volume? Is it a small box?
And I took that personally :P :P. I like tuning DB queries, and I took this as a fun challenge on Friday at 3:30 PM :D. I ran your query and just got the contents of the generated query.
Then for one of my largest tables on this system (approx 100k messages) I extracted the single query for that table and got the EXPLAIN plan:
On this box it yielded a cost of about 6,000. Then there's this simplified version, which orders by message_id and leverages an index scan:
It has a cost of 0.46. This is a LOT more efficient because it can leverage an index scan. The most recent message_id is, by definition, the most recently received message on the channel. Now the caveat: UNION ALL doesn't play nicely with ORDER BY unless you add some parentheses:
This ran in 30 ms on my test DB; yours took 90 ms. I think this will be VERY efficient even on larger PROD DBs with lots of messages or suboptimal tuning, since it leverages the index on message_id.
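The original queries and EXPLAIN output did not survive in this thread; as an illustrative reconstruction of the slow-versus-fast contrast described above (table and column names assume the standard Mirth `d_m<n>` layout):

```sql
-- Illustrative only, not the actual queries from the thread.

-- Slow form: max() over a timestamp with no index forces a sequential scan.
SELECT max(received_date) AS last_received FROM d_m1;

-- Fast form: id is the sequential primary key, so Postgres can walk the index
-- backwards and stop after one row. Note the parentheses: each UNION ALL
-- branch needs them in order to carry its own ORDER BY / LIMIT.
(SELECT 'channel-a' AS channel_id, received_date FROM d_m1 ORDER BY id DESC LIMIT 1)
UNION ALL
(SELECT 'channel-b' AS channel_id, received_date FROM d_m2 ORDER BY id DESC LIMIT 1);
```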
The modified query runs faster in my prod environment, as you predicted. Now I just need to re-adapt it to my needs.
I would do all this by calling the API, for portability between backend DBs... but anyway, see #6033; let's get generic about the ask on columns.
I assume you're talking about a future state? At the moment you cannot get the last activity quickly/easily for all channels, or am I missing it? The query we were working on is a stopgap, to improve what I currently have that is way too slow and intensive. Yes, the ideal in my mind is to have it come back with the dashboard stats, with the last activity written when counts are incremented. With your ticket #6033, you would need to update the last activity in the channel's code, right? Or is that already a $gc var?
@jonbartels It's funny: my first query did use LIMIT and ORDER BY, so it was more performant, but I looped over all the channels and made an individual query per channel, which is what made it so slow. Thanks for all the help; I'm hoping this will take some strain off the system. We were occasionally getting errors in Mirth around the every-15-minutes cron job because of how intensive it was.
You can get the last message ID per channel, then go get that message's information by the explicit message ID. It's not as fast as calling the DB directly, of course. In our case we already cache a list of channels and their stats in $g for other alerting reasons, so iterating over that object and calling the REST API is a simple add. There are other ways, of course, e.g. in a postprocessor per channel, populate a $g or $gc var with the last activity, then use those objects as the last activity timestamp. Not the greatest approach if you undeploy and deploy frequently, but a distinct advantage is that this is an in-memory construct and very fast to read. The feature request I linked to is an ask to make generic the ability to add columns to the dashboard to handle whatever use case you need.
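A hedged sketch of the per-channel postprocessor idea above: in Mirth the globalMap and channelId are supplied by the runtime, so here a plain object stands in for globalMap to make the logic runnable on its own. All names are illustrative, not Mirth's actual API.

```javascript
// Stand-in for Mirth's globalMap (illustrative, not the real implementation).
var globalMapStub = {
  store: {},
  put: function (key, value) { this.store[key] = value; },
  get: function (key) { return this.store[key]; }
};

// Record "now" as the last-activity time for the given channel; in Mirth this
// would be the body of a per-channel postprocessor script.
function recordLastActivity(map, channelId) {
  map.put('lastActivity-' + channelId, new Date().toISOString());
}

recordLastActivity(globalMapStub, 'example-channel-id');
console.log(globalMapStub.get('lastActivity-example-channel-id'));
```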
Yeah, I like the ability to add any gc/g value. The biggest annoyance is remembering to do it, and ensuring all devs follow the standard. Then again, if it's on the dashboard it'll jump out at you quicker.
I don't think that gc/g values are even available in the API, are they? My understanding was that the client uses the APIs internally to surface values, so before gc/g values could be displayed they'd need to be made available via the API.
I guess that last comment was covered in part with #6033 |
The $gc and $g vars are available because they already show up at the bottom of the screen. And even the context is available, since selecting a channel shows its $gc vars. So the ask is just to bubble those up on demand as dashboard columns and let us all have at it for what we would like to put on a dashboard.
Interesting. I hadn't seen $g and $gc available in any of the APIs. I'll have to do some more digging to see which endpoints are responsible for surfacing that data at the bottom of MCA. |
I've updated the title and request body to more clearly state that I'm interested in the last sent datetime, and that it needs to be at the connector level. Over the past couple of days, I've tried several solutions. We can write a performant query at the channel level, but at the connector level, even with added indexes, we can't get it to be performant. Throwing something in $g or $gc could potentially work, but it requires code to be maintained on every connector, which is less than ideal. For now, I will probably tweak my process to run continuously, with pauses between queries, instead of running it all in a custom API I wrote that does it all synchronously on demand. This is just too much work to do without caching. We get errors like
From my point of view, this is the way to go. And no, it is not necessary to maintain connector-specific code.
Can you clarify your comment? I'm not sure what you're saying. What is the way to go? $gc? And if using that, how would it not be code on every connector?
Sure, please find enclosed an implementation that records the time and type of the last event for each connector. How does it work?
How to install
Download

Additional Info

As I wrote above, I'll share the code in case of interest, but I first have to find time to review it to ensure everything is in a proper state and does not contain sensitive information.
@odoodo Dumb question, but can you post the lines where you call these methods? I've never gotten the processing state, etc. Update: Never mind, I see it in the Channel Hook code.
Yeah, there's no need to call anything specific. Just make sure there is something in the global postprocessor; this automatically executes the channel hook template when a message has been processed by any channel.
Here is our code template. It takes 0.04837 milliseconds to run, so it's very performant:
And our postprocessor
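The actual code template and postprocessor were not preserved in this thread; the following is a hedged reconstruction of the idea only. It keeps a map of "channelId/connectorName" to a last-activity entry; in Mirth this map would live in globalMap and the function would be called from the global postprocessor. Names and structure are illustrative.

```javascript
// Record the status and time of the last message a connector processed,
// keyed by "channelId/connectorName". Illustrative sketch, not the thread's
// actual code template.
function recordConnectorActivity(activityMap, channelId, connectorName, status) {
  activityMap[channelId + '/' + connectorName] = {
    status: status,                 // e.g. 'SENT', 'QUEUED'
    time: new Date().toISOString()  // when this connector last saw a message
  };
  return activityMap;
}

var activity = {};
recordConnectorActivity(activity, 'adt-inbound', 'Destination 1', 'SENT');
console.log(JSON.stringify(activity));
```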
Then I have a cache/pruner channel that I'm still working on, which just seeks to clean up the globalMap variable when channels or connectors are deleted, and to write the JSON to disk so that when we restart Mirth, it doesn't need to accumulate all the times again; it starts from the cache. This runs every 5 minutes and takes less than a second. Next, I have a custom API that will delete the cache and the in-memory object. And lastly, I have a custom API that returns the globalMap as JSON; it transforms the JSON into a nicer format for consumption. We are currently getting a concurrency issue writing to the globalMap, about which I have a ticket open.
So if anyone has ideas there, that would be awesome. Note:
One caveat with using the global postprocessor is that it will not populate any sent dates when the destination is set to queue always. Unfortunately, this is typically what we do. I'm assuming that's because once the message is added to the queue, a different thread handles the sending of the transaction, so it's asynchronous. Is there any postprocessor script that runs after a destination sends a message, or would we need to call a code template on every destination that is set this way, in order to log the last sent date?
Is your feature request related to a problem? Please describe.
We have built monitoring around our integrations, and part of this is to see when an interface last sent a message. Certain channels can be inactive for 4 hours and that's OK; other interfaces expect activity within every 15 minutes. To accomplish this, our monitoring layer matches our thresholds against the last sent date. To get the last sent datetime, I need to use the database or an API like
/channels/{channelId}/messages
. In both cases, I need to make a query per channel. We are currently at 228 channels, but this will increase over time. This is a relatively easy win, but it would greatly improve our support for our customers.
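The per-channel polling loop described above can be sketched roughly as follows. The endpoint path comes from the text; the base URL, channel IDs, and the query parameter are illustrative assumptions, so check the Mirth Client API documentation for the real parameter names.

```javascript
// Build the per-channel "latest message" request URL; one HTTP call per
// channel is what makes this approach expensive at 228+ channels.
function lastMessageUrl(baseUrl, channelId) {
  return baseUrl + '/channels/' + encodeURIComponent(channelId) + '/messages?limit=1';
}

var channelIds = ['adt-inbound', 'oru-outbound']; // illustrative channel IDs
var urls = channelIds.map(function (id) {
  return lastMessageUrl('https://mirth.example.com/api', id);
});
console.log(urls);
```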
Describe your use case
We use our monitoring layer to be able to identify potential issues where we expect data to/from a connection, but the date isn't recent. Often we are more on top of our integration and support than our vendors. Our operations team can contact the vendor and resolve an issue prior to our customers noticing.
Describe the solution you'd like
When each transaction is written to the DB, Mirth already increments the counts on the dashboard related to that channel. We are requesting an additional field, the sent datetime, returned with the stats on the API
/channels/statuses
. We need this to be at the connector level, because while the source may have received data recently, it could be filtered out before reaching a destination.
Describe alternatives you've considered
Currently, I've written my own API that first gets a list of channels and then loops over each channel to get the last message with a status of q(ueued), s(ent), or t(ransformed), building a map of channel ID to sent datetime. I do believe I would be satisfied with JUST the sent datetime, but maybe you have a datetime for each status? I'm not sure if others need those other values; I don't personally. It's way more intensive than it needs to be. We run this every 15 minutes, and while it doesn't take too long (a couple of minutes), pulling a dashboard stat would be much faster and less of a load on the server.
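The per-channel query behind this alternative can be sketched as below, assuming the standard Mirth `d_mm<n>` connector-message tables where status is stored as a single character (this is a hedged illustration, not the actual API's query):

```sql
-- Latest queued/sent/transformed connector message for one channel's table.
-- Repeated once per channel, which is what makes the approach intensive.
SELECT received_date
  FROM d_mm1
 WHERE status IN ('Q', 'S', 'T')
 ORDER BY message_id DESC
 LIMIT 1;
```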