
Resolved issue around inability to evaluate and overflow in sigmoid; also added a few lines missed in last night's merge. #8


Open
wants to merge 25 commits into master

Conversation


@xtr33me xtr33me commented Jul 23, 2018

I was always getting a profit of 0 when evaluating the model. This was primarily because a "Buy" never occurred, so agent.inventory was always empty. I changed it so that a buy occurs on the first iteration and the model picks up from there. In a future adjustment, we could let the model infer the best time to buy based on the sliding window, or some other means. For now, this at least allows evaluation to run on other datasets.
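A minimal sketch of the evaluation-loop change described above, assuming a q-trader-style loop where action 1 is "Buy" and action 2 is "Sell". The names (`evaluate`, `agent.act`, `agent.inventory`) are illustrative assumptions, not the exact diff in this PR:

```python
def evaluate(agent, data):
    """Walk the price series, forcing a "Buy" at t == 0 so that
    agent.inventory is never empty and a profit can be realized.
    (Sketch only; actions: 1 = buy, 2 = sell, anything else = sit.)"""
    total_profit = 0.0
    for t, price in enumerate(data):
        action = agent.act(price)
        if t == 0:
            action = 1  # force the first action to be a buy
        if action == 1:                        # buy
            agent.inventory.append(price)
        elif action == 2 and agent.inventory:  # sell earliest position
            total_profit += price - agent.inventory.pop(0)
    return total_profit
```

Without the forced first buy, an agent that never emits action 1 can only sit or attempt sells against an empty inventory, so total profit stays at 0.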

Sigmoid was also overflowing when gamma (x) was larger than what math.exp could handle, which on my system was around 700. The implementation used is one I found on Stack Overflow.
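The standard numerically stable form (widely shown on Stack Overflow, and likely what the PR uses, though the exact code may differ) branches on the sign of x so the exponent passed to math.exp is never positive:

```python
import math

def sigmoid(x):
    """Numerically stable sigmoid; plain 1/(1+exp(-x)) overflows
    in math.exp once |x| exceeds roughly 700 on typical doubles."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    # For negative x, rewrite as e^x / (1 + e^x) so the exponent
    # stays non-positive and exp() can only underflow to 0, not overflow.
    z = math.exp(x)
    return z / (1.0 + z)
```

For very large |x| the underflow simply rounds the result to 1.0 or 0.0, which is the correct limit.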

@madytyoo madytyoo mentioned this pull request Aug 2, 2018
xtr33me and others added 3 commits August 6, 2018 10:23
…ill have to build on this further once I have a better understanding of the keras Tensorboard implementation
…wdown. Will have to find a better way of graphing for keras in the future.
@alanyuwenche

Thanks for sharing.

This modification tackles the zero profit caused by a "Buy" never occurring, but I don't think it solves the core problem: why can't a trained agent take a proper (buy) action even on its own training data?
I ran into this while building a sell agent (code attached). In the original code the agent must hold a "Buy" position before it can take a "Sell" action; likewise, we can easily modify the code to build an agent that must open a "Sell" position first. But despite many attempts, the agent only ever takes "Buy" actions, even when I forced it to sell on the first step. However good its performance looks during training, it does not seem to transfer to evaluation.

If we can make this work, the example would really show how to handle the "Environment", which is usually quite difficult to model in financial markets.
agent_sell.zip
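The sell-first variant described above can be sketched as a mirror image of the buy-first fix: open a short at t == 0 and let later buys cover it. This is an illustrative guess at what agent_sell.zip does (the names and action encoding are assumptions), not its actual contents:

```python
def evaluate_sell_first(agent, data):
    """Mirror of the forced-first-buy fix: force a "Sell" (open a
    short) at t == 0 so the agent always holds a position it can
    later cover. Actions assumed: 1 = buy to cover, 2 = sell/short."""
    total_profit = 0.0
    shorts = []  # entry prices of open short positions
    for t, price in enumerate(data):
        action = agent.act(price)
        if t == 0:
            action = 2  # force an initial sell (open a short)
        if action == 2:              # sell: open a short position
            shorts.append(price)
        elif action == 1 and shorts:  # buy: cover earliest short
            total_profit += shorts.pop(0) - price
    return total_profit
```

If the trained policy collapses to always emitting "Buy", as reported above, this loop will at least realize the forced short's profit or loss, but it cannot fix the underlying failure of the policy to transfer from training to evaluation.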

5 participants