
V2 Architecture proposal (continuation of #111) #150

Closed
cgewecke opened this issue Nov 13, 2017 · 14 comments

cgewecke (Member) commented Nov 13, 2017

Since @area and @onbjerg's discussion of an overhaul in #111, I've done some work on a gas analytics mocha plugin for Truffle and believe a similar pattern for solidity-coverage might resolve some of the technical challenges noted there.

Third-party mocha reporters provide hooks for every part of the test cycle. Operating within one would allow us to escape the issues presented by Truffle's chain reversion. Reporters also run inside Truffle's execution context, so the web3 and artifacts objects are globally exposed there.

In this model, solidity-coverage might look as follows:

Preparation

  • Using abi-decoder, create a map of function signatures to file names / paths. (Example here)
  • Using the Context object developed for the truffle debugger, create a program-counter to line-column position map for each file.
  • Using logic @area wrote as the core of the current implementation, generate an AST based map of 'points of interest' to line-column position data for each file.

Runtime: For each test, block, transaction:

  • Identify the file being run by extracting the function signature from the tx's input field and locating it in the table we've constructed with abi-decoder.
  • Call debug_traceTransaction and collect the opcode/program counter data for the tx.
  • Discover and record the meaning of the executed instructions via the file's Context and AST maps.
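A minimal sketch of the first runtime step, assuming the selector table has already been precomputed with abi-decoder. The table entries and the tx shape are hypothetical:

```javascript
// Hypothetical selector-to-source table, precomputed with abi-decoder.
const selectorToSource = {
  '0xa9059cbb': { contract: 'Token', file: 'contracts/Token.sol' }
};

// The function signature is the first four bytes of the tx's input
// field: '0x' plus eight hex characters.
function sourceForTx(tx) {
  const selector = tx.input.slice(0, 10);
  return selectorToSource[selector] || null;
}

const hit = sourceForTx({ input: '0xa9059cbb' + '00'.repeat(64) });
const miss = sourceForTx({ input: '0xdeadbeef' + '00'.repeat(64) });
// hit resolves to the Token entry; miss is null.
```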

Some issues to consider:

  • assert and require will need special processing. We can easily identify regular conditional branches because they occupy different physical space in the text files. But branching for these built-ins will have to be detected in another way.
  • view and pure: We could pre-process the files by stripping these modifiers off and running all methods as transactions. That's a little hacky, and it means explicit uses of the .call postfix would not get covered. It might be nice to press Truffle and testrpc to implement tracing for .call, since it would be useful for their debugger anyway; not sure how to actually do that, though.
  • We will need to throw away our test suite and find a new way of validating the tool.
  • We will be a Truffle plugin and look like this:
module.exports = {
  networks: {
    ...etc...
  },
  mocha: {
    reporter: 'solidity-coverage'
  }
};

Overall benefits:

  • No forks or special dependencies, not even a parser: we should use the solc AST provided by the Truffle artifact because it guarantees the column position data is identical, among other things.
  • Resolution of most bugs.
  • Long term stability, close integration with solc etc.

Please rip this apart. Still blue-skies as far as I am concerned.

cc: #146

area (Contributor) commented Nov 14, 2017

Losing the ability to spot when .call()s have been made would be my biggest objection to an approach like this. I agree that being more closely integrated would be a huge benefit, but I don't (personally) think it's worth the loss of coverage...

cgewecke (Member Author) commented:

@area Yes, it's a big problem. The thing is that it's not clear any approach apart from event injection provides a way to cover calls. For example, if we had the vm object and could listen to its step event, we would collect call execution data but have no deterministic way of associating it with a specific file, because the step only has the opcode and program counter.

Do you have any ideas about how this might be done?

cgewecke (Member Author) commented Nov 14, 2017

Thought about this a little more, and one possibility is to identify .calls by parsing the text files and flagging them in our program-counter-to-source maps. Then listen to the vm step and try to match that way?

Highly speculative but maybe we could piggyback on a proposal to implement websockets at testrpc (necessary for web3 1.0 anyway) and ask for / add the ability to subscribe to the vm step as an all-purpose trace.

cgewecke (Member Author) commented Nov 14, 2017

Tangentially: Remix has some great code for handling the solc AST / doing source-map stuff. Eminently steal-able.

area (Contributor) commented Nov 15, 2017

The thing is that it's not clear any approach apart from event injection provides a way to cover calls. For example, if we had the vm object and could listen to its step event, we would collect call execution data but have no deterministic way of associating it with a specific file, because the step only has the opcode and program counter.

Do you have any ideas about how this might be done?

Unless I'm misunderstanding what you're saying, this must be possible, otherwise something like Remix wouldn't be able to step through inherited contracts, right?

cgewecke (Member Author) commented:

It's more likely that I am misunderstanding something, but the case I'm thinking of is a suite of tests where there are contracts that interact but don't inherit from each other. Aren't their program counters separate? When listening to the step, how would we discern which program was running?

For example suppose I have a token and an application which consumes it. In my tests I would frequently run variants of app.consumeToken() and then token.balanceOf.call() and app.whatHappened.call().

Another wrinkle in the plan above is that the abi-decoder strategy for identifying file context is inadequate. The Ethereum docs for the ABI define the function signature as follows:

The first four bytes of the call data for a function call specifies the function to be called. It is the first (left, high-order in big-endian) four bytes of the Keccak (SHA-3) hash of the signature of the function. The signature is defined as the canonical expression of the basic prototype, i.e. the function name with the parenthesised list of parameter types. Parameter types are split by a single comma - no spaces are used.

I think this means unrelated contracts with methods that have identical signatures will need a second criterion to differentiate them as well.
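To illustrate the collision, here is a sketch of the canonical signature described in the quoted docs. The Keccak hashing step is elided, and the contract names are made up:

```javascript
// The canonical signature is the function name plus the parenthesised,
// comma-separated parameter types with no spaces. The selector is the
// first four bytes of the Keccak hash of this string (hashing elided).
function canonicalSignature(name, paramTypes) {
  return `${name}(${paramTypes.join(',')})`;
}

// Two unrelated contracts (say, a Token and a Registry) can both expose
// transfer(address,uint256); their canonical signatures, and therefore
// their selectors, are identical:
const fromToken = canonicalSignature('transfer', ['address', 'uint256']);
const fromRegistry = canonicalSignature('transfer', ['address', 'uint256']);
// fromToken === fromRegistry, so the selector alone cannot tell the
// two files apart; a second criterion (e.g. the contract address) is
// needed.
```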

cgewecke (Member Author) commented Nov 15, 2017

Update: dug into this a bit, and actually the step has the contract address at .address (it's a Buffer). I was a little confused by the docs there, unfortunately. So that's really, really helpful: as long as we can capture deployments and associate them with the build binaries, we know the execution context.
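Context resolution could then look like this sketch, assuming an address-to-name map built from captured deployments. The address and contract name are invented:

```javascript
// Hypothetical address book, built up as deployments are captured.
const addressToContract = {
  '0x1234567890123456789012345678901234567890': 'Token'
};

// step.address is a Buffer holding the executing contract's address;
// convert it to a hex string and look it up.
function contractForStep(step) {
  const hex = '0x' + step.address.toString('hex');
  return addressToContract[hex] || null;
}

const step = {
  address: Buffer.from('1234567890123456789012345678901234567890', 'hex')
};
const context = contractForStep(step);
// context resolves to 'Token' for this step.
```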

area (Contributor) commented Nov 16, 2017

Tests can generate contracts outside of those deployment steps though, which wouldn't be captured. I think we'd have to look at the code at the address and figure out which contract it related to.

cgewecke (Member Author) commented Nov 16, 2017

Yes, so the algorithm would be:

  1. Collect all the deployed addresses from the artifacts object and associate each with a contract name.
  2. For each contract creation tx, get the address, regex-match its code (or input) to an artifact binary, and associate it with a name. (There's a little trickiness in here because Truffle's binaries sometimes have weird link placeholders for libraries, but it's definitely doable.)
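Step 2 might be sketched as below. The matcher treats each library link placeholder as a wildcard for a 40-hex-char linked address; the binaries shown are shortened, made-up hex, not real bytecode.

```javascript
// Sketch: match a creation tx's code against an artifact binary that
// may contain Truffle's library link placeholders (runs like
// '__SafeMath______' standing in for a linked library address).
function binaryMatches(artifactBinary, deployedInput) {
  const pattern = artifactBinary
    // Escape any regex metacharacters in the binary string.
    .replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
    // Turn each placeholder into a wildcard for the 40 hex chars of a
    // linked library address.
    .replace(/__[A-Za-z0-9]+_+/g, '[0-9a-f]{40}');
  return new RegExp('^' + pattern).test(deployedInput);
}

// Shortened, invented binaries for illustration:
const artifact = '60606040__SafeMath______6000f3';
const linked = '60606040' + 'ab'.repeat(20) + '6000f3';
const matched = binaryMatches(artifact, linked);
```

A prefix match (rather than full equality) is used because creation input typically carries encoded constructor arguments after the binary.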

We have a catch-22, though: to map contract addresses to binaries we need the artifacts object; to have the artifacts object, we have to sneak into Truffle's execution context as a reporter. In there it seems difficult to launch testrpc / get the vm ourselves; it's too late. Can you think of a way around this?

Some possibilities:

  • Ask truffle to globally expose the vm here (this would lock us into the develop context).
  • Ask testrpc to implement websockets and add an endpoint that allows anyone to subscribe to the vm step.
  • Somehow reproduce truffle's artifacts implementation ourselves, and don't sneak in.

davesag commented Nov 17, 2017

Tests can generate contracts outside of those deployment steps though, which wouldn't be captured. I think we'd have to look at the code at the address and figure out which contract it related to.

Yes. Many of my contracts spawn new contracts themselves and are not deployed during Truffle's initial migration.

Here's an example.

const { getContract } = require('../utils/txHelpers')

const Stigmata = artifacts.require('./Stigmata.sol')
const PalmerEldritch = artifacts.require('./PalmerEldritch.sol')

/**
 *  Stigmata contracts are created by PalmerEldritch contracts
 *  and share a common owner
 */
contract('PalmerEldritch', () => {
  let owner
  let eldritch
  let stigmata

  before(async () => {
    eldritch = await PalmerEldritch.deployed()
    owner = await eldritch.owner()
    const tx = await eldritch.createStigmata({ from: owner })
    stigmata = getContract(tx, 'StigmataCreated', 'stigmata', Stigmata)
  })

  context('Stigmata', () => {
    const expected = 3

    it('count is 3', async () => {
      const count = await stigmata.count()
      assert.equal(count, expected)
    })
  })
})

cgewecke (Member Author) commented:

Closing. Starting a new repo in the org, v2, to pursue this approach. ganache-core is open to receiving a PR for a websocket subscription to the vm step, so this seems viable. The third-party reporter architecture, with its event hooks, will still be necessary for end-running revert as we collect new contract creation events from the blocks. It also gives us a hook for detecting the end of test execution and accessing the global artifacts object.

cgewecke (Member Author) commented Mar 2, 2018

Engineers at 0xProject are pursuing a variant of this here.

cgewecke (Member Author) commented Mar 10, 2018

Have been thinking about this, and I will re-open here. The first step we should take is to develop a PR that deprecates solidity-parser-sc and uses the new solc AST available on the Truffle artifact. This will allow us to validate that piece of the work against our E2E CI target, and we'd be able to port it over to V2 with a high degree of confidence. It's not even necessary to merge this PR; it could be purely experimental.

Second: I was fortunate enough to have a long conversation this week with @seesemichaelj, who is working on a debugger for Augur. TL;DR: what we've proposed in this thread is computationally intensive. We have to be very conservative about how many websocket calls we make and try to be smart about how we tick off hits in the AST. His effort suggests that one design possibility is to locate the work at Ganache (i.e. we would probably still have a fork). Another is to pass over the opcodes block by block and run all of our processing async, so that it gets done while the tests run.

Finally, the work mapping the solc sourceMap to actual source files is non-trivial. There's a lot of weirdness and many edge cases there: issues around optimized opcodes being uncorrelated with files, detecting runtime execution of contracts that are 'newed' within Solidity, etc.
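Expanding the compressed map string is at least the mechanical part of that work; here is a sketch of the decompression, where the sample map string is invented and real maps carry one entry per instruction:

```javascript
// solc's sourceMap is a ';'-separated list of entries of the form
// 'start:length:fileIndex:jumpType'; omitted fields (including wholly
// empty entries) repeat the previous entry's value.
function decodeSourceMap(sourceMap) {
  const entries = [];
  let prev = { start: 0, length: 0, file: 0, jump: '-' };
  for (const raw of sourceMap.split(';')) {
    const parts = raw.split(':');
    const entry = {
      start: parts[0] ? parseInt(parts[0], 10) : prev.start,
      length: parts[1] ? parseInt(parts[1], 10) : prev.length,
      file: parts[2] ? parseInt(parts[2], 10) : prev.file,
      jump: parts[3] ? parts[3] : prev.jump
    };
    entries.push(entry);
    prev = entry;
  }
  return entries;
}

// Invented example: entry 3 inherits everything from entry 2; entry 4
// overrides only length and file index.
const decoded = decodeSourceMap('0:45:0:-;10:5;;:8:1');
```

The genuinely hard part, as noted above, is relating these decoded ranges back to specific files and AST nodes, which this sketch does not attempt.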

Anyone interested in these topics? Weigh in.

I also sent a letter on Gitter to Leon Logvinov inviting him to pursue his work here, or in some other organization that might be like a group of engineers generally interested in pursuing open-source, real-time trace analytics tools. I did not receive a response, but he's quite busy, so that's understandable :)

Re-opening.

cgewecke (Member Author) commented:

Closing in favor of #345.
