[WIP: to be split up] Improving readability and reliability: cross-refs, hyperref, grammar, clarifications, extra references, URLs added for existing refs; list of maths symbols; fixed formatting e.g. subscripts to \mathrm font; archive links; readme: omitted yellowpaper.io and added more details e.g. on how to build. #401

Open

wants to merge 147 commits into base: master
Conversation

jamesray1
Copy link
Contributor

@jamesray1 jamesray1 commented Jan 6, 2018

This is in the process of being split up; much has already been merged. Scroll down to the bottom to see the latest bookmark in the diff; everything after it still needs to be merged. To avoid merge conflicts, I will wait until existing PRs have been merged or closed, then make one PR at a time.

The following checklist is outdated.

Replaces #376.

  • >94 total commits. >7 split off. List of split-off commits (details are below): 1-3, 4, 5, 6, 7, 8, 10, 11, 21.

Aug 10 2017

Sep 12

Sep 16

Sep 19

  • 11: see 7. Hyperref. 3feff95. Superseded by 21.
  • 12:
  • 13
  • 14

Sep 21

  • 15.
  • 16
  • 17

Sep 22 (8)

Sep 23 (9)

  • 26.
  • 27
  • 28
  • 29
  • 30
  • 31
  • 32
  • 33
  • 34

Sep 24 (4)

Sep 26 (3)

  • 39:
  • 40
  • 41

Sep 28

  • 42:

Sep 29 (4)

  • 43:
  • 44
  • 45
  • 46

Sep 30 (4)

  • 47:
  • 48
  • 49
  • 50

Oct 1 (9+5 = 14)

  • 51:
  • 52
  • 53
  • 54
  • 55
  • 56
  • 57
  • 58
  • 59
  • 60
  • 61
  • 62
  • 63
  • 64

Oct 2 (14)

  • 65:
  • 66
  • 67
  • 68
  • 69
  • 70
  • 71
  • 72
  • 73
  • 74
  • 75
  • 76
  • 78

Oct 5 (9)

  • 79:
  • 80
  • 81
  • 82
  • 83
  • 84
  • 85
  • 86
  • 87

Oct 6 (5)

  • 88: Oct 6 as above for the commit 8 point 54036a3.
  • 89: Oct 6 as above for the commit 8 point e55a5f6.
  • 90
  • 91
  • 92

Oct 14

  • 93

Oct 15 2017

jamesray1 and others added 30 commits August 10, 2017 11:27
"Another way is to use special field url and make bibliography style recognise it.

url = "http://www.example.com"

You need to use \usepackage{url} in the first case or \usepackage{hyperref} in the second case."

Source: https://en.wikibooks.org/wiki/LaTeX/Bibliography_Management#Authors
For the added refs, see ethereum#335
For Appendix F:
- ECDSAWikipedia and ECDSAcerticom for "we assert the functions ECDSASIGN, ECDSARESTORE and ECDSAPUBKEY. These are formally defined in the literature." ethereum#335 (comment)
- secp256k1BitcoinWiki2016 and secp256k1StackExchange2014 for Secp256k1. ethereum#335 (comment)
- npmElectrum2017 for Electrum style signatures. ethereum#335 (comment)
This does need to be added, but commenting out for now as I'm not sure how. Will look into it later.
To avoid complexity with adding a footnote in a math environment.
Formula 210 has ECDSARECOVER. For consistency, I think ECDSARESTORE should be changed to ECDSARECOVER. A CTRL+F search for ECDSARECOVER returns 5 results, while ECDSARESTORE returns only one: the sentence "We assert the functions ECDSASIGN, ECDSARESTORE and ECDSAPUBKEY", even though the functions actually asserted afterwards are ECDSASIGN, ECDSARECOVER and ECDSAPUBKEY.
Prepended 'the' before countable definite articles, 'an' before indefinite articles, rewording to avoid stringing too many nouns in succession, etc. Added the top/first word/byte, e.g. instead of "Save word to memory", changed to "Save the first word to memory" (implying the first word of the stack, as per the following formula).
Complete sentences, NOT operation
For "Missing $ inserted" in the failed Travis build starting from 098adb8 (where the citation to the URL was introduced), I found this.
To prevent builds failing that have underscores in URLs.
To get URLs to break at the end of a column.
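The url/hyperref setup these commit messages describe can be sketched as a minimal preamble. This is illustrative only; the yellowpaper's actual preamble and options may differ.

```latex
% Illustrative preamble sketch, not the yellowpaper's actual one.
\documentclass{article}
\usepackage[hyphens]{url}  % lets long URLs break at hyphens near the column edge
\usepackage{hyperref}      % makes url fields in .bib entries clickable

\begin{document}
% \url escapes special characters such as _, avoiding the
% "Missing $ inserted" error that a raw underscore would trigger.
See \url{https://en.wikibooks.org/wiki/LaTeX/Bibliography_Management#Authors}.
\end{document}
```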
\end{equation}

Where $H_{\hcancel{n}}$ is the new block's header but \textit{without} the nonce and mix-hash components; $H_n$ is the nonce of the header; $\mathbf{d}$ is a large data set needed to compute the mixHash and $H_d$ is the new block's difficulty value (i.e. the block difficulty from section \ref{ch:ghost}). $\mathtt{PoW}$ is the proof-of-work function which evaluates to an array with the first item being the mixHash and the second item being a pseudo-random number cryptographically dependent on $H$ and $\mathbf{d}$. The underlying algorithm is called Ethash and is described below.
Where $H_{\hyperlink{h cancel n}{\hcancel{n}}}$ is the new block's header but \textit{without} the nonce and mix-hash components; $\hyperlink{H n}{H_{\mathrm{n}}}$ is the nonce of the block header; $\mathbf{d}$ is a large data set needed to compute the mixHash and $\hyperlink{H d}{H_{\mathrm{d}}}$ is the new block's difficulty value (i.e. the block difficulty from section \ref{ch:ghost}). $\mathtt{PoW}$ is the proof-of-work function which evaluates to an array with the first item being the mixHash and the second item being the \hyperlink{block nonce}{block nonce}, a pseudo-random number cryptographically dependent on $H$ and $\mathbf{d}$. The underlying algorithm is called Ethash and is described below.

Just linked previously, not necessary.

\subsubsection{Ethash}
Ethash is the PoW algorithm for Ethereum 1.0. It is the latest version of Dagger-Hashimoto, introduced by \cite{dagger} and \cite{hashimoto}, although it can no longer appropriately be called that since many of the original features of both algorithms have been drastically changed in the last month of research and development. The general route that the algorithm takes is as follows:
Ethash is the PoW algorithm for Ethereum \textit{Frontier} and \textit{Homestead}. It is the latest version of Dagger-Hashimoto, introduced by \cite{dagger} and \cite{hashimoto}, although it can no longer appropriately be called that since many of the original features of both algorithms were drastically changed with R\&D from February 2015 until May 4 2015 (\cite{commitdateforEthash}). The general route that the algorithm takes is as follows:

title = "Mist release 0.8.0",
}

@misc{commitdateforEthash,


Mining involves grabbing random slices of the dataset and hashing them together. Verification can be done with low memory by using the cache to regenerate the specific pieces of the dataset that you need, so you only need to store the cache. The large dataset is updated once every $J_{epoch}$ blocks, so the vast majority of a miner's effort will be reading the dataset, not making changes to it. The mentioned parameters as well as the algorithm is explained in detail in appendix \ref{app:ethash}.
Mining involves grabbing random slices of the dataset and hashing them together. Verification can be done with low memory by using the cache to regenerate the specific pieces of the dataset that you need, so you only need to store the cache. The large dataset is updated once every \hyperlink{Jepoch}{$J_{\mathrm{epoch}}$} blocks, so the vast majority of a miner's effort will be reading the dataset, not making changes to it. The mentioned parameters, as well as the algorithm, are explained in detail in appendix \ref{app:ethash}.

@@ -1117,21 +1182,21 @@ \subsection{Data Feeds}
The general pattern involves a single contract within Ethereum which, when given a message call, replies with some timely information concerning an external phenomenon. An example might be the local temperature of New York City. This would be implemented as a contract that returned that value of some known point in storage. Of course this point in storage must be maintained with the correct such temperature, and thus the second part of the pattern would be for an external server to run an Ethereum node, and immediately on discovery of a new block, creates a new valid transaction, sent to the contract, updating said value in storage. The contract's code would accept such updates only from the identity contained on said server.

\subsection{Random Numbers}
Providing random numbers within a deterministic system is, naturally, an impossible task. However, we can approximate with pseudo-random numbers by utilising data which is generally unknowable at the time of transacting. Such data might include the block's hash, the block's timestamp and the block's beneficiary address. In order to make it hard for malicious miner to control those values, one should use the {\small BLOCKHASH} operation in order to use hashes of the previous 256 blocks as pseudo-random numbers. For a series of such numbers, a trivial solution would be to add some constant amount and hashing the result.
Providing random numbers within a deterministic system is, naturally, an impossible task. However, we can approximate with pseudo-random numbers by utilising data which is generally unknowable at the time of transacting. Such data might include the block's hash, the block's timestamp and the block's beneficiary address. In order to make it hard for a malicious miner to control those values, one should use the {\small \hyperlink{blockhash}{BLOCKHASH}} operation in order to use hashes of the previous 256 blocks as pseudo-random numbers. For a series of such numbers, a trivial solution would be to add some constant amount and hash the result.
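The "add a constant amount and hash" series described above can be sketched as follows. This is an illustrative sketch, not the paper's method: `hashlib.sha3_256` (NIST SHA-3) stands in for Ethereum's Keccak-256, which differs in padding, and the fixed seed stands in for a real value returned by BLOCKHASH.

```python
import hashlib

def pseudo_random_series(seed: bytes, n: int) -> list:
    """Derive n pseudo-random numbers from one seed by adding a
    constant (here, a counter) and hashing the result each time."""
    out = []
    for i in range(n):
        digest = hashlib.sha3_256(seed + i.to_bytes(32, "big")).digest()
        out.append(int.from_bytes(digest, "big"))
    return out

# the seed would, on-chain, be one of the previous 256 block hashes
series = pseudo_random_series(b"\x01" * 32, 3)
```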


Blockchain consolidation could be used in order to reduce the amount of blocks a client would need to download to act as a full, mining, node. A compressed archive of the trie structure at given points in time (perhaps one in every 10,000th block) could be maintained by the peer network, effectively recasting the genesis block. This would reduce the amount to be downloaded to a single archive plus a hard maximum limit of blocks.
Blockchain consolidation could be used in order to reduce the amount of blocks a client would need to download to act as a full, mining, node. A compressed archive of the trie structure at given points in time (perhaps one in every 10,000th block) could be maintained by the peer network, effectively recasting the \hyperlink{GenesisBlock}{genesis block}. This would reduce the amount to be downloaded to a single archive plus a hard maximum limit of blocks.


\section{Future Directions} \label{ch:future}

The state database won't be forced to maintain all past state trie structures into the future. It should maintain an age for each node and eventually discard nodes that are neither recent enough nor checkpoints; checkpoints, or a set of nodes in the database that allow a particular block's state trie to be traversed, could be used to place a maximum limit on the amount of computation needed in order to retrieve any state throughout the blockchain.
The state database won't be forced to maintain all past state \hyperlink{trie}{trie} structures into the future. It should maintain an age for each node and eventually discard nodes that are neither recent enough nor checkpoints. Checkpoints, or a set of nodes in the database that allow a particular block's state trie to be traversed, could be used to place a maximum limit on the amount of computation needed in order to retrieve any state throughout the blockchain.

@@ -1141,7 +1206,7 @@ \section{Conclusion} \label{ch:conclusion}

\section{Acknowledgements}

Many thanks to Aeron Buchanan for authoring the Homestead revisions, Christoph Jentzsch for authoring the Ethash algorithm and Yoichi Hirai for doing most of the EIP-150 changes. Important maintenance, useful corrections and suggestions were provided by a number of others from the Ethereum DEV organisation and Ethereum community at large including Gustav Simonsson, Pawe\l{} Bylica, Jutta Steiner, Nick Savers, Viktor Tr\'{o}n, Marko Simovic, Giacomo Tazzari and, of course, Vitalik Buterin.
Many thanks to Aeron Buchanan for authoring the \textit{Homestead} revisions, Christoph Jentzsch for authoring the Ethash algorithm and Yoichi Hirai for doing most of the EIP-150 changes. Important maintenance, useful corrections and suggestions were provided by a number of others from the Ethereum DEV organisation and Ethereum community at large including Gustav Simonsson, Pawe\l{} Bylica, Jutta Steiner, Nick Savers, Viktor Tr\'{o}n, Marko Simovic, Giacomo Tazzari and, of course, Vitalik Buterin.

This has already been merged.


Finally, blockchain compression could perhaps be conducted: nodes in state trie that haven't sent/received a transaction in some constant amount of blocks could be thrown out, reducing both Ether-leakage and the growth of the state database.

\subsection{Scalability}

Scalability remains an eternal concern. With a generalised state transition function, it becomes difficult to partition and parallelise transactions to apply the divide-and-conquer strategy. Unaddressed, the dynamic value-range of the system remains essentially fixed and as the average transaction value increases, the less valuable of them become ignored, being economically pointless to include in the main ledger. However, several strategies exist that may potentially be exploited to provide a considerably more scalable protocol.

Some form of hierarchical structure, achieved by either consolidating smaller lighter-weight chains into the main block or building the main block through the incremental combination and adhesion (through proof-of-work) of smaller transaction sets may allow parallelisation of transaction combination and block-building. Parallelism could also come from a prioritised set of parallel blockchains, consolidated each block and with duplicate or invalid transactions thrown out accordingly.
Some form of hierarchical structure, achieved by either consolidating smaller lighter-weight chains into the main block or building the main block through the incremental combination and adhesion (through proof-of-work) of smaller transaction sets may allow parallelisation of transaction combination and block-building. Parallelism could also come from a prioritised set of parallel blockchains, consolidating each block and with duplicate or invalid transactions thrown out accordingly.

@@ -1181,7 +1246,7 @@ \section{Terminology}

\item[App] An end-user-visible application hosted in the Ethereum Browser.

\item[Ethereum Browser] (aka Ethereum Reference Client) A cross-platform GUI of an interface similar to a simplified browser (a la Chrome) that is able to host sandboxed applications whose backend is purely on the Ethereum protocol.
\item[Ethereum Browser] (aka Ethereum Reference Client) A cross-platform GUI of an interface similar to a simplified browser (a la Chrome) that is able to host sandboxed applications whose backend is purely on the Ethereum protocol, which is known as Mist since 8 July 2016 (\citeauthor{Mist}).

#603, albeit modified.

year = "2017",
}

@misc{Mist,

@@ -1195,6 +1260,7 @@ \section{Terminology}

\end{description}

\hypertarget{rlp}{}W
@jamesray1 jamesray1 Feb 16, 2018

This has already been merged, but modified.

\begin{eqnarray}
R_b(\mathbf{x}) & \equiv & \begin{cases}
R_{\mathrm{b}}(\mathbf{x}) & \equiv & \begin{cases}
@jamesray1 jamesray1 Feb 16, 2018

The above has already been merged.

(192 + \lVert s(\mathbf{x}) \rVert) \cdot s(\mathbf{x}) & \text{if} \quad \lVert s(\mathbf{x}) \rVert < 56 \\
\big(247 + \big\lVert \mathtt{\tiny BE}(\lVert s(\mathbf{x}) \rVert) \big\rVert \big) \cdot \mathtt{\tiny BE}(\lVert s(\mathbf{x}) \rVert) \cdot s(\mathbf{x}) & \text{otherwise}
\end{cases} \\
s(\mathbf{x}) & \equiv & \mathtt{\tiny RLP}(\mathbf{x}_0) \cdot \mathtt{\tiny RLP}(\mathbf{x}_1) ...
\end{eqnarray}

}
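The recursive structure in the equations above — a payload $s(\mathbf{x})$ formed by concatenating the RLP of each item, prefixed with a length byte at the 192/247 offsets for sequences (byte arrays use 128/183) — can be sketched in Python. This is an illustrative re-implementation, not the paper's reference code.

```python
def rlp_encode(item) -> bytes:
    """Minimal RLP sketch: bytes use the 128/183 offsets,
    sequences use the 192/247 offsets shown in the equations above."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item                      # a single small byte encodes as itself
        return _with_length(item, 0x80)      # 128 offset for byte arrays
    # a sequence: s(x) = RLP(x_0) . RLP(x_1) ...
    payload = b"".join(rlp_encode(x) for x in item)
    return _with_length(payload, 0xC0)       # 192 offset for sequences

def _with_length(payload: bytes, offset: int) -> bytes:
    if len(payload) < 56:
        return bytes([offset + len(payload)]) + payload
    # long form: offset+55+len(BE(len)) prefix, then BE(len), then payload
    be = len(payload).to_bytes((len(payload).bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(be)]) + be + payload
```

For example, `rlp_encode(b"dog")` yields `b"\x83dog"` and `rlp_encode([b"cat", b"dog"])` yields `b"\xc8\x83cat\x83dog"`, the standard RLP test vectors.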

The above has already been merged.

@@ -1275,12 +1341,13 @@ \section{Hex-Prefix Encoding}\label{app:hexprefix}

Thus the high nibble of the first byte contains two flags; the lowest bit encoding the oddness of the length and the second-lowest encoding the flag $t$. The low nibble of the first byte is zero in the case of an even number of nibbles and the first nibble in the case of an odd number. All remaining nibbles (now an even number) fit properly into the remaining bytes.
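The flag layout described above can be sketched directly. This is an illustrative re-implementation: `nibbles` is a list of integers in 0..15 and `t` is the boolean flag; the oddness bit is worth 1 in the high nibble of the first byte and the $t$ flag is worth 2.

```python
def hex_prefix(nibbles, t):
    """Pack a nibble sequence per the flag layout described above:
    high nibble of byte 0 = 2*t (flag) + (len(nibbles) % 2) (oddness);
    low nibble of byte 0 = the first nibble when the count is odd, else 0."""
    flags = 2 if t else 0
    if len(nibbles) % 2 == 0:
        out = [16 * flags]
        rest = nibbles
    else:
        out = [16 * (flags + 1) + nibbles[0]]
        rest = nibbles[1:]
    # the remaining nibbles (now an even number) fit two per byte
    out.extend(16 * rest[i] + rest[i + 1] for i in range(0, len(rest), 2))
    return bytes(out)
```

For example, `hex_prefix([1, 2, 3, 4, 5], False)` gives `0x11 0x23 0x45` and `hex_prefix([15, 1, 12, 11, 8], True)` gives `0x3f 0x1c 0xb8`.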

\hypertarget{trie}{}

The above has already been merged.

\section{Modified Merkle Patricia Tree}\label{app:trie}
The modified Merkle Patricia tree (trie) provides a persistent data structure to map between arbitrary-length binary data (byte arrays). It is defined in terms of a mutable data structure to map between 256-bit binary fragments and arbitrary-length binary data, typically implemented as a database. The core of the trie, and its sole requirement in terms of the protocol specification is to provide a single value that identifies a given set of key-value pairs, which may be either a 32 byte sequence or the empty byte sequence. It is left as an implementation consideration to store and maintain the structure of the trie in a manner that allows effective and efficient realisation of the protocol.

Formally, we assume the input value $\mathfrak{I}$, a set containing pairs of byte sequences:
\begin{equation}
\mathfrak{I} = \{ (\mathbf{k}_0 \in \mathbb{B}, \mathbf{v}_0 \in \mathbb{B}), (\mathbf{k}_1 \in \mathbb{B}, \mathbf{v}_1 \in \mathbb{B}), ... \}
\mathfrak{I} = \{ (k_0 \in \mathbb{B}, v_0 \in \mathbb{B}), (k_1 \in \mathbb{B}, v_1 \in \mathbb{B}), ... \}

@pirapira I'm not sure if keys and values are classified as arrays, what say you?

@@ -1315,7 +1382,7 @@ \section{Modified Merkle Patricia Tree}\label{app:trie}
In a manner similar to a radix tree, when the trie is traversed from root to leaf, one may build a single key-value pair. The key is accumulated through the traversal, acquiring a single nibble from each branch node (just as with a radix tree). Unlike a radix tree, in the case of multiple keys sharing the same prefix or in the case of a single key having a unique suffix, two optimising nodes are provided. Thus while traversing, one may potentially acquire multiple nibbles from each of the other two node types, extension and leaf. There are three kinds of nodes in the trie:
\begin{description}
\item[Leaf] A two-item structure whose first item corresponds to the nibbles in the key not already accounted for by the accumulation of keys and branches traversed from the root. The hex-prefix encoding method is used and the second parameter to the function is required to be $true$.
\item[Extension] A two-item structure whose first item corresponds to a series of nibbles of size greater than one that are shared by at least two distinct keys past the accumulation of nibbles keys and branches as traversed from the root. The hex-prefix encoding method is used and the second parameter to the function is required to be $false$.
\item[Extension] A two-item structure whose first item corresponds to a series of nibbles of size greater than one that are shared by at least two distinct keys past the accumulation of the keys of nibbles and the keys of branches as traversed from the root. The hex-prefix encoding method is used and the second parameter to the function is required to be $false$.

(\varnothing, 0, A^0, ()) & \text{if} \quad g < g_r \\
(\boldsymbol\sigma, g - g_r, A^0, \mathbf{o}) & \text{otherwise}\end{cases}
(\varnothing, 0, A^0, ()) & \text{if} \quad \mathbf{g} < \mathbf{g}_{\mathrm{r}} \\
(\boldsymbol\sigma, \mathbf{g} - \mathbf{g}_{\mathrm{r}}, A^0, \mathbf{o}) & \text{otherwise}\end{cases}

The above has already been merged with #605.

\end{equation}

The precompiled contracts each use these definitions and provide specifications for the $\mathbf{o}$ (the output data) and $g_r$, the gas requirements.
The precompiled contracts each use these definitions and provide specifications for the $\mathbf{o}$ (the output data) and $\mathbf{g}_{\mathrm{r}}$, the gas requirements.

The above has already been merged with #605.


For the elliptic curve DSA recover VM execution function, we also define $\mathbf{d}$ to be the input data, well-defined for an infinite length by appending zeroes as required. Importantly in the case of an invalid signature ($\mathtt{\tiny ECDSARECOVER}(h, v, r, s) = \varnothing$), then we have no output.
\begin{eqnarray}
\Xi_{\mathtt{ECREC}} &\equiv& \Xi_{\mathtt{PRE}} \quad \text{where:} \\
g_r &=& 3000\\
\mathbf{g}_{\mathrm{r}} &=& 3000\\

The above has already been merged with #605.

\mathbf{d}[0..(|I_\mathbf{d}|-1)] &=& I_\mathbf{d}\\
\mathbf{d}[|I_\mathbf{d}|..] &=& (0, 0, ...) \\
\mathbf{d}[0..(|I_{\mathrm{d}}|-1)] &=& I_{\mathrm{d}}\\
\mathbf{d}[|I_{\mathrm{d}}|..] &=& (0, 0, ...) \\

Not quite, but see #606.

g_r &=& 15 + 3\Big\lceil \dfrac{|I_\mathbf{d}|}{32} \Big\rceil\\
\mathbf{o} &=& I_\mathbf{d}
\mathbf{g}_{\mathrm{r}} &=& 15 + 3\Big\lceil \dfrac{|I_{\mathrm{d}}|}{32} \Big\rceil\\
\mathbf{o} &=& I_{\mathrm{d}}

The above have already been merged with #605, except with \mathrm{g}, and I_{\mathbf{d}} is unchanged.

\end{eqnarray}


\section{Signing Transactions}\label{app:signing}

The method of signing transactions is similar to the `Electrum style signatures'; it utilises the SECP-256k1 curve as described by \cite{gura2004comparing}.
The method of signing transactions is similar to the `Electrum style signatures' as defined by \cite{npmElectrum2017}, heading "Managing styles with Radium" in the bullet point list. This method utilises the SECP-256k1 curve as described by \cite{Courtois2014}, and is implemented similarly to the method described by \cite{gura2004comparing} on p. 9 of 15, para. 3.

abstract = "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard, and was accepted in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard, and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.",
}

@misc{npmElectrum2017,


It is assumed that the sender has a valid private key $p_r$, which is a randomly selected positive integer (represented as a byte array of length 32 in big-endian form) in the range \hbox{$[1, \mathtt{\tiny secp256k1n} - 1]$}.

We assert the functions $\mathtt{\small ECDSASIGN}$, $\mathtt{\small ECDSARECOVER}$ and $\mathtt{\small ECDSAPUBKEY}$. These are formally defined in the literature.
We assert the functions $\mathtt{\small ECDSAPUBKEY}$, $\mathtt{\small ECDSARECOVER}$ and $\mathtt{\small ECDSASIGN}$. These are formally defined in the literature, \eg by \cite{ECDSAcerticom}.

\begin{eqnarray}
\mathtt{\small ECDSAPUBKEY}(p_r \in \mathbb{B}_{32}) & \equiv & p_u \in \mathbb{B}_{64} \\
\ e & \equiv & \hyperlink{h T}{h(T)} \hypertarget{ECDSASIGN}{}\\

\mathtt{\small ECDSASIGN}(e \in \mathbb{B}_{32}, p_r \in \mathbb{B}_{32}) & \equiv & (v \in \mathbb{B}_{1}, r \in \mathbb{B}_{32}, s \in \mathbb{B}_{32}) \\
\mathtt{\small ECDSARECOVER}(e \in \mathbb{B}_{32}, v \in \mathbb{B}_{1}, r \in \mathbb{B}_{32}, s \in \mathbb{B}_{32}) & \equiv & p_u \in \mathbb{B}_{64}
\end{eqnarray}

Where $p_u$ is the public key, assumed to be a byte array of size 64 (formed from the concatenation of two positive integers each $< 2^{256}$) and $p_r$ is the private key, a byte array of size 32 (or a single positive integer in the aforementioned range). It is assumed that $v$ is the `recovery id', a 1 byte value specifying the sign and finiteness of the curve point; this value is in the range of $[27, 30]$, however we declare the upper two possibilities, representing infinite values, invalid.
Where $p_u$ is the public key, assumed to be a byte array of size 64 (formed from the concatenation of two positive integers each $< 2^{256}$) and $p_r$ is the private key, a byte array of size 32 (or a single positive integer in the \hypertarget{v}{aforementioned range). It is assumed that $v$ is the `recovery id', a 1 byte value specifying the sign and finiteness of the curve point; this value is in the range of $[27, 30]$, however we declare the upper two possibilities, representing infinite values, invalid.}

@@ -1425,78 +1493,86 @@ \section{Signing Transactions}\label{app:signing}
A(p_r) = \mathcal{B}_{96..255}\big(\mathtt{\tiny KEC}\big( \mathtt{\small ECDSAPUBKEY}(p_r) \big) \big)
\end{equation}

The message hash, $h(T)$, to be signed is the Keccak hash of the transaction without the latter three signature components, formally described as $T_r$, $T_s$ and $T_w$:
\hypertarget{h T}{
The message hash, $h(T)$, to be signed is the Keccak hash of the transaction without the latter three signature components, formally described as $T_{\mathrm{r}}$, $T_{\mathrm{s}}$ and $T_{\mathrm{w}}$:

The above has already been merged.

\end{equation}

The assertion that the sender of a signed transaction equals the address of the signer should be self-evident:
\begin{equation}
\forall T: \forall p_r: S(G(T, p_r)) \equiv A(p_r)
\end{equation}

\newpage
@jamesray1 jamesray1 Feb 16, 2018

Not always necessary, and not now, but could use \pagebreak[1].

$G_{blockhash}$ & 20 & Payment for {\small BLOCKHASH} operation. \\

%extern u256 const c_copyGas; ///< Multiplied by the number of 32-byte words that are copied (round up) for any *COPY operation and added.
$G_{\mathrm{zero}}$ & 0 & Nothing is paid for operations of the set {\small $W_{\mathrm{zero}}$}. \\
@jamesray1 jamesray1 Feb 16, 2018

@mathrm: #612. Up to here. @pirapira I'll wait until the current PRs are merged before making further PRs, in order to avoid merge conflicts, but also to avoid getting too many notifications from people commenting on my PRs, compounded by a lag in my making changes to them.
