Addressing syncing issues #3673
Description
The Mist team is very aware of how troublesome it's been to fully sync a node recently. The intention of this issue is to introduce some more transparency into what we've been working on to address this.
TL;DR
In a coming Mist release, you will be able to connect to the network immediately. This means you'll be able to see your balances and send transactions without waiting.
Technical Details
We're working to introduce "layered nodes." With this layered node architecture, Mist will immediately connect to a remote node, hosted by our friends over at Infura. Meanwhile, in the background, your local geth node will continue to sync. When your local node finishes syncing, Mist will begin pointing web3 calls to your local node instead of the remote node.
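The switch-over described above can be sketched as a small piece of selection logic. This is only an illustration, not Mist's actual code: the endpoint URLs and the `selectEndpoint` helper are hypothetical, and the sync check assumes the JSON-RPC `eth_syncing` convention, which returns `false` once a node is fully synced and a progress object otherwise.

```javascript
// Hypothetical sketch of the "layered node" fallback described above.
// Endpoint URLs and names are illustrative, not Mist's real configuration.

const REMOTE_NODE = 'https://mainnet.infura.io'; // assumed remote endpoint
const LOCAL_NODE = 'http://127.0.0.1:8545';      // default local geth RPC port

// Decide where web3 calls should go, given the local node's eth_syncing
// result: `false` when fully synced, or an object with sync progress
// (e.g. { currentBlock: ..., highestBlock: ... }) while still syncing.
function selectEndpoint(localSyncState) {
  // Route to the local node only once it has finished syncing;
  // until then, fall back to the remote node.
  return localSyncState === false ? LOCAL_NODE : REMOTE_NODE;
}
```

In practice Mist would poll the local node periodically and re-point the web3 provider at `LOCAL_NODE` as soon as `eth_syncing` reports `false`.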
Big Picture
Infura provides an amazing service, but it is important to the health of the ethereum network that many individuals run their own nodes. For that reason, Mist will always promote doing so. We understand, however, that running a full node is becoming prohibitively large and expensive for many machines. We will likely offer an option to operate Mist purely on the remote node to accommodate those users.
This work is a "small" piece of a larger refactoring we're working toward. Part of that plan is to offer more configuration options to users. That may include decoupling Mist from geth to allow users to select the client of their choosing. Of course, we enjoy working closely with the geth team, but it is in Mist's best interest to be flexible enough to survive any single client having issues.
A Note on Geth
We should mention: the geth team is also working very hard to reduce the resources required to run your own node. This is no simple task, particularly with how popular the network has become. You can look forward to some exciting improvements in the next big release, 1.8. See some teasers from Peter here and here.
Timeline
There will probably be at least one small release before this major feature is introduced. We have a working proof-of-concept, though, and are currently working through the many edge cases.