Spark Roadmap 2018 #571

Open
@amotenko

After getting a system up and running in a very short time and under many constraints, it’s time to give Spark the spa treatment it deserves. Accordingly, 2018 has been nominated as the year of the quality phase.

Extend and generate seeding and testing data

The current data written for the knex seeding mechanism is very thin and insufficient for development purposes. It should be extended to include multiple normalized entities that enable experimentation with various use cases and scenarios, streamlining development.
The same seeding mechanism can be used to generate data for future tests, including unit tests.
In addition, a solution is needed for generating coherent dummy data for the staging environment.

Resources

http://knexjs.org/#Seeds-CLI
http://faker.js
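As one possible shape for this, coherent dummy data can come from small pure generator functions that a knex seed file (or a test fixture) then inserts. The `users` and `camps` tables and their fields below are hypothetical illustrations, not Spark’s actual schema:

```javascript
// Hypothetical generators for coherent dummy data; table and field
// names are illustrative only, not Spark's actual schema.
function makeUsers(count) {
  return Array.from({ length: count }, (_, i) => ({
    id: i + 1,
    name: `User ${i + 1}`,
    email: `user${i + 1}@example.com`,
  }));
}

function makeCamps(users, count) {
  // Every camp references an existing user as its manager, so the
  // generated entities stay consistent with one another.
  return Array.from({ length: count }, (_, i) => ({
    id: i + 1,
    name: `Camp ${i + 1}`,
    manager_id: users[i % users.length].id,
  }));
}

// A knex seed file (e.g. seeds/01_dev_data.js) could then insert them:
// exports.seed = async knex => {
//   const users = makeUsers(20);
//   await knex('users').del();
//   await knex('users').insert(users);
//   await knex('camps').insert(makeCamps(users, 5));
// };

module.exports = { makeUsers, makeCamps };
```

Because the generators are plain functions, the same data can feed the dev seed, the staging environment, and unit-test fixtures; faker.js can replace the numbered placeholder names once it is wired in.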

Improvement of Spark API

Spark was initially conceived as an API First application, but sadly the necessities of real life forced development to drift away from the rules of good API design. The Spark API would benefit from being more consistent and modular, and from having a more natural syntax and better documentation. Apart from making it easier to develop against the Spark API, extensive dogfooding will make Spark itself better as well.

Resources

http://swagger.io
https://www.oreilly.com/ideas/an-api-first-approach-for-cloud-native-app-development
https://medium.com/adobe-io/three-principles-of-api-first-design-fa6666d9f694
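To make “consistent and modular” concrete, a resource-oriented endpoint pair might look like the sketch below. The `events` resource and handler names are hypothetical, not part of the current Spark API:

```javascript
// Hypothetical in-memory resource; not Spark's actual data model.
const events = [
  { id: 1, title: 'Gate opening' },
  { id: 2, title: 'Burn night' },
];

// GET /api/v1/events — return the whole collection.
function listEvents(req, res) {
  res.json(events);
}

// GET /api/v1/events/:id — return one item, or 404 when missing.
function getEvent(req, res) {
  const event = events.find(e => e.id === Number(req.params.id));
  if (!event) return res.status(404).json({ error: 'Not found' });
  return res.json(event);
}

// An express router would mount these under plural-noun paths with
// standard HTTP verbs, e.g.:
// router.get('/events', listEvents);
// router.get('/events/:id', getEvent);

module.exports = { listEvents, getEvent };
```

Keeping handlers as small named functions also makes it easy to describe each route in a Swagger/OpenAPI document alongside the code.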

Use of middleware

At the moment Spark is a bit of a mess from the point of view of logging, authentication, role management, and error handling. Some of those pains could be alleviated by correct use of middleware, such as the mechanism provided by express.

Resources

http://expressjs.com/en/guide/using-middleware.html
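For instance, logging, role checks, and error handling can each live in a small express-style middleware instead of being repeated inside every handler. These are illustrative sketches, not Spark’s actual code:

```javascript
// Request logger: runs before every route handler.
function requestLogger(req, res, next) {
  console.log(`${new Date().toISOString()} ${req.method} ${req.url}`);
  next();
}

// Role guard: a factory returning middleware that protects a route.
function requireRole(role) {
  return (req, res, next) => {
    if (req.user && req.user.roles.includes(role)) return next();
    return res.status(403).json({ error: 'Forbidden' });
  };
}

// Centralized error handler: express recognizes the 4-argument
// signature and calls it when a handler passes an error to next().
function errorHandler(err, req, res, next) { // eslint-disable-line no-unused-vars
  res.status(err.status || 500).json({ error: err.message });
}

// Wiring, assuming an express app:
// app.use(requestLogger);
// app.get('/admin', requireRole('admin'), adminHandler);
// app.use(errorHandler);

module.exports = { requestLogger, requireRole, errorHandler };
```

Because each middleware is just a `(req, res, next)` function, it can also be unit-tested in isolation with mock request and response objects.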

Code style and conventions

Spark as an open source project is a big mess of styles and conventions. As extensively discussed, it doesn’t really matter which style is picked as long as it is followed consistently. Areas of improvement include project structure, naming conventions, ECMAScript conventions, and reduction of third-party package dependencies. Any convention that is adopted should be enforced by automatic tools such as ESLint where possible.

Resources

https://github.com/felixge/node-style-guide
https://stackoverflow.com/questions/18927298/node-js-project-naming-conventions-for-files-folders
https://github.com/airbnb/javascript

Testing

Obviously, more integration tests, unit tests, and Selenium tests would do only good. Developers are also more likely to add tests to fresh code if it’s easy to do, so anything that lowers that barrier is a good idea too.

Spark as a CMS

In the long run it is expected that more and more content will be managed through Spark, including content generated by admins as well as by users and managers. If developers are required to produce a commit every time content needs to change, this will not be sustainable, so integration with a CMS platform is needed.

Resources

http://keystonejs.com
