creativeai/tspipelines

Intro

To implement a basic output-only block, you specify an output function and an output type.

The output function can optionally return a promise instead of a value, for async operations.

const Two = Block.extend({
    type_output: Number,
    output: function() { return 2 }
})
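Since the output function may return a promise, an async block looks the same; here's a minimal sketch (the block name is made up for illustration):

const TwoAsync = Block.extend({
    type_output: Number,
    // returning a promise makes the block async;
    // consumers get the resolved value
    output: function() { return Promise.resolve(2) }
})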

Inputs

There is a readInput function that output functions can use to pull inputs:

const PlusOne1 = Block.extend({
    type_input: [['x', Number]],
    type_output: Number,
    output: function() { return this.readInput('x').then((value) => value + 1) }
})

There is a shorthand for this:

const PlusOne2 = Block.extend({
    type_input: [['x', Number]],
    type_output: Number,
    output: (x) => x + 1
})

If the output function has arguments and they match some input names, the platform will automatically pull those inputs for you before calling the output function.

Input ordering

const Plus_My = Block.extend({
    type_input: [['x', Number], ['y', Number]],
    type_output: Number,
    output: (x, y) => x + y
})

Block inputs are both ordered and named, which is why we specify them as arrays of tuples.

Executing and connecting to literals

You can specify inputs when instantiating blocks:

const plus1 = Plus(1, 2)

plus1.output().then((res) => assert.equal(res, 3))

or after instantiation:

const plus2 = Plus()

plus2.output(1, 2).then((res) => assert.equal(res, 3))

You can specify arguments by name as well, and you can partially apply them:

const plus3 = Plus({y: 2})

plus3.output({x: 1}).then((res) => assert.equal(res, 3))

All of these build the same simple mini pipeline; let's render it (more on this later):

DotRender({code: DotCode({ block: plus3 }), file: './readme/plus.png'}).output()

[rendered graph: ./readme/plus.png]

Connecting with blocks

Wherever we've connected literals, we could have connected blocks as well:

const plus_deep1 = Plus(1, Plus(2, 3))
plus_deep1.output().then((res) => assert.equal(res, 6))

The same flexible input-connection options apply here:

const plus_deep2 = Plus(1)
plus_deep2.output({ y: Plus(2, 3)}).then((res) => assert.equal(res, 6))

Render it:

DotRender({code: DotCode({ block: plus_deep2 }), file: './readme/plusplus.png'}).output()

[rendered graph: ./readme/plusplus.png]

First-class blocks

const assert = require('assert')
const abstract = require('./dist/block/abstract')
const { Map, Reduce, Repeat } = abstract
const { Plus } = require('./dist/block/math')
const { RndInt } = require('./dist/block/random')
const { DotRender, DotCode } = require('./dist/block/graph')

Map

const mapper1 = Map([1, 2, 3], Plus({ y: 1 }))
mapper1.input // ⇨ { list: [ 1, 2, 3 ], block: Block(Plus) }

mapper1.output()
.then((x) => assert.deepEqual(x, [ 2, 3, 4 ]))

or

const mapper2 = Map({ block: Plus({ y: 1 }) })
mapper2.input // ⇨ { block: Block(Plus) }

mapper2.output({list: [1,2,3]})
.then((x) => assert.deepEqual(x, [2, 3, 4]))
.then(() => DotRender({code: DotCode({ block: mapper2 }), file: './readme/map.png'}).output())

[rendered graph: ./readme/map.png]

Reduce

const reduce = Reduce({ list: [1, 2, 3], block: Plus(), start: 0 })
reduce.output()
.then((result) => assert.equal(result, 6))
.then(() => DotRender({code: DotCode({ block: reduce }), file: './readme/reduce.png'}).output())

[rendered graph: ./readme/reduce.png]

Repeat (filters in ee would use this)

Runs a block repeatedly over some input and returns a list of the results:

const repeat = Repeat({
    block: Plus({ y: RndInt(0, 100)}),
    n: 5,
    input: 100
})

repeat.output()

returns something like:

[ 123, 173, 119, 164, 155 ]

Graphing

Given that blocks are first-class pipeline types, we can have blocks that inspect other blocks. The graphs above were generated by a graphviz graphing pipeline:

// Open is assumed to be another block from the library (opens the rendered file)
const ViewBlock = (block) => Open(DotRender(DotCode(block)))
ViewBlock(Plus(1, 2)).output()

Pluggable parsers

Not sure if this works at the moment (week-old code), but we can have standard pluggable pipeline spec parsers (YAML, s-expressions, etc.). The s-expression below should parse to the equivalent of Plus(1, Plus(2, 3)):

import { parser } from './parsers/sexpression'
const Pipeline = parser.parse('(plus 1 (plus 2 3))')
Pipeline().output() // instantiate and run

Define Pipeline

You can take any block graph and create a new block class out of it, optionally with block arguments (block and list being the arguments in this case):

const NewPipeline = definePipeline((block, list) => {
    return Flatten(Tuple([1, 2, 3], Map({ block: block, list: list })));
});

// use it in a standard way,

const pipelineInstance = NewPipeline({ list: [1, 2, 3] })
pipelineInstance.output({ block: Plus({ y: 1 }) })

Stream/Aggregate

Developed for use in the UI, to show intermediate processing results.

Stream takes a list and outputs it element by element as it is pulled:

const stream = Stream([1,2,3])

stream.output() // 1
stream.output() // 2
stream.output() // 3
stream.output() // EndOfStream Exception
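For reference, a minimal sketch of draining a stream by hand (this assumes an EndOfStream class is exported alongside Stream; Aggregate, below, does this for you):

// Hypothetical helper: pull until EndOfStream, collecting results.
const drain = async (stream) => {
    const items = []
    while (true) {
        try {
            items.push(await stream.output())
        } catch (e) {
            if (e instanceof EndOfStream) return items
            throw e
        }
    }
}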

Aggregate pulls until it receives an EndOfStream exception:

const aggregate = Aggregate(Stream([1,2,3]))
aggregate.output() // [1,2,3]

By putting blocks between the aggregate and the stream, you are effectively mapping, with the ability to show each result in the UI, etc.:

const aggregate = Aggregate(Plus(Stream([1,2,3]), 1))
aggregate.output() // [2,3,4]

Caching

If we have a costly deterministic block, the block can store its input-output tuple(s); the next time it's called with the same input, it doesn't need to rerun the costly computation.

We've implemented multiple caching mechanisms with pluggable comparison functions for inputs and outputs; check cache.ls:

  • SimpleCache just caches the last I/O combination

  • DeepCache stores an (unbounded) list of I/O tuples

  • LRUCache discards the least recently used cache entries, keeping a default depth of 20
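A minimal sketch of the SimpleCache idea (illustrative only; the actual wiring lives in cache.ls, and the names here are assumptions):

// compute stands in for a block's costly output function,
// equals for a pluggable input comparator (see below).
const simpleCache = (compute, equals) => {
    let lastInput = null
    let lastOutput = null
    let primed = false
    return (...input) => {
        if (primed && equals(lastInput, input)) return lastOutput
        lastInput = input
        lastOutput = compute(...input)
        primed = true
        return lastOutput
    }
}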

Given that we pass around complex types, we also have pluggable comparators, for cases where the identity of an input changes but not its value.

In this context we had problems with the image types ee uses: the images passed around are canvas DOM elements that mutate, so caching doesn't currently work with images.

export type Comparator = (arg1: Array<any>, arg2: Array<any>) => boolean;
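For example, a value-based comparator matching this type could look like the following sketch (assumes JSON-serializable inputs; the helper name is made up):

// Compares input arrays by value rather than by identity.
const byValue: Comparator = (arg1, arg2) =>
    JSON.stringify(arg1) === JSON.stringify(arg2)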

Pluggable libraries

Next steps are remote block execution and pluggable block libraries.

Developing

Npmjs org access

First of all, you should create an npmjs user, and we should add you to our creativeai organization.

Publishing new versions of this package

If you'd like your commit to be packaged as a new @creativeai/tspipelines release, you should bump the version:

yarn bump

This is a shortcut for:

git pull && yarn version && git push origin master && git push --tags

Pushing a commit with a new git tag (added by the yarn version command) will trigger a Travis build, which will publish the package to npm.

Then make sure to run the upgrade wherever you use tspipelines:

yarn upgrade @creativeai/tspipelines

If you're developing tspipelines in parallel with a project that uses it, you can of course just run

yarn link
yarn watch

in the tspipelines folder,

and run

yarn link "@creativeai/tspipelines"

in your project's folder to symlink it.


Markdown generated from readme_dev.md by RunMD
