[logical-types] update working branch (#12812)

* Add support for external tables with qualified names (#12645)

* Make  support schemas

* Set default name to table

* Remove print statements and stale comment

* Add tests for create table

* Fix typo

* Update datafusion/sql/src/statement.rs

Co-authored-by: Jonah Gao <jonahgao@msn.com>

* convert create_external_table to objectname

* Add sqllogic tests

* Fix failing tests

---------

Co-authored-by: Jonah Gao <jonahgao@msn.com>

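As a rough SQL sketch of what qualified external table names enable (the schema, table, and file path below are made up for illustration):

```sql
-- Hypothetical example: register an external table under a schema-qualified name.
CREATE SCHEMA my_schema;
CREATE EXTERNAL TABLE my_schema.sales STORED AS CSV LOCATION 'data/sales.csv';
SELECT * FROM my_schema.sales LIMIT 10;
```
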
* Fix Regex signature types (#12690)

* Fix Regex signature types

* Uncomment the shared tests in string_query.slt.part and remove the test copies everywhere else

* Test `LIKE` and `MATCH` with flags; Remove new tests from regexp.slt

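For context, the shared string tests exercise regular-expression matching of roughly this shape (illustrative only, not copied from the suite):

```sql
-- `~` is a regular-expression match; regexp_like takes an optional flags argument ('i' = case-insensitive).
SELECT 'DataFusion' ~ '^Data', regexp_like('DataFusion', '^data', 'i');
```
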
* Refactor `ByteGroupValueBuilder` to use `MaybeNullBufferBuilder` (#12681)

* Fix malformed hex string literal in docs (#12708)

* Simplify match patterns in coercion rules (#12711)

Remove conditions where unnecessary.
Refactor to improve readability.

* Remove aggregate functions dependency on frontend (#12715)

* Remove aggregate functions dependency on frontend

DataFusion is a SQL query engine and also a reusable library for
building query engines. The core functionality should not depend on
frontend-related functionality such as `sqlparser` or `datafusion-sql`.

* Remove duplicate license header

* Minor: Remove clone in `transform_to_states` (#12707)

* rm clone

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* fmt

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

---------

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* Refactor tests for union sorting properties, add tests for unions and constants (#12702)

* Refactor tests for union sorting properties

* update doc test

* Undo import reordering

* remove unnecessary static lifetimes

* Fix: support Qualified Wildcard in count aggregate function (#12673)

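A minimal sketch of the query shape this fixes (table name hypothetical):

```sql
-- COUNT over a wildcard qualified with a table name.
SELECT count(t.*) FROM t;
```
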
* Reduce code duplication in `PrimitiveGroupValueBuilder` with const generics (#12703)

* Reduce code duplication in `PrimitiveGroupValueBuilder` with const generics

* Fix docs

* Disallow duplicated qualified field names (#12608)

* Disallow duplicated qualified field names

* Fix tests

* Optimize base64/hex decoding by pre-allocating output buffers (~2x faster) (#12675)

* add bench

* replace macro with generic function

* remove duplicated code

* optimize base64/hex decode

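For reference, the optimized paths sit behind SQL calls like these (literals chosen for illustration):

```sql
-- decode() returns binary data; the inputs are base64 for 'hello world' and hex for 'hello'.
SELECT decode('aGVsbG8gd29ybGQ=', 'base64'), decode('68656c6c6f', 'hex');
```
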
* Allow DynamicFileCatalog support to query partitioned file (#12683)

* support querying partitioned tables for dynamic file catalog

* cargo clippy

* split partition inference into a separate function

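A sketch of what this enables in datafusion-cli, assuming a hive-style partitioned directory (the path and layout are made up):

```sql
-- Query a partitioned directory directly by path; partition columns such as year/month
-- are inferred from the directory structure.
SELECT * FROM 'data/events/year=2024/month=09/' LIMIT 10;
```
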
* Support `LIMIT` Push-down logical plan optimization for `Extension` nodes (#12685)

* Update trait `UserDefinedLogicalNodeCore`

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Update corresponding interface

Signed-off-by: Austin Liu <austin362667@gmail.com>

Add rewrite rule for `push-down-limit` for `Extension`

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add rewrite rule for `push-down-limit` for `Extension` and tests

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Update corresponding interface

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Reorganize to match guard

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Clean up

Signed-off-by: Austin Liu <austin362667@gmail.com>

Clean up

Signed-off-by: Austin Liu <austin362667@gmail.com>

---------

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Fix AvroReader: Add union resolving for nested struct arrays (#12686)

* Add union resolving for nested struct arrays

* Add test

* Change test

* Reproduce index error

* fmt

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* Adds macros for creating `WindowUDF` and `WindowFunction` expression (#12693)

* Adds macro for udwf singleton

* Adds a doc comment parameter to macro

* Add doc comment for `create_udwf` macro

* Uses default constructor

* Update `Cargo.lock` in `datafusion-cli`

* Fixes: expand `$FN_NAME` in doc strings

* Adds example for macro usage

* Renames macro

* Improve doc comments

* Rename udwf macro

* Minor: doc copy edits

* Adds macro for creating fluent-style expression API

* Adds support for 1 or more parameters in expression function

* Rewrite doc comments

* Rename parameters

* Minor: formatting

* Adds doc comment for `create_udwf_expr` macro

* Improve example docs

* Hides extraneous code in doc comments

* Add a one-line readme

* Adds doc test assertions + minor formatting fixes

* Adds common macro for defining user-defined window functions

* Adds doc comment for `define_udwf_and_expr`

* Defines `RowNumber` using common macro

* Add usage example for common macro

* Adds usage for custom constructor

* Add examples for remaining patterns

* Improve doc comments for usage examples

* Rewrite inner line docs

* Rewrite `create_udwf_expr!` doc comments

* Minor doc improvements

* Fix doc test and usage example

* Add inline comments for macro patterns

* Minor: change doc comment in example

* Support unparsing plans with both Aggregation and Window functions (#12705)

* Support unparsing plans with both Aggregation and Window functions (#35)

* Fix unparsing for aggregation grouping sets

* Add test for grouping set unparsing

* Update datafusion/sql/src/unparser/utils.rs

Co-authored-by: Jax Liu <liugs963@gmail.com>

* Update datafusion/sql/src/unparser/utils.rs

Co-authored-by: Jax Liu <liugs963@gmail.com>

* Update

* More tests

---------

Co-authored-by: Jax Liu <liugs963@gmail.com>

* Fix strpos invocation with dictionary and null (#12712)

In 1b3608d the `strpos` signature was
modified to indicate that it supports dictionaries as input arguments, but the
invoke method does not support them.

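A hypothetical example of the affected call shape, using arrow_cast to force a dictionary-encoded argument:

```sql
-- strpos over a dictionary-encoded string column that also contains NULLs.
SELECT strpos(arrow_cast(s, 'Dictionary(Int32, Utf8)'), 'bar')
FROM (VALUES ('foobar'), (NULL)) AS v(s);
```
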
* docs: Update DataFusion introduction to clarify that DataFusion does provide an "out of the box" query engine (#12666)

* Update DataFusion introduction to show that DataFusion offers packaged versions for end users

* change order

* Update README.md

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* refine wording and update user guide for consistency

* prettier

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* Framework for generating function docs from embedded code documentation (#12668)

* Initial work on #12432 to allow for generation of udf docs from embedded documentation in the code

* Add missing license header.

* Fixed examples.

* Fixing a really weird RustRover/wsl ... something. No clue what happened there.

* permission change

* Cargo fmt update.

* Refactored Documentation to allow it to be used in a const.

* Add documentation for syntax_example

* Refactoring Documentation based on PR feedback.

* Cargo fmt update.

* Doc update

* Fixed copy/paste error.

* Minor text updates.

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* Add IMDB(JOB) Benchmark [2/N] (imdb queries) (#12529)

* imdb dataset

* cargo fmt

* Add 113 queries for IMDB(JOB)

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add `get_query_sql` from `query_id` string

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Fix CSV reader & Remove Parquet partition

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add benchmark IMDB runner

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add `run_imdb` script

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add checker for imdb option

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add SLT for IMDB

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Fix `get_query_sql()` for CI roundtrip test

Signed-off-by: Austin Liu <austin362667@gmail.com>

Fix `get_query_sql()` for CI roundtrip test

Signed-off-by: Austin Liu <austin362667@gmail.com>

Fix `get_query_sql()` for CI roundtrip test

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Clean up

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add missing license

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add IMDB(JOB) queries `2b` to `5c`

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Add `INCLUDE_IMDB` in CI verify-benchmark-results

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Prepare IMDB dataset

Signed-off-by: Austin Liu <austin362667@gmail.com>

Prepare IMDB dataset

Signed-off-by: Austin Liu <austin362667@gmail.com>

* use uint as id type

* format

* Separate `tpch` and `imdb` benchmarking CI jobs

Signed-off-by: Austin Liu <austin362667@gmail.com>

Fix path

Signed-off-by: Austin Liu <austin362667@gmail.com>

Fix path

Signed-off-by: Austin Liu <austin362667@gmail.com>

Remove `tpch` in `imdb` benchmark

Signed-off-by: Austin Liu <austin362667@gmail.com>

* Remove IMDB(JOB) slt in CI

Signed-off-by: Austin Liu <austin362667@gmail.com>

Remove IMDB(JOB) slt in CI

Signed-off-by: Austin Liu <austin362667@gmail.com>

---------

Signed-off-by: Austin Liu <austin362667@gmail.com>
Co-authored-by: DouPache <douenergy@gmail.com>

* Minor: avoid clone while calculating union equivalence properties (#12722)

* Minor: avoid clone while calculating union equivalence properties

* Update datafusion/physical-expr/src/equivalence/properties.rs

* fmt

* Simplify streaming_merge function parameters (#12719)

* simplify streaming_merge function parameters

* revert test change

* change StreamingMergeConfig into builder pattern

* Fix links on docs index page (#12750)

* Provide field and schema metadata missing on cross joins, and union with null fields. (#12729)

* test: reproducer for missing schema metadata on cross join

* fix: pass thru schema metadata on cross join

* fix: preserve metadata when transforming to view types

* test: reproducer for missing field metadata in left hand NULL field of union

* fix: preserve field metadata from right side of union

* chore: safe indexing

* Minor: Update string tests for strpos (#12739)

* Apply `type_union_resolution` to array and values (#12753)

* cleanup make array coercion rule

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* change to type union resolution

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* change value too

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* fix typo

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

---------

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

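As a rough illustration of the coercion this touches (values chosen arbitrarily), both array constructors and VALUES lists must resolve their inputs to a single common type:

```sql
-- Mixed integer, float, and NULL inputs are unified to one element type.
SELECT make_array(1, 2.5, NULL);
SELECT * FROM (VALUES (1), (2.5), (NULL));
```
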
* Add `DocumentationBuilder::with_standard_argument` to reduce copy/paste (#12747)

* Add `DocumentationBuilder::with_standard_expression` to reduce copy/paste

* fix doc

* fix standard argument

* Update docs

* Improve documentation to explain what is different

* fix `equal_to` in `PrimitiveGroupValueBuilder` (#12758)

* fix `equal_to` in `PrimitiveGroupValueBuilder`.

* fix typo.

* add unit tests.

* reduce calls to `is_null`.

* Minor: doc how field name is to be set (#12757)

* Fix `equal_to` in `ByteGroupValueBuilder` (#12770)

* Fix `equal_to` in `ByteGroupValueBuilder`

* refactor null_equal_to

* Update datafusion/physical-plan/src/aggregates/group_values/group_column.rs

* Allow simplification even when nullable (#12746)

The nullable requirements seem to have been added in #1401, but as far as
I can tell they are not needed for these two cases.

I think this can be shown with the following truth table (generated using
datafusion-cli without this patch):
```
> CREATE TABLE t (v BOOLEAN) as values (true), (false), (NULL);
> select t.v, t2.v, t.v AND (t.v OR t2.v), t.v OR (t.v AND t2.v) from t cross join t as t2;
+-------+-------+---------------------+---------------------+
| v     | v     | t.v AND t.v OR t2.v | t.v OR t.v AND t2.v |
+-------+-------+---------------------+---------------------+
| true  | true  | true                | true                |
| true  | false | true                | true                |
| true  |       | true                | true                |
| false | true  | false               | false               |
| false | false | false               | false               |
| false |       | false               | false               |
|       | true  |                     |                     |
|       | false |                     |                     |
|       |       |                     |                     |
+-------+-------+---------------------+---------------------+
```

It seems Spark applies both of these simplifications, while DuckDB applies only
the first one.

* Fix unnest conjunction with selecting wildcard expression (#12760)

* fix unnest statement with wildcard expression

* add comments

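A loose sketch of the statement shape involved (not the reproducer from the PR): unnest used together with a wildcard projection.

```sql
-- unnest combined with a wildcard over the same derived table.
SELECT unnest(v), * FROM (SELECT make_array(1, 2, 3) AS v, 'x' AS tag);
```
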
* Improve `round` scalar function unparsing for Postgres (#12744)

* Postgres: enforce required `NUMERIC` type for `round` scalar function (#34)

Includes initial support for dialects to override scalar functions unparsing

* Document scalar_function_to_sql_overrides fn

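A sketch of the intent (column and table names hypothetical): Postgres only defines the two-argument round for NUMERIC, so unparsing for the Postgres dialect adds an explicit cast.

```sql
-- DataFusion expression:            round(price, 2)
-- Unparsed for Postgres, roughly:   ROUND(CAST(price AS NUMERIC), 2)
SELECT ROUND(CAST(price AS NUMERIC), 2) FROM items;
```
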
* Fix stack overflow calculating projected orderings (#12759)

* Fix stack overflow calculating projected orderings

* fix docs

* Port / Add Documentation for `VarianceSample` and `VariancePopulation` (#12742)

* Upgrade arrow/parquet to `53.1.0` / fix clippy (#12724)

* Update to arrow/parquet 53.1.0

* Update some API

* update for changed file sizes

* Use non deprecated APIs

* Use ParquetMetadataReader from @etseidl

* remove upstreamed implementation

* Update CSV schema

* Use upstream is_null and is_not_null kernels

* feat: add support for Substrait ExtendedExpression (#12728)

* Add support for serializing and deserializing Substrait ExtendedExpr message

* Address clippy reviews

* Reuse existing rename method

* Transformed::new_transformed: Fix documentation formatting (#12787)

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* fix: Correct results for grouping sets when columns contain nulls (#12571)

* Fix grouping sets behavior when data contains nulls

* PR suggestion comment

* Update new test case

* Add grouping_id to the logical plan

* Add doc comment next to INTERNAL_GROUPING_ID

* Fix unparsing of Aggregate with grouping sets

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

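A minimal sketch of the class of query affected (data made up): with GROUPING SETS over data containing NULLs, rows where a column is grouped out must not be conflated with rows where that column is genuinely NULL, which is what the internal grouping id tracks.

```sql
CREATE TABLE t (a INT, b INT) AS VALUES (1, NULL), (NULL, 2), (1, 2);
SELECT a, b, count(*) FROM t GROUP BY GROUPING SETS ((a), (b), (a, b));
```
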
* Migrate documentation for all string functions from scalar_functions.md to code  (#12775)

* Added documentation for string and unicode functions.

* Fixed issues with aliases.

* Cargo fmt.

* Minor doc fixes.

* Update docs for var_pop/samp

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* Account for constant equivalence properties in union, tests (#12562)

* Minor: clarify comment about empty dependencies (#12786)

* Introduce Signature::String and return error if  input of `strpos` is integer (#12751)

* fix sig

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* fix

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* fix error

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* fix all signature

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* fix all signature

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* change default type

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* clippy

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* fix docs

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* rm deadcode

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* cleanup

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* cleanup

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

* rm test

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

---------

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>

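For illustration (not taken from the PR's tests): with a string signature, an integer argument to strpos is now rejected instead of being silently coerced.

```sql
SELECT strpos('datafusion', 'fus');  -- returns 5
SELECT strpos(12345, 23);            -- now raises a plan-time error
```
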
* Minor: improve docs on MovingMin/MovingMax (#12790)

* Add slt tests (#12721)

---------

Signed-off-by: jayzhan211 <jayzhan211@gmail.com>
Signed-off-by: Austin Liu <austin362667@gmail.com>
Co-authored-by: OussamaSaoudi <45303303+OussamaSaoudi@users.noreply.github.com>
Co-authored-by: Jonah Gao <jonahgao@msn.com>
Co-authored-by: Dmitrii Blaginin <dmitrii@blaginin.me>
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: Tomoaki Kawada <kawada@kmckk.co.jp>
Co-authored-by: Piotr Findeisen <piotr.findeisen@gmail.com>
Co-authored-by: Jay Zhan <jayzhan211@gmail.com>
Co-authored-by: HuSen <husen.xjtu@gmail.com>
Co-authored-by: Emil Ejbyfeldt <emil.ejbyfeldt@gmail.com>
Co-authored-by: Simon Vandel Sillesen <simon.vandel@gmail.com>
Co-authored-by: Jax Liu <liugs963@gmail.com>
Co-authored-by: Austin Liu <austin362667@gmail.com>
Co-authored-by: JonasDev1 <jswipp@googlemail.com>
Co-authored-by: jcsherin <jacob@protoship.io>
Co-authored-by: Sergei Grebnov <sergei.grebnov@gmail.com>
Co-authored-by: Andy Grove <agrove@apache.org>
Co-authored-by: Bruce Ritchie <bruce.ritchie@veeva.com>
Co-authored-by: DouPache <douenergy@gmail.com>
Co-authored-by: mertak-synnada <mertak67+synaada@gmail.com>
Co-authored-by: Bryce Mecum <petridish@gmail.com>
Co-authored-by: wiedld <wiedld@users.noreply.github.com>
Co-authored-by: kamille <caoruiqiu.crq@antgroup.com>
Co-authored-by: Weston Pace <weston.pace@gmail.com>
Co-authored-by: Val Lorentz <vlorentz@softwareheritage.org>
1 parent 454db7e commit f475a0f
Showing 287 changed files with 9,951 additions and 3,563 deletions.
7 changes: 6 additions & 1 deletion .github/workflows/rust.yml
@@ -521,7 +521,7 @@ jobs:
run: taplo format --check

config-docs-check:
-name: check configs.md is up-to-date
+name: check configs.md and ***_functions.md is up-to-date
needs: [ linux-build-lib ]
runs-on: ubuntu-latest
container:
@@ -542,6 +542,11 @@ jobs:
# If you encounter an error, run './dev/update_config_docs.sh' and commit
./dev/update_config_docs.sh
git diff --exit-code
+- name: Check if any of the ***_functions.md has been modified
+  run: |
+    # If you encounter an error, run './dev/update_function_docs.sh' and commit
+    ./dev/update_function_docs.sh
+    git diff --exit-code
# Verify MSRV for the crates which are directly used by other projects:
# - datafusion
18 changes: 9 additions & 9 deletions Cargo.toml
@@ -70,22 +70,22 @@ version = "42.0.0"
ahash = { version = "0.8", default-features = false, features = [
"runtime-rng",
] }
arrow = { version = "53.0.0", features = [
arrow = { version = "53.1.0", features = [
"prettyprint",
] }
-arrow-array = { version = "53.0.0", default-features = false, features = [
+arrow-array = { version = "53.1.0", default-features = false, features = [
"chrono-tz",
] }
arrow-buffer = { version = "53.0.0", default-features = false }
arrow-flight = { version = "53.0.0", features = [
arrow-buffer = { version = "53.1.0", default-features = false }
arrow-flight = { version = "53.1.0", features = [
"flight-sql-experimental",
] }
-arrow-ipc = { version = "53.0.0", default-features = false, features = [
+arrow-ipc = { version = "53.1.0", default-features = false, features = [
"lz4",
] }
arrow-ord = { version = "53.0.0", default-features = false }
arrow-schema = { version = "53.0.0", default-features = false }
arrow-string = { version = "53.0.0", default-features = false }
arrow-ord = { version = "53.1.0", default-features = false }
arrow-schema = { version = "53.1.0", default-features = false }
arrow-string = { version = "53.1.0", default-features = false }
async-trait = "0.1.73"
bigdecimal = "=0.4.1"
bytes = "1.4"
@@ -126,7 +126,7 @@ log = "^0.4"
num_cpus = "1.13.0"
object_store = { version = "0.11.0", default-features = false }
parking_lot = "0.12"
-parquet = { version = "53.0.0", default-features = false, features = [
+parquet = { version = "53.1.0", default-features = false, features = [
"arrow",
"async",
"object_store",
17 changes: 14 additions & 3 deletions README.md
@@ -42,14 +42,25 @@
</a>

DataFusion is an extensible query engine written in [Rust] that
-uses [Apache Arrow] as its in-memory format. DataFusion's target users are
+uses [Apache Arrow] as its in-memory format.

+The DataFusion libraries in this repository are used to build data-centric system software. DataFusion also provides the
+following subprojects, which are packaged versions of DataFusion intended for end users.

+- [DataFusion Python](https://github.com/apache/datafusion-python/) offers a Python interface for SQL and DataFrame
+queries.
+- [DataFusion Ray](https://github.com/apache/datafusion-ray/) provides a distributed version of DataFusion that scales
+out on Ray clusters.
+- [DataFusion Comet](https://github.com/apache/datafusion-comet/) is an accelerator for Apache Spark based on
+DataFusion.

+The target audience for the DataFusion crates in this repository are
developers building fast and feature rich database and analytic systems,
customized to particular workloads. See [use cases] for examples.

"Out of the box," DataFusion offers [SQL] and [`Dataframe`] APIs,
DataFusion offers [SQL] and [`Dataframe`] APIs,
excellent [performance], built-in support for CSV, Parquet, JSON, and Avro,
extensive customization, and a great community.
-[Python Bindings] are also available.

DataFusion features a full query planner, a columnar, streaming, multi-threaded,
vectorized execution engine, and partitioned data sources. You can
14 changes: 14 additions & 0 deletions benchmarks/bench.sh
@@ -211,6 +211,7 @@ main() {
run_clickbench_1
run_clickbench_partitioned
run_clickbench_extended
run_imdb
;;
tpch)
run_tpch "1"
@@ -239,6 +240,9 @@
clickbench_extended)
run_clickbench_extended
;;
imdb)
run_imdb
;;
*)
echo "Error: unknown benchmark '$BENCHMARK' for run"
usage
@@ -510,6 +514,16 @@ data_imdb() {
fi
}

# Runs the imdb benchmark
run_imdb() {
IMDB_DIR="${DATA_DIR}/imdb"

RESULTS_FILE="${RESULTS_DIR}/imdb.json"
echo "RESULTS_FILE: ${RESULTS_FILE}"
echo "Running imdb benchmark..."
$CARGO_COMMAND --bin imdb -- benchmark datafusion --iterations 5 --path "${IMDB_DIR}" --prefer_hash_join "${PREFER_HASH_JOIN}" --format parquet -o "${RESULTS_FILE}"
}




1 change: 1 addition & 0 deletions benchmarks/queries/imdb/10a.sql
@@ -0,0 +1 @@
SELECT MIN(chn.name) AS uncredited_voiced_character, MIN(t.title) AS russian_movie FROM char_name AS chn, cast_info AS ci, company_name AS cn, company_type AS ct, movie_companies AS mc, role_type AS rt, title AS t WHERE ci.note like '%(voice)%' and ci.note like '%(uncredited)%' AND cn.country_code = '[ru]' AND rt.role = 'actor' AND t.production_year > 2005 AND t.id = mc.movie_id AND t.id = ci.movie_id AND ci.movie_id = mc.movie_id AND chn.id = ci.person_role_id AND rt.id = ci.role_id AND cn.id = mc.company_id AND ct.id = mc.company_type_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/10b.sql
@@ -0,0 +1 @@
SELECT MIN(chn.name) AS character, MIN(t.title) AS russian_mov_with_actor_producer FROM char_name AS chn, cast_info AS ci, company_name AS cn, company_type AS ct, movie_companies AS mc, role_type AS rt, title AS t WHERE ci.note like '%(producer)%' AND cn.country_code = '[ru]' AND rt.role = 'actor' AND t.production_year > 2010 AND t.id = mc.movie_id AND t.id = ci.movie_id AND ci.movie_id = mc.movie_id AND chn.id = ci.person_role_id AND rt.id = ci.role_id AND cn.id = mc.company_id AND ct.id = mc.company_type_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/10c.sql
@@ -0,0 +1 @@
SELECT MIN(chn.name) AS character, MIN(t.title) AS movie_with_american_producer FROM char_name AS chn, cast_info AS ci, company_name AS cn, company_type AS ct, movie_companies AS mc, role_type AS rt, title AS t WHERE ci.note like '%(producer)%' AND cn.country_code = '[us]' AND t.production_year > 1990 AND t.id = mc.movie_id AND t.id = ci.movie_id AND ci.movie_id = mc.movie_id AND chn.id = ci.person_role_id AND rt.id = ci.role_id AND cn.id = mc.company_id AND ct.id = mc.company_type_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/11a.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS from_company, MIN(lt.link) AS movie_link_type, MIN(t.title) AS non_polish_sequel_movie FROM company_name AS cn, company_type AS ct, keyword AS k, link_type AS lt, movie_companies AS mc, movie_keyword AS mk, movie_link AS ml, title AS t WHERE cn.country_code !='[pl]' AND (cn.name LIKE '%Film%' OR cn.name LIKE '%Warner%') AND ct.kind ='production companies' AND k.keyword ='sequel' AND lt.link LIKE '%follow%' AND mc.note IS NULL AND t.production_year BETWEEN 1950 AND 2000 AND lt.id = ml.link_type_id AND ml.movie_id = t.id AND t.id = mk.movie_id AND mk.keyword_id = k.id AND t.id = mc.movie_id AND mc.company_type_id = ct.id AND mc.company_id = cn.id AND ml.movie_id = mk.movie_id AND ml.movie_id = mc.movie_id AND mk.movie_id = mc.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/11b.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS from_company, MIN(lt.link) AS movie_link_type, MIN(t.title) AS sequel_movie FROM company_name AS cn, company_type AS ct, keyword AS k, link_type AS lt, movie_companies AS mc, movie_keyword AS mk, movie_link AS ml, title AS t WHERE cn.country_code !='[pl]' AND (cn.name LIKE '%Film%' OR cn.name LIKE '%Warner%') AND ct.kind ='production companies' AND k.keyword ='sequel' AND lt.link LIKE '%follows%' AND mc.note IS NULL AND t.production_year = 1998 and t.title like '%Money%' AND lt.id = ml.link_type_id AND ml.movie_id = t.id AND t.id = mk.movie_id AND mk.keyword_id = k.id AND t.id = mc.movie_id AND mc.company_type_id = ct.id AND mc.company_id = cn.id AND ml.movie_id = mk.movie_id AND ml.movie_id = mc.movie_id AND mk.movie_id = mc.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/11c.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS from_company, MIN(mc.note) AS production_note, MIN(t.title) AS movie_based_on_book FROM company_name AS cn, company_type AS ct, keyword AS k, link_type AS lt, movie_companies AS mc, movie_keyword AS mk, movie_link AS ml, title AS t WHERE cn.country_code !='[pl]' and (cn.name like '20th Century Fox%' or cn.name like 'Twentieth Century Fox%') AND ct.kind != 'production companies' and ct.kind is not NULL AND k.keyword in ('sequel', 'revenge', 'based-on-novel') AND mc.note is not NULL AND t.production_year > 1950 AND lt.id = ml.link_type_id AND ml.movie_id = t.id AND t.id = mk.movie_id AND mk.keyword_id = k.id AND t.id = mc.movie_id AND mc.company_type_id = ct.id AND mc.company_id = cn.id AND ml.movie_id = mk.movie_id AND ml.movie_id = mc.movie_id AND mk.movie_id = mc.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/11d.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS from_company, MIN(mc.note) AS production_note, MIN(t.title) AS movie_based_on_book FROM company_name AS cn, company_type AS ct, keyword AS k, link_type AS lt, movie_companies AS mc, movie_keyword AS mk, movie_link AS ml, title AS t WHERE cn.country_code !='[pl]' AND ct.kind != 'production companies' and ct.kind is not NULL AND k.keyword in ('sequel', 'revenge', 'based-on-novel') AND mc.note is not NULL AND t.production_year > 1950 AND lt.id = ml.link_type_id AND ml.movie_id = t.id AND t.id = mk.movie_id AND mk.keyword_id = k.id AND t.id = mc.movie_id AND mc.company_type_id = ct.id AND mc.company_id = cn.id AND ml.movie_id = mk.movie_id AND ml.movie_id = mc.movie_id AND mk.movie_id = mc.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/12a.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS movie_company, MIN(mi_idx.info) AS rating, MIN(t.title) AS drama_horror_movie FROM company_name AS cn, company_type AS ct, info_type AS it1, info_type AS it2, movie_companies AS mc, movie_info AS mi, movie_info_idx AS mi_idx, title AS t WHERE cn.country_code = '[us]' AND ct.kind = 'production companies' AND it1.info = 'genres' AND it2.info = 'rating' AND mi.info in ('Drama', 'Horror') AND mi_idx.info > '8.0' AND t.production_year between 2005 and 2008 AND t.id = mi.movie_id AND t.id = mi_idx.movie_id AND mi.info_type_id = it1.id AND mi_idx.info_type_id = it2.id AND t.id = mc.movie_id AND ct.id = mc.company_type_id AND cn.id = mc.company_id AND mc.movie_id = mi.movie_id AND mc.movie_id = mi_idx.movie_id AND mi.movie_id = mi_idx.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/12b.sql
@@ -0,0 +1 @@
SELECT MIN(mi.info) AS budget, MIN(t.title) AS unsuccsessful_movie FROM company_name AS cn, company_type AS ct, info_type AS it1, info_type AS it2, movie_companies AS mc, movie_info AS mi, movie_info_idx AS mi_idx, title AS t WHERE cn.country_code ='[us]' AND ct.kind is not NULL and (ct.kind ='production companies' or ct.kind = 'distributors') AND it1.info ='budget' AND it2.info ='bottom 10 rank' AND t.production_year >2000 AND (t.title LIKE 'Birdemic%' OR t.title LIKE '%Movie%') AND t.id = mi.movie_id AND t.id = mi_idx.movie_id AND mi.info_type_id = it1.id AND mi_idx.info_type_id = it2.id AND t.id = mc.movie_id AND ct.id = mc.company_type_id AND cn.id = mc.company_id AND mc.movie_id = mi.movie_id AND mc.movie_id = mi_idx.movie_id AND mi.movie_id = mi_idx.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/12c.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS movie_company, MIN(mi_idx.info) AS rating, MIN(t.title) AS mainstream_movie FROM company_name AS cn, company_type AS ct, info_type AS it1, info_type AS it2, movie_companies AS mc, movie_info AS mi, movie_info_idx AS mi_idx, title AS t WHERE cn.country_code = '[us]' AND ct.kind = 'production companies' AND it1.info = 'genres' AND it2.info = 'rating' AND mi.info in ('Drama', 'Horror', 'Western', 'Family') AND mi_idx.info > '7.0' AND t.production_year between 2000 and 2010 AND t.id = mi.movie_id AND t.id = mi_idx.movie_id AND mi.info_type_id = it1.id AND mi_idx.info_type_id = it2.id AND t.id = mc.movie_id AND ct.id = mc.company_type_id AND cn.id = mc.company_id AND mc.movie_id = mi.movie_id AND mc.movie_id = mi_idx.movie_id AND mi.movie_id = mi_idx.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/13a.sql
@@ -0,0 +1 @@
SELECT MIN(mi.info) AS release_date, MIN(miidx.info) AS rating, MIN(t.title) AS german_movie FROM company_name AS cn, company_type AS ct, info_type AS it, info_type AS it2, kind_type AS kt, movie_companies AS mc, movie_info AS mi, movie_info_idx AS miidx, title AS t WHERE cn.country_code ='[de]' AND ct.kind ='production companies' AND it.info ='rating' AND it2.info ='release dates' AND kt.kind ='movie' AND mi.movie_id = t.id AND it2.id = mi.info_type_id AND kt.id = t.kind_id AND mc.movie_id = t.id AND cn.id = mc.company_id AND ct.id = mc.company_type_id AND miidx.movie_id = t.id AND it.id = miidx.info_type_id AND mi.movie_id = miidx.movie_id AND mi.movie_id = mc.movie_id AND miidx.movie_id = mc.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/13b.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS producing_company, MIN(miidx.info) AS rating, MIN(t.title) AS movie_about_winning FROM company_name AS cn, company_type AS ct, info_type AS it, info_type AS it2, kind_type AS kt, movie_companies AS mc, movie_info AS mi, movie_info_idx AS miidx, title AS t WHERE cn.country_code ='[us]' AND ct.kind ='production companies' AND it.info ='rating' AND it2.info ='release dates' AND kt.kind ='movie' AND t.title != '' AND (t.title LIKE '%Champion%' OR t.title LIKE '%Loser%') AND mi.movie_id = t.id AND it2.id = mi.info_type_id AND kt.id = t.kind_id AND mc.movie_id = t.id AND cn.id = mc.company_id AND ct.id = mc.company_type_id AND miidx.movie_id = t.id AND it.id = miidx.info_type_id AND mi.movie_id = miidx.movie_id AND mi.movie_id = mc.movie_id AND miidx.movie_id = mc.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/13c.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS producing_company, MIN(miidx.info) AS rating, MIN(t.title) AS movie_about_winning FROM company_name AS cn, company_type AS ct, info_type AS it, info_type AS it2, kind_type AS kt, movie_companies AS mc, movie_info AS mi, movie_info_idx AS miidx, title AS t WHERE cn.country_code ='[us]' AND ct.kind ='production companies' AND it.info ='rating' AND it2.info ='release dates' AND kt.kind ='movie' AND t.title != '' AND (t.title LIKE 'Champion%' OR t.title LIKE 'Loser%') AND mi.movie_id = t.id AND it2.id = mi.info_type_id AND kt.id = t.kind_id AND mc.movie_id = t.id AND cn.id = mc.company_id AND ct.id = mc.company_type_id AND miidx.movie_id = t.id AND it.id = miidx.info_type_id AND mi.movie_id = miidx.movie_id AND mi.movie_id = mc.movie_id AND miidx.movie_id = mc.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/13d.sql
@@ -0,0 +1 @@
SELECT MIN(cn.name) AS producing_company, MIN(miidx.info) AS rating, MIN(t.title) AS movie FROM company_name AS cn, company_type AS ct, info_type AS it, info_type AS it2, kind_type AS kt, movie_companies AS mc, movie_info AS mi, movie_info_idx AS miidx, title AS t WHERE cn.country_code ='[us]' AND ct.kind ='production companies' AND it.info ='rating' AND it2.info ='release dates' AND kt.kind ='movie' AND mi.movie_id = t.id AND it2.id = mi.info_type_id AND kt.id = t.kind_id AND mc.movie_id = t.id AND cn.id = mc.company_id AND ct.id = mc.company_type_id AND miidx.movie_id = t.id AND it.id = miidx.info_type_id AND mi.movie_id = miidx.movie_id AND mi.movie_id = mc.movie_id AND miidx.movie_id = mc.movie_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/14a.sql
@@ -0,0 +1 @@
SELECT MIN(mi_idx.info) AS rating, MIN(t.title) AS northern_dark_movie FROM info_type AS it1, info_type AS it2, keyword AS k, kind_type AS kt, movie_info AS mi, movie_info_idx AS mi_idx, movie_keyword AS mk, title AS t WHERE it1.info = 'countries' AND it2.info = 'rating' AND k.keyword in ('murder', 'murder-in-title', 'blood', 'violence') AND kt.kind = 'movie' AND mi.info IN ('Sweden', 'Norway', 'Germany', 'Denmark', 'Swedish', 'Denish', 'Norwegian', 'German', 'USA', 'American') AND mi_idx.info < '8.5' AND t.production_year > 2010 AND kt.id = t.kind_id AND t.id = mi.movie_id AND t.id = mk.movie_id AND t.id = mi_idx.movie_id AND mk.movie_id = mi.movie_id AND mk.movie_id = mi_idx.movie_id AND mi.movie_id = mi_idx.movie_id AND k.id = mk.keyword_id AND it1.id = mi.info_type_id AND it2.id = mi_idx.info_type_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/14b.sql
@@ -0,0 +1 @@
SELECT MIN(mi_idx.info) AS rating, MIN(t.title) AS western_dark_production FROM info_type AS it1, info_type AS it2, keyword AS k, kind_type AS kt, movie_info AS mi, movie_info_idx AS mi_idx, movie_keyword AS mk, title AS t WHERE it1.info = 'countries' AND it2.info = 'rating' AND k.keyword in ('murder', 'murder-in-title') AND kt.kind = 'movie' AND mi.info IN ('Sweden', 'Norway', 'Germany', 'Denmark', 'Swedish', 'Denish', 'Norwegian', 'German', 'USA', 'American') AND mi_idx.info > '6.0' AND t.production_year > 2010 and (t.title like '%murder%' or t.title like '%Murder%' or t.title like '%Mord%') AND kt.id = t.kind_id AND t.id = mi.movie_id AND t.id = mk.movie_id AND t.id = mi_idx.movie_id AND mk.movie_id = mi.movie_id AND mk.movie_id = mi_idx.movie_id AND mi.movie_id = mi_idx.movie_id AND k.id = mk.keyword_id AND it1.id = mi.info_type_id AND it2.id = mi_idx.info_type_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/14c.sql
@@ -0,0 +1 @@
SELECT MIN(mi_idx.info) AS rating, MIN(t.title) AS north_european_dark_production FROM info_type AS it1, info_type AS it2, keyword AS k, kind_type AS kt, movie_info AS mi, movie_info_idx AS mi_idx, movie_keyword AS mk, title AS t WHERE it1.info = 'countries' AND it2.info = 'rating' AND k.keyword is not null and k.keyword in ('murder', 'murder-in-title', 'blood', 'violence') AND kt.kind in ('movie', 'episode') AND mi.info IN ('Sweden', 'Norway', 'Germany', 'Denmark', 'Swedish', 'Danish', 'Norwegian', 'German', 'USA', 'American') AND mi_idx.info < '8.5' AND t.production_year > 2005 AND kt.id = t.kind_id AND t.id = mi.movie_id AND t.id = mk.movie_id AND t.id = mi_idx.movie_id AND mk.movie_id = mi.movie_id AND mk.movie_id = mi_idx.movie_id AND mi.movie_id = mi_idx.movie_id AND k.id = mk.keyword_id AND it1.id = mi.info_type_id AND it2.id = mi_idx.info_type_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/15a.sql
@@ -0,0 +1 @@
SELECT MIN(mi.info) AS release_date, MIN(t.title) AS internet_movie FROM aka_title AS at, company_name AS cn, company_type AS ct, info_type AS it1, keyword AS k, movie_companies AS mc, movie_info AS mi, movie_keyword AS mk, title AS t WHERE cn.country_code = '[us]' AND it1.info = 'release dates' AND mc.note like '%(200%)%' and mc.note like '%(worldwide)%' AND mi.note like '%internet%' AND mi.info like 'USA:% 200%' AND t.production_year > 2000 AND t.id = at.movie_id AND t.id = mi.movie_id AND t.id = mk.movie_id AND t.id = mc.movie_id AND mk.movie_id = mi.movie_id AND mk.movie_id = mc.movie_id AND mk.movie_id = at.movie_id AND mi.movie_id = mc.movie_id AND mi.movie_id = at.movie_id AND mc.movie_id = at.movie_id AND k.id = mk.keyword_id AND it1.id = mi.info_type_id AND cn.id = mc.company_id AND ct.id = mc.company_type_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/15b.sql
@@ -0,0 +1 @@
SELECT MIN(mi.info) AS release_date, MIN(t.title) AS youtube_movie FROM aka_title AS at, company_name AS cn, company_type AS ct, info_type AS it1, keyword AS k, movie_companies AS mc, movie_info AS mi, movie_keyword AS mk, title AS t WHERE cn.country_code = '[us]' and cn.name = 'YouTube' AND it1.info = 'release dates' AND mc.note like '%(200%)%' and mc.note like '%(worldwide)%' AND mi.note like '%internet%' AND mi.info like 'USA:% 200%' AND t.production_year between 2005 and 2010 AND t.id = at.movie_id AND t.id = mi.movie_id AND t.id = mk.movie_id AND t.id = mc.movie_id AND mk.movie_id = mi.movie_id AND mk.movie_id = mc.movie_id AND mk.movie_id = at.movie_id AND mi.movie_id = mc.movie_id AND mi.movie_id = at.movie_id AND mc.movie_id = at.movie_id AND k.id = mk.keyword_id AND it1.id = mi.info_type_id AND cn.id = mc.company_id AND ct.id = mc.company_type_id;
1 change: 1 addition & 0 deletions benchmarks/queries/imdb/15c.sql
@@ -0,0 +1 @@
SELECT MIN(mi.info) AS release_date, MIN(t.title) AS modern_american_internet_movie FROM aka_title AS at, company_name AS cn, company_type AS ct, info_type AS it1, keyword AS k, movie_companies AS mc, movie_info AS mi, movie_keyword AS mk, title AS t WHERE cn.country_code = '[us]' AND it1.info = 'release dates' AND mi.note like '%internet%' AND mi.info is not NULL and (mi.info like 'USA:% 199%' or mi.info like 'USA:% 200%') AND t.production_year > 1990 AND t.id = at.movie_id AND t.id = mi.movie_id AND t.id = mk.movie_id AND t.id = mc.movie_id AND mk.movie_id = mi.movie_id AND mk.movie_id = mc.movie_id AND mk.movie_id = at.movie_id AND mi.movie_id = mc.movie_id AND mi.movie_id = at.movie_id AND mc.movie_id = at.movie_id AND k.id = mk.keyword_id AND it1.id = mi.info_type_id AND cn.id = mc.company_id AND ct.id = mc.company_type_id;
