From b159bb5fdce2f4c76b7014982ea3882ddd49a990 Mon Sep 17 00:00:00 2001
From: walkhan <493107928@qq.com>
Date: Fri, 11 Feb 2022 18:27:52 +0800
Subject: [PATCH] Add English navbar and sidebar
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 docs/en-US/_coverpage.md         |  6 ++--
 docs/en-US/_navbar.md            | 38 +++++---------------
 docs/en-US/_sidebar.md           | 60 ++++++++++++++++++++++++++++++++
 docs/en-US/introduce.md          | 15 ++++++++
 docs/index.html                  |  2 +-
 docs/zh-CN/_navbar.md            |  7 +---
 docs/zh-CN/quick_start/deploy.md |  9 ++++-
 7 files changed, 97 insertions(+), 40 deletions(-)
 create mode 100644 docs/en-US/_sidebar.md
 create mode 100644 docs/en-US/introduce.md

diff --git a/docs/en-US/_coverpage.md b/docs/en-US/_coverpage.md
index 424af59b90..a14988322e 100644
--- a/docs/en-US/_coverpage.md
+++ b/docs/en-US/_coverpage.md
@@ -1,6 +1,6 @@
-![logo](../_media/dlink.svg)
+![logo](../_media/dinky.svg)
 
-# Dlink 0.4
+# Dlink 0.5
 
 > Dlink was born for Apache Flink, making Flink SQL smoother.
 
@@ -11,4 +11,4 @@
 
 [GitHub](https://github.com/DataLinkDC/dlink)
 [Gitee](https://gitee.com/DataLinkDC/dlink)
-[Get Started](/en-US/guide/quickstart)
\ No newline at end of file
+[Get Started](/en-US/introduce)
\ No newline at end of file
diff --git a/docs/en-US/_navbar.md b/docs/en-US/_navbar.md
index 3dda856525..cc725ea1e4 100644
--- a/docs/en-US/_navbar.md
+++ b/docs/en-US/_navbar.md
@@ -1,34 +1,14 @@
-* [Home](/en-US/)
+* [Home](/en-US/introduce.md)
 
-* Guide
-  * [Quick Start](/en-US/guide/quickstart.md)
-  * [Roadmap](/en-US/guide/roadmap.md)
-  * [Deploy](/en-US/guide/deploy.md)
-  * [Functions](/en-US/guide/functions.md)
-  * [Debug](/en-US/guide/debug.md)
+* Documents
+  * [master](/en-US/introduce.md)
+
+* [Downloads](/zh-CN/quick_start/download.md)
 
-* Extend
-  * [Completion](/en-US/extend/completion.md)
-  * [UDF](/en-US/extend/udf.md)
-  * [Connector](/en-US/extend/connector.md)
-  * [Datasource](/en-US/extend/datasource.md)
-  * [Flink version](/en-US/extend/flinkversion.md)
-  * [Flink-CDC](/en-US/extend/flinkcdc.md)
-  * [DolphinScheduler](/en-US/extend/dolphinscheduler.md)
-  * [DataSphereStudio](/en-US/extend/dataspherestudio.md)
-  * [Hive](/en-US/extend/hive.md)
-  * [Doris](/en-US/extend/doris.md)
-  * [Clickhouse](/en-US/extend/clickhouse.md)
-  * [Hudi](/en-US/extend/hudi.md)
-  * [Iceberg](/en-US/extend/hudi.md)
-
-* Share
-  * [Dlink Yarn](/en-US/share/yarnsubmit.md)
-  * [Dlink AGGTABLE](/en-US/share/aggtable.md)
-  * [Dlink Principle](/en-US/share/principle.md)
-
-* API
-  * [OpenAPI](/en-US/api/openapi.md)
+* Developers
+  * [Debugging](/zh-CN/developer-guide/debug.md)
+
+* User Cases
 
 * Language
   * [中文](/)
diff --git a/docs/en-US/_sidebar.md b/docs/en-US/_sidebar.md
new file mode 100644
index 0000000000..df13487b40
--- /dev/null
+++ b/docs/en-US/_sidebar.md
@@ -0,0 +1,60 @@
+
+
+
+- [About Dinky](/zh-CN/introduce.md)
+- Concept and architecture
+  - [System architecture](/zh-CN/architecture.md)
+  - [Basic concept](/zh-CN/concept.md)
+- [Features](/zh-CN/feature.md)
+- Getting Started
+  - [Compile](/zh-CN/quick_start/build.md)
+  - [Deploy](/zh-CN/quick_start/deploy.md)
+- Basic User Guide
+- Reference Manual
+  - FlinkSQL Studio
+    - Job and directory creation
+    - Job development
+    - Job configuration
+    - Job management
+    - Session management
+  - Registration Center
+    - Cluster instance
+    - Cluster configuration
+    - JAR management
+    - DataSource management
+    - Document management
+  - System setup
+    - User management
+    - Flink settings
+  - Operation Center
+    - Life cycle management
+    - Job monitoring
+  - API
+- Best Practices
+  - [Yarn Submission Practice Guide](/zh-CN/practice/yarnsubmit.md)
+  - [Core concepts and implementation principles of Dlink](/zh-CN/practice/principle.md)
+  - [AGGTABLE table-valued aggregation in practice](/zh-CN/practice/aggtable.md)
+- Extend
+  - Integration
+    - [Flink-CDC Integration](/zh-CN/extend/flinkcdc.md)
+    - [Flink-CDC-Kafka multi-source merge](/zh-CN/extend/Flink_CDC_kafka_Multi_source_merger.md)
+    - [Hive Integration](/zh-CN/extend/hive.md)
+    - [ClickHouse Integration](/zh-CN/extend/clickhouse.md)
+    - [Doris Integration](/zh-CN/extend/doris.md)
+    - [Hudi Integration](/zh-CN/extend/hudi.md)
+    - [Iceberg Integration](/zh-CN/extend/iceberg.md)
+    - [Flink UDF Integration](/zh-CN/extend/udf.md)
+    - [DolphinScheduler Integration](/zh-CN/extend/dolphinscheduler.md)
+    - [DataSphereStudio Integration](/zh-CN/extend/dataspherestudio.md)
+  - Others
+    - [Extending the Flink version](/zh-CN/extend/flinkversion.md)
+    - [Extending connectors](/zh-CN/extend/connector.md)
+    - [Extending DataSources](/zh-CN/extend/datasource.md)
+    - [FlinkSQL editor autocompletion](/zh-CN/extend/completion.md)
+- Developer's Guide
+- FAQ
+- [Thanks](/zh-CN/others/thanks.md)
+- [Communication and contribution](/zh-CN/others/comminicate.md)
+- [Roadmap](/zh-CN/roadmap.md)
+
+ 
\ No newline at end of file
diff --git a/docs/en-US/introduce.md b/docs/en-US/introduce.md
new file mode 100644
index 0000000000..cebef982b1
--- /dev/null
+++ b/docs/en-US/introduce.md
@@ -0,0 +1,15 @@
+## Introducing Dinky
+Real time is the future. Dinky was created for Apache Flink to keep Flink SQL agile and to help build real-time computing platforms.
+
+Dinky is built on Apache Flink; it enhances Flink's usability and experience and explores the streaming data warehouse. It innovates and practices while standing on the shoulders of giants, and it has unlimited potential under the coming trend of unified batch and stream processing.
+
+Finally, Dinky owes its development to the guidance and work of other excellent open source projects such as Apache Flink.
+
+## The origin of Dinky
+Dinky (formerly Dlink):
+
+1. Dinky means "small and delicate", which captures its most intuitive characteristic: lightweight, yet capable of complex big data development.
+
+2. It can also be read as "Data integration is not difficult", meaning "it is easy to build platforms and applications for unified batch and stream processing".
+
+3. The transition from Dlink to Dinky was smooth and clarified the goals of the open source project, constantly reminding participants to "remember why you started".
\ No newline at end of file
diff --git a/docs/index.html b/docs/index.html
index 70758ebcc9..a3dbabe227 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -33,7 +33,7 @@
       search: {
         maxAge: 86400000, // cache expiry in milliseconds; default: one day
         paths: 'auto',
-        placeholder: '搜索',
+        placeholder: 'search',
         noData: 'No records!'
       },
       // pagination plugin
diff --git a/docs/zh-CN/_navbar.md b/docs/zh-CN/_navbar.md
index d4a461c3b2..be7e682da7 100644
--- a/docs/zh-CN/_navbar.md
+++ b/docs/zh-CN/_navbar.md
@@ -1,20 +1,15 @@
 * [Home](/zh-CN/introduce.md)
 
 * Documents
-  * [master]()
-  * [dev](/zh-CN/introduce.md)
-  * [0.5.1]()
-  * [0.5.0]()
+  * [master](/zh-CN/introduce.md)
 
 * [Downloads](/zh-CN/quick_start/download.md)
 
-
 * Developers
   * [Debugging](/zh-CN/developer-guide/debug.md)
 
 * User Cases
 
-
 * Language
   * [中文](/zh-CN/)
   * [En](/en-US/)
diff --git a/docs/zh-CN/quick_start/deploy.md b/docs/zh-CN/quick_start/deploy.md
index 30f615062e..748dbbe103 100644
--- a/docs/zh-CN/quick_start/deploy.md
+++ b/docs/zh-CN/quick_start/deploy.md
@@ -142,8 +142,10 @@
 $nginx -s reload
 ```
 
 ### Load dependencies
-Dinky ships with its own Flink environment. To set it up, create a plugins folder under the Dinky root directory and upload the relevant Flink dependencies, such as flink-dist and flink-table (see the Readme for details; later extension dependencies also go in this directory). You can also specify FLINK_HOME in the startup file, but this is not recommended.
+Dinky ships with its own Flink environment. To set it up, create a plugins folder under the Dinky root directory and upload the relevant Flink dependencies, such as flink-dist and flink-table (see the Readme for details; later extension dependencies also go in this directory). You can also specify FLINK_HOME in the startup file, but this is not recommended.
+
+In the current version of Dinky, the YARN per-job and application execution modes depend on flink-shaded-hadoop to start, so the flink-shaded-hadoop package must be added separately.
+
 ```
 # Create the directory
 cd /opt/dlink/
@@ -154,10 +156,15 @@
 mkdir plugins
 
 https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop-3-uber?repo=cloudera-repos
 ```
 
 After extraction the structure is as shown above; modify the configuration files accordingly. The lib folder holds dlink's own extension files, and the plugins folder holds the official extension files of flink and hadoop (if flink-shaded-hadoop-3-uber or other potentially conflicting jars are placed under plugins, manually delete the conflicting javax.servlet entries inside them). All jars in plugins must be downloaded and added by you for your version number to enable the full feature set; you may also use a Flink package compiled from your own modified sources. The extends folder is only for backing up extension plugins and is not loaded by dlink.
+
 Please double-check that plugins contains the flink-dist, flink-table and flink-shaded-hadoop-3-uber dependencies matching your Flink version, as listed above!!!
+
+If the flink-shaded-hadoop-3-uber jar is placed under plugins, manually delete the javax.servlet entries inside it; after that the default port 8888 (e.g. 127.0.0.1:8888) is reachable and the front-end page opens normally.
+
+On CDH and HDP, using the open-source flink-shaded has no impact on Dlink. Which other dependencies are needed depends on the compatibility of CDH or HDP with the open-source versions; add them accordingly and all Dlink features will work normally.
 
 ### Start Dlink
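The "Load dependencies" steps added above can be sketched as a shell session. This is a minimal sketch under stated assumptions: `DLINK_HOME` and its `/tmp/dlink-demo` fallback are illustrative names (the guide itself uses `/opt/dlink`), the version wildcards are placeholders, and the jar-copy and `zip -d` steps are shown as comments because they depend on your local Flink distribution.

```shell
# Sketch of the dependency-loading steps; DLINK_HOME is an assumed
# variable for illustration -- the guide installs Dinky at /opt/dlink.
DLINK_HOME="${DLINK_HOME:-/tmp/dlink-demo}"

# Create the plugins directory that Dinky scans for Flink/Hadoop jars.
mkdir -p "$DLINK_HOME/plugins"

# Copy in the Flink dependencies matching your Flink version, e.g.:
#   cp "$FLINK_HOME"/lib/flink-dist*.jar  "$DLINK_HOME/plugins/"
#   cp "$FLINK_HOME"/lib/flink-table*.jar "$DLINK_HOME/plugins/"

# For YARN per-job/application mode, also add flink-shaded-hadoop-3-uber
# (see the mvnrepository URL above), then strip the conflicting
# javax.servlet classes from the jar in place:
#   zip -d "$DLINK_HOME"/plugins/flink-shaded-hadoop-3-uber-*.jar 'javax/servlet/*'

# Confirm the directory exists before starting Dinky.
ls -d "$DLINK_HOME/plugins"
```

After populating plugins this way, restart Dinky so the new jars are picked up.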