Releases: apache/streampark
1.2.3 (Not Apache release)
Change Log
New Features
- [Feat] Added Scala 2.12 support
- [Feat] Added Flink 1.15 support
- [Feat] Added RestApi integration capability with external systems
- [Feat] Added ES 5 / 6 / 7 DataStream connectors
- [Feat] Added Flink Cluster management ( yarn | k8s )
- [Feat] Added Flink SQL Pulsar connector
- [Feat] Added Flink SQL Http connector
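Flink SQL connectors such as the new Pulsar one are declared through a `CREATE TABLE ... WITH (...)` clause. A minimal sketch, where the connector identifier, option keys, broker address, and topic are illustrative assumptions rather than the connector's documented options:

```shell
# Write a table definition that uses the new Pulsar SQL connector.
# The connector identifier and option keys below are assumptions for
# illustration only; consult the connector docs for the actual names.
cat > pulsar_source.sql <<'EOF'
CREATE TABLE events (
  id STRING,
  ts TIMESTAMP(3)
) WITH (
  'connector'   = 'pulsar',                   -- assumed identifier
  'service-url' = 'pulsar://localhost:6650',  -- placeholder broker
  'topic'       = 'events'                    -- placeholder topic
);
EOF

# Submit it through Flink's SQL client if a cluster is available.
if [ -x "${FLINK_HOME:-}/bin/sql-client.sh" ]; then
  "$FLINK_HOME/bin/sql-client.sh" -f pulsar_source.sql
fi
```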
Enhance / Bug Fix
- [Bug] Fixed bugs related to Kerberos authentication renewal in Hadoop 3 environments
- [Bug] Fixed a bug where logs might not be output when compiling a project
- [Bug] Fixed a bug where the TaskManager managed memory parameter could not be set to 0
- [Bug] Fixed a bug where a savepoint could not be correctly recognized on job resume when the jobId was 0**0
- [Bug] Fixed a bug where the compile button did not appear after a project was modified, leaving the project unable to be recompiled
- [Enhancement] Strengthened the Scala version verification when adding a Flink Home
- [Enhancement] Refactored the DataStream connector module, re-dividing module and package names
- [Enhancement] Migrated the connector example programs to streamx-quickstart
1.2.2 (Not Apache release)
Change Log
New Features
- [Feat] Support Remote deployment mode
- [Feat] Support Yarn-session deployment mode
- [Feat] Support Yarn-perjob deployment mode
- [Feat] Support Apache Doris DataStream connector
- [Feat] Support Redis DataStream connector
- [Feat] Support Flink Cluster management
- [Feat] Support setting a remote maven repository address to speed up dependency downloads
- [Feat] Support specifying maven build parameters when building a project
- [Feat] Unified the standardized build-and-launch process for projects and apps
- [Feat] Bundled maven, so maven no longer has to be installed on the deployment machine
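The remote-repository and build-parameter features map onto standard maven configuration. A sketch of the kind of mirror definition and build flags involved, where the repository URL and mirror id are placeholders:

```shell
# A maven mirror entry of the kind a remote-repo setting corresponds to;
# the id and repository URL are placeholders.
cat > settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <mirrorOf>*</mirrorOf>
      <url>https://repo.example.com/maven2</url>
    </mirror>
  </mirrors>
</settings>
EOF

# Typical flags one might pass as maven build parameters:
MVN_ARGS="-DskipTests -U"
echo "mvn -s settings.xml clean package $MVN_ARGS"
```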
Enhance / Bug Fix
- [Bug] Fixed a job submission bug in on-YARN mode with some Hadoop versions
- [Bug] Fixed a Flink SQL formatting bug
- [Bug] Fixed a bug caused by closing the packageProgram in some Flink versions
- [Bug] Fixed an ineffective Scala version check when adding Flink SQL dependencies (a mismatched version was not blocked)
- [Enhancement] Added a timeout when starting and stopping a job; a job that does not succeed within it is marked failed
- [Bug] Fixed page confusion when switching the Flink deployment mode while adding or editing a job
1.2.1 (Not Apache release)
Change Log
New Features
- [Feat] Support uploading jar-type jobs #237
- [Feat] Automatic Hadoop integration in Flink Kubernetes mode #436
- [Feat] Separated the Flink task build and run processes #437
- [Feat] Support updating project info #650
Enhance / Bug Fix
- [Enhancement] New official website, with documentation reorganized
- [Enhancement] Added a code style framework (Checkstyle) #480
- [Enhancement] Optimized the project build; front-end and back-end can be packaged together or separately #533
- [Enhancement] Added Scala version checking when downloading Flink dependencies #551
- [Enhancement] Added a yarn queue option #596
- [Bug] Fixed no response (no live build log) when committing a build request on the project page #458
- [Bug] Fixed hadoop-user-name not taking effect #449
- [Bug] Fixed a bug in obtaining the Flink version #447
- [Bug] Fixed the wrong jar file being chosen when building #473
- [Bug] Fixed being unable to add a role #467
- [Bug] Fixed the Flink job run status being inaccurate after a restart #536
- [Bug] Fixed a soft-link path bug #519
- [Bug] Fixed the wrong job endTime value #516
- [Bug] Fixed a DDL SQL bug #487
- [Bug] Fixed some legacy bugs in LfsOperator #475
- [Bug] Fixed "specified key too long" errors #465
- [Bug] Fixed developer role permission assignment in StreamX (blank page after re-login) #583
- [Bug] Fixed the editor not following the front-end theme when switching themes #620
- [Bug] Fixed the jvm-metaspace.size parameter not taking effect (wrong unit) #562
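For reference, the jvm-metaspace.size parameter mentioned above corresponds to Flink's `taskmanager.memory.jvm-metaspace.size` option, which takes a memory size with an explicit unit; the value here is only an example:

```shell
# Append the metaspace setting to a flink-conf.yaml; 256mb is just an
# example value, the key requires an explicit memory unit.
cat >> flink-conf.yaml <<'EOF'
taskmanager.memory.jvm-metaspace.size: 256mb
EOF
```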
1.2.0 (Not Apache release)
Change Log
New Features
- Support for the Flink on Kubernetes runtime, including Application and Session modes.
- Support for automatically building the Flink-K8s Application image from Flink SQL.
- A StreamX instance no longer forces a dependency on the Hadoop environment.
- Flink 1.14 support (Flink 1.12, 1.13, and 1.14 are now supported).
Enhance / Bug Fix
- HADOOP_USER_NAME can be specified in Flink on YARN mode.
- Optimized the Kerberos authentication renewal logic.
- Fixed isolation-related bugs across multiple Flink versions.
- Fixed create view syntax bugs in Flink SQL.
- Optimized the SQL parsing tool.
- Fixed a SQL dialect bug.
- New startup script, with added restart|status options.
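The restart|status additions follow the usual service-script pattern. A minimal sketch, not the actual script: the PID file path and messages are placeholders:

```shell
#!/usr/bin/env sh
# Sketch of a start|stop|restart|status dispatcher like the one the new
# startup script adds; the PID file location is a placeholder.
PID_FILE=/tmp/streamx-console.pid

status() {
  # Report running only if the PID file exists and the process is alive.
  if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
    echo running
  else
    echo stopped
  fi
}

case "${1:-}" in
  start)   echo "starting..." ;;   # real script launches the console JVM
  stop)    echo "stopping..." ;;   # real script signals the recorded PID
  restart) echo "restarting..." ;; # real script runs stop then start
  status)  status ;;
  *)       echo "usage: $0 {start|stop|restart|status}" ;;
esac
```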
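Specifying HADOOP_USER_NAME is a plain environment-variable export that the Hadoop client libraries pick up when Kerberos is not in use; the user name below is a placeholder:

```shell
# The Hadoop client submits as this user under simple authentication;
# "hdfs" is just an example value.
export HADOOP_USER_NAME=hdfs
# A subsequent YARN submission (e.g. flink run -m yarn-cluster ...)
# now acts as that user.
echo "$HADOOP_USER_NAME"
```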
...
1.1.1 (Not Apache release)
Change Log
1. Fixed a bug in automatic Kerberos renewal
2. Fixed parameter priority bugs (parameters in flink-conf.yaml took precedence over job-level parameters set on the page)
3. Fixed mainClass not being displayed when editing a standard Apache Flink job
4. Fixed bugs in the email sending parameter settings
5. Fixed the parallelism and slot parameter settings not taking effect
6. Fixed all job names being modified when an error occurred while downloading a project's maven dependencies
7. Fixed the user login response returning the password "salt" to the front end (issue/240)
8. Fixed a possible failure to find the JDK environment in the startup script (issue/238)
9. Added message pushing: build failures and job failures are now pushed to the front end
1.1.0 (Not Apache release)
v1.1.0 release change log:
New Features:
1. Compatibility with both hadoop2 and hadoop3 environments
2. Kerberos authentication support
3. Optimized the strategy for reading hadoop configuration files, making local development and debugging easier
4. Support and compatibility for JDK 8 and above
5. Handling of consecutive checkpoint failures (email alert or restart based on the failure rate)
6. Support for the new SQL syntax added in Flink 1.13
7. Automatic redirect to index.html without completing the path (no more typing /index.html in the address bar)
8. Configurable workspace (HDFS) root path
9. Clicking a job's applicationId in the start log jumps to YARN to view details
Bug Fixes:
1. Simplified the final package structure of streamx-console
2. Fixed bugs in the create view, set, reset, and similar syntax
3. Fixed a file upload bug
4. Optimized the email templates and fixed related bugs
5. Fixed front-end form validation bugs when editing a job
6. Changed the program arguments field type to text, fixing a bug caused by overly long arguments
7. Fixed the bug where, with FLINK_HOME unset, startup failed but the job status stayed "starting"
8. Optimized maven dependencies, with dependency versions now managed centrally
9. Fixed a Flink SQL job being unable to perform any operation after its pom dependency download failed
10. Fixed the sample Flink SQL Demo job failing to execute
11. Fixed project startup failure on JDK 1.8+ where resources could not be dynamically loaded onto the classpath
12. Optimized multiple insert into statements (issues 186)
13. Fixed a NullPointerException caused by editing "more parameters" when modifying a job (issues 202)
14. Fixed a possible bug in parsing the flink-conf.yaml file
15. Fixed a possible bug in the checkpoint return path prefix
1.0.0 (Not Apache release)
[Newfeature]: New support for Flink 1.13, with seamless switching between Flink versions (1.11, 1.12, 1.13; minimum 1.11)
[Newfeature]: New logo
[Bugfix]: Fixed a possible bug when cloning a project locally
[Bugfix]: Made getting the YARN web address compatible with HTTPS
[Bugfix]: Fixed all known bugs