Problem: a DataWorks data-integration job into StarRocks fails with "Failed to flush data to StarRocks"; the error is shown below:
java.lang.RuntimeException: Failed to flush data to StarRocks, Error response: {"Status":"Fail","BeginTxnTimeMs":0,"Message":"Failed to commit txn 43478770. Tablet [3309280] success replica num 0 is less then quorum replica num 1 while error backends 10007","NumberUnselectedRows...
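The message points at "error backends 10007": the write to the tablet replica on that BE failed, so the transaction could not reach the replica quorum. As a first diagnostic step (a sketch; the table name is a placeholder), check backend health and replica status:

```sql
-- Check whether any backend is dead or out of disk space
-- (look at the Alive and available-capacity columns).
SHOW BACKENDS;

-- Inspect replica health for the affected table
-- (replace example_db.example_tbl with the real table).
ADMIN SHOW REPLICA STATUS FROM example_db.example_tbl;
```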
Problem: Kafka consumption fell too far behind (the requested offset had already been purged from the broker), so the StarRocks FE reports the following Routine Load error: ErrorReason{errCode=104,msg='be 11024 abort task with reason: fetch failed due to requested offset not available on the broker: Broker: Offset out of range'} Solution: reset the Kafka offset of the Routine Load job...
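A sketch of resetting the consumption position of the Routine Load job (the job and partition lists are placeholders; OFFSET_END jumps past the purged range to the newest messages, and OFFSET_BEGINNING or explicit offset numbers also work):

```sql
-- The job must be paused before its Kafka properties can be altered.
PAUSE ROUTINE LOAD FOR example_db.example_job;

-- Reset the consumed offset for each partition.
ALTER ROUTINE LOAD FOR example_db.example_job
FROM KAFKA (
    "kafka_partitions" = "0,1,2",
    "kafka_offsets" = "OFFSET_END,OFFSET_END,OFFSET_END"
);

-- Resume consumption from the new offsets.
RESUME ROUTINE LOAD FOR example_db.example_job;
```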
When executing StarRocks INSERT OVERWRITE I often hit a "[1064] [42000]: create partitions failed" error, but the problem disappears when the command is re-run. Could anyone shed some light on this? Thanks.
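One likely reason a retry helps: INSERT OVERWRITE in StarRocks first creates temporary partitions, loads into them, then atomically swaps them in, and "create partitions failed" comes from that first step (for example an FE-side timeout under load), which leaves the target table untouched. A minimal sketch of the statement pattern, with placeholder names:

```sql
-- INSERT OVERWRITE = create temp partitions -> load -> atomic swap;
-- a failure in the first step is safe to retry.
INSERT OVERWRITE target_table
SELECT * FROM staging_table WHERE dt = '2023-07-20';
```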
"plugin_name" : "StarRocks", "username" : "root" } ] } Running Command ./bin/seatunnel.sh -c data/mysql_to_sr_db_sync.conf -e cluster Error Exception Exception in thread "main" org.apache.seatunnel.core.starter.exception.CommandExecuteException: SeaTunnel job executed failed ...
Problem: StarRocks reports "close index channel failed" / "too many tablet versions". Loads arrive too frequently, so compaction cannot merge versions in time and the version count per tablet exceeds the default limit of 1000.
Solutions:
1. Increase the amount of data per load and lower the load frequency.
2. Tune the compaction strategy to merge faster (watch memory and I/O after the change) via be.conf: cumulative_compaction_num_threads_per_disk = 4, base_compaction_num_threads_per_disk = 2 (see the sketch below).
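A be.conf sketch of the knobs above. The version cap itself is also a BE setting (max_tablet_version_num, default 1000 in recent StarRocks releases; verify the name against your version), and raising it only buys time, it does not fix slow compaction:

```properties
# be.conf: give compaction more threads per disk so versions merge faster.
# Watch memory usage and disk I/O after raising these.
cumulative_compaction_num_threads_per_disk = 4
base_compaction_num_threads_per_disk = 2

# Threshold behind "too many tablet versions" (default 1000).
# Raising it is a stopgap, not a fix.
# max_tablet_version_num = 1000
```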
Message: Start to restart database in Cluster: kafka-lydwaa
Reason: RestartStarted
Status: True
Type: Restarting
Last Transition Time: 2024-06-20T07:57:23Z
Message: Failed to process OpsRequest: kafka-lydwaa-restart-dpmmd in cluster: kafka-lydwaa, more detailed informations in status.component...
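The condition says the full detail lives in the resource status. Assuming this is a KubeBlocks OpsRequest (the Message format matches its restart operations), a sketch of pulling the complete status:

```sh
# Show events and per-component status of the failed restart request.
kubectl describe opsrequest kafka-lydwaa-restart-dpmmd

# Dump the raw status, including the status.component detail the
# condition message points at.
kubectl get opsrequest kafka-lydwaa-restart-dpmmd -o yaml
```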
Problem: StarRocks reports [42000][1064] hdfsOpenFile failed.
Solution: a StarRocks committer replied to me on the forum; it turned out to be a packaging problem. Surprisingly, the package name is hardcoded, and the Hive catalog is affected by the Spark client. The Spark jar archive must be named spark-2x.zip (neither spark.zip nor spark-24.zip works), and the config file must hardcode it as spa...
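A sketch of the packaging step under those constraints. The path and the assumption that the archive wraps the Spark client's jars directory (as in typical Spark Load setups) are mine, not from the thread; only the archive name spark-2x.zip is confirmed above:

```sh
# The archive name is hardcoded: it must be exactly spark-2x.zip
# (spark.zip and spark-24.zip are rejected).
cd /opt/spark                 # placeholder Spark client location
zip -q -r spark-2x.zip jars/  # assumption: archive the client's jars/
```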