"reason":"Result window is too large, from + size must be less than or equal to: [10000] but was [10009]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting." } ], "type"...
GET /_search
{
  "size": 10000,
  "query": {
    "match": { "user.id": "elkbee" }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    "keep_alive": ...
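The request above references a point-in-time (PIT) id. A minimal sketch of the usual PIT + search_after pattern is below; the index name my-index, the @timestamp sort field, and the placeholder values are assumptions, not from the original post:

```json
POST /my-index/_pit?keep_alive=1m

GET /_search
{
  "size": 1000,
  "query": { "match": { "user.id": "elkbee" } },
  "pit": {
    "id": "<pit id returned by the previous call>",
    "keep_alive": "1m"
  },
  "sort": [ { "@timestamp": "asc" } ],
  "search_after": [ "<last sort value from the previous page>" ]
}
```

Note that the search goes to `GET /_search` without an index in the path, because the PIT id already pins the request to a fixed view of the index; the sort clause is required for search_after to work.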
"reason": "Result window is too large, from + size must be less than or equal to: [10000] but was [10010]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting." } } ] },...
{ "from" : 100000, "size" : 50, "query" : { "term" : { "user" : "alex" } } } 注: from+size不能大于index.max_result_window的默认设置10000 回归mysql 当我们使用深度分页,此时 select * from base_product_shop_sap limit 100000,50 时就会出现慢查询,它相当于先遍历了前100000个,然后取...
For an ordinary system's full-text search use case, however, this cap of 10000 is clearly too low. Going through the official documentation, 福哥 found a feature designed specifically for paging through large result sets: scroll. Elasticsearch's scroll API exists precisely to page through query results. Note that if your use case looks like a Baidu- or Google-style search engine, where users rarely go past the first few pages, plain from + size is still the more efficient choice.
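A minimal sketch of the scroll workflow, assuming an index named my-index (the index name, page size, and keep-alive of 1m are illustrative choices, not from the original post):

```json
POST /my-index/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}

POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id returned by the previous response>"
}
```

The first call opens the scroll context and returns the first page plus a _scroll_id; the second call is repeated with each returned _scroll_id until a page comes back empty.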
Running the corresponding from/size query raises an exception: Result window is too large, from + size must be less than or equal to [10000] but was [100050]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level...
Default: 10000, applied to from + size. from determines the offset at which the returned documents start; size determines how many documents to return. If from + size is large, the query consumes a lot of memory and time; this setting exists precisely to keep the cluster from running out of memory. It is an index-level setting: even when it is applied with _all, the logs show the configuration being added index by index, and any index created afterwards still gets the default max_result_window of 10000.
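Because indices created later fall back to the default, one way to cover them as well is a composable index template (Elasticsearch 7.8+). The template name, index pattern, and value below are assumptions for illustration:

```json
PUT /_index_template/raise-result-window
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": { "index.max_result_window": 50000 }
  }
}
```

Any new index whose name matches logs-* will then be created with the raised limit, instead of the 10000 default.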
You need to set a reasonable number of shards per data node based on your cluster's specification; for details, see the capacity-planning guide and "Size your shards". You can temporarily raise the cluster's maximum shard count through the max_shards_per_node parameter with the following command:
PUT /_cluster/settings
{
  "transient": {
    "cluster": { "max_shards_per_node": 10000 }
  }
}
Important: as a long-term solution, it is not recommended to set...
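To confirm the transient setting took effect, the cluster settings can be read back; a minimal sketch (the flat_settings option just flattens nested keys for readability):

```json
GET /_cluster/settings?flat_settings=true
```

The response's "transient" section should then list the cluster.max_shards_per_node value that was just applied.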