Flink error: File xxxx could only be replicated to 0 nodes instead of minReplication (=1)

2023-05-22 14:40

1. When submitting a job in Flink on YARN mode, submission kept failing because the data nodes were reported unavailable. The error was:

```
2020-03-26 12:36:07,248 ERROR org.apache.flink.yarn.YarnResourceManager - Could not start TaskManager in container container_e28_1575970105134_0117_01_000267.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/.flink/application_1575970105134_0117/f0f9160f-a876-4437-a715-ff362c758803-taskmanager-conf.yaml could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1726)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2567)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:829)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
    at org.apache.hadoop.ipc.Client.call(Client.java:1435)
    at org.apache.hadoop.ipc.Client.call(Client.java:1345)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy33.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:444)
    at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
    at com.sun.proxy.$Proxy34.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1838)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1638)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)
```

2. Checking cluster status with `hadoop dfsadmin -report` showed that the HDFS disks were full:

![watermark_type_ZmFuZ3poZW5naGVpdGk_shadow_10_text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dwcHdwcDE_size_16_color_FFFFFF_t_70][]

3. Inspecting hdfs-site.xml revealed that about 500 GB of disk space was reserved per DataNode. Since this is a real-time compute cluster whose nodes only have 500 GB of disk in total, HDFS had no usable space left and block allocation failed:

```xml
<property>
    <name>dfs.datanode.du.reserved</name>
    <value>502231566336</value>
</property>
```

4. Change the reserved space to roughly 100 GB:

```xml
<property>
    <name>dfs.datanode.du.reserved</name>
    <value>102231566336</value>
</property>
```

5. Then restart the DataNode on every node:

```shell
sbin/hadoop-daemon.sh stop datanode
sbin/hadoop-daemon.sh start datanode
```
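The root cause is simple arithmetic: a DataNode offers roughly its disk capacity minus used space minus `dfs.datanode.du.reserved` for new blocks, so a reserved value larger than the disk leaves zero usable bytes on every node, and the NameNode can find 0 targets for the block. A minimal sketch of that accounting (simplified; the real DataNode bookkeeping is more involved) using the values from this article:

```python
def datanode_available(capacity_bytes: int, used_bytes: int, reserved_bytes: int) -> int:
    """Approximate space a DataNode offers for new HDFS blocks:
    capacity - used - reserved, floored at zero."""
    return max(capacity_bytes - used_bytes - reserved_bytes, 0)

DISK = 500 * 10**9  # 500 GB disk per node, as in this cluster

# Original setting: dfs.datanode.du.reserved = 502231566336 (~467.7 GiB).
# The reservation exceeds the whole disk, so even an empty node
# reports 0 usable bytes -> "replicated to 0 nodes".
print(datanode_available(DISK, 0, 502231566336))   # 0

# Corrected setting: 102231566336 (~95.2 GiB) leaves ~397.8 GB usable.
print(datanode_available(DISK, 0, 102231566336))   # 397768433664
```

With all four DataNodes configured identically, every node hit the same zero, which matches the message "There are 4 datanode(s) running and no node(s) are excluded" — the nodes were alive, just unable to accept a single replica.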
[watermark_type_ZmFuZ3poZW5naGVpdGk_shadow_10_text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dwcHdwcDE_size_16_color_FFFFFF_t_70]: /images/20230521/695edb270abb4a3a99243e256830d07d.png

Copyright notice: original article from 蒲公英云; when reprinting or copying, please credit the source with a hyperlink.