Installing hadoop-3.2.1 on deepin V20
wakawakaohoh
deepin
2020-07-30 23:58
Author
I've installed this many times now, but the namenode just won't start.

Why is there no /etc/sysconfig/network in deepin? Or does it not exist on Debian at all?

I've ruled out everything else; as far as I can tell it's a hostname problem, but since that file doesn't exist I have no way to check the hostname there, and no way to configure a static IP either.
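For reference, Debian-based systems (deepin included) keep the hostname in /etc/hostname; /etc/sysconfig/network is a RHEL/CentOS path and is not expected to exist here. A quick way to inspect and change it (the `waka-PC` name below is just taken from this thread's /etc/hosts):

```shell
# Debian/deepin store the static hostname in /etc/hostname,
# not in /etc/sysconfig/network (that path is RHEL/CentOS-specific).
hostname                               # hostname as the kernel currently sees it
cat /etc/hostname 2>/dev/null || true  # static hostname (may differ until reboot)
# To change it (needs root), something like:
#   hostnamectl set-hostname waka-PC
# then make sure /etc/hosts maps that name to a reachable address.
```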

Here is the error from the log:
2020-07-30 15:39:55,632 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: The value of property bind.address must not be null
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:216)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
        at org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:603)
        at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:558)
        at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119)
        at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:433)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:164)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:885)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:707)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:926)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1692)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1759)


core-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>

  <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
  </property>

  <property>
      <name>hadoop.tmp.dir</name>
      <value>/opt/data/hadoop_repo</value>
  </property>
</configuration>


hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/data/hadoop_repo/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/data/hadoop_repo/dfs/data</value>
    </property>
</configuration>


/etc/hosts
127.0.0.1        localhost
#127.0.1.1 waka-PC
127.0.0.1 waka-PC
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters






All Replies
Sun
deepin
2020-07-31 01:08
#1
Check whether the file permissions are correct, and whether the default core and hdfs config files are present.
If that doesn't work, just use Docker.
wakawakaohoh
deepin
2020-07-31 01:32
#2
https://bbs.deepin.org/post/198059
Check whether the file permissions are correct, and whether the default core and hdfs config files are present.
If that doesn't work, just use Docker.

Solved it. This time I had to configure the port numbers for the namenode and the secondarynamenode separately; that wasn't necessary before.
Sun
deepin
2020-07-31 16:35
#3
https://bbs.deepin.org/post/198059
Solved it. This time I had to configure the port numbers for the namenode and the secondarynamenode separately; that wasn't necessary before. ...

Hadoop 3 changed a lot of things... I'd still suggest you use Docker. It isolates the environment from the system and avoids accumulating junk files that bloat the system over time.