org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found

The exception is as follows:

    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:290)
    at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:281)
    at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:631)
    at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:189)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1017)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:950)
    at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:201)
    at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:262)
    at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:161)
    at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:161)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:161)
    at org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:262)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:174)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:186)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:181)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:188)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:188)



Background:

CREATE TABLE apachelog (
  host STRING,
  identity STRING,
  user STRING,
  time STRING,
  request STRING,
  status STRING,
  size STRING,
  referer STRING,
  agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^]*) ([^]*) ([^]*) (-|\\[^\\]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\".*\") ([^ \"]*|\".*\"))?"
)
STORED AS TEXTFILE;
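To see what the SerDe's pattern extracts, the input.regex above can be exercised outside Hive. Below is a sketch in Python using a Python-flavored rewrite of the Java pattern (the character classes are spelled out explicitly, since Python's re module does not accept the Java-style `[^]*` shorthand); the sample log line is made up for illustration.

```python
import re

# Python-flavored approximation of the table's input.regex
# (character classes written explicitly; this is an adaptation,
# not the literal pattern from the DDL above).
APACHE_LOG_RE = re.compile(
    r'([^ ]*) ([^ ]*) ([^ ]*) '          # host, identity, user
    r'(-|\[[^\]]*\]) '                   # time, e.g. [10/Oct/2000:13:55:36 -0700]
    r'([^ "]*|"[^"]*") '                 # request, e.g. "GET /x HTTP/1.0"
    r'(-|[0-9]*) (-|[0-9]*)'             # status, size
    r'(?: ([^ "]*|".*") ([^ "]*|".*"))?' # optional referer, agent
)

# A made-up Combined Log Format line for illustration.
line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTP/1.0" 200 2326 '
        '"http://example.com/start.html" "Mozilla/4.08"')

m = APACHE_LOG_RE.fullmatch(line)
host, identity, user, time, request, status, size, referer, agent = m.groups()
print(host, status, size)  # the fields the SerDe would map to columns
```

Each capture group corresponds, in order, to one column of the apachelog table.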


Because the table was created with 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe' rather than the built-in 'org.apache.hadoop.hive.serde2.RegexSerDe', accessing it from spark-sql or spark-shell kept throwing the exception above: the class could not be found, and importing the relevant jar still did not solve it.


Solution:

When starting spark-shell or spark-sql, pass --jars xxxxx.jar to bring in the relevant jar. (Note: normally, adding the jar this way is the obvious fix everyone would think of. But in my case, if the jar path given was a symlink, the exception above was still thrown and the class still could not be found; it only worked after passing the jar's actual path. Possibly Spark mishandles symlinked paths; I'm not sure...)
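The symlink workaround can be applied mechanically: resolve the jar path to its real location before building the --jars argument. A minimal sketch, using made-up paths created in a temporary directory:

```python
import os
import tempfile

# Simulate a symlinked jar: a real (empty) file plus a symlink to it.
# All paths here are made up for illustration.
workdir = tempfile.mkdtemp()
real_jar = os.path.join(workdir, 'hive-contrib.jar')
open(real_jar, 'w').close()
link_jar = os.path.join(workdir, 'link-to-hive-contrib.jar')
os.symlink(real_jar, link_jar)

# Resolve the symlink before handing the path to spark-shell/spark-sql,
# since (per the note above) Spark may fail to load classes when given
# the symlinked path itself.
resolved = os.path.realpath(link_jar)
command = 'spark-shell --jars ' + resolved
print(command)
```

The printed command uses the canonical jar path, which is what worked in the scenario described above.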
