1.1 Creating DataFrames
In Spark SQL, SQLContext is the entry point for creating DataFrames and executing SQL; in spark-1.5.2 and later, spark-shell already provides a built-in sqlContext.
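Outside of spark-shell (which pre-creates sqlContext for you), a SQLContext can be built from an existing SparkContext. A minimal sketch for the Spark 1.x API:
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)   // sc is an existing SparkContext
import sqlContext.implicits._         // enables the rdd.toDF conversion used below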
1. Create a file locally with three columns, id, name, and age, separated by commas, then upload it to HDFS
hdfs dfs -put person.txt /
The file contents are as follows:
1,boduo,19
2,xiaoze,20
3,longze,21
2. Start spark-shell
[bigdata@bigdata01 ~]$ /usr/local/spark-1.6.1-bin-hadoop2.6/sbin/start-all.sh
[bigdata@bigdata01 ~]$ /usr/local/spark-1.6.1-bin-hadoop2.6/bin/spark-shell --master spark://bigdata01:7077 --executor-memory 1g --total-executor-cores 2
In the spark shell, run the following command to read the data and split each line on the column delimiter:
val lineRDD = sc.textFile("hdfs://bigdata01:9000/person.txt").map(_.split(","))
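To sanity-check the split (an optional step, not part of the original walkthrough), collect the RDD in the shell:
lineRDD.collect
// with the sample file this prints something like: Array(Array(1, boduo, 19), Array(2, xiaoze, 20), Array(3, longze, 21))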
3. Define a case class (this acts as the table schema)
case class Person(id: Int, name: String, age: Int)
4. Map the RDD onto the case class
val personRDD = lineRDD.map(x => Person(x(0).toInt, x(1), x(2).toInt))
5. Convert the RDD into a DataFrame
val personDF = personRDD.toDF
6. Work with the DataFrame
personDF.show
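With the sample data above, show should print a table roughly like the following (exact column widths may differ):
+---+------+---+
| id|  name|age|
+---+------+---+
|  1| boduo| 19|
|  2|xiaoze| 20|
|  3|longze| 21|
+---+------+---+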
1.2 Common DataFrame Operations
//View the contents of the DataFrame
personDF.show
//View only some of the DataFrame's columns
personDF.select(personDF.col("name")).show
personDF.select(col("name"), col("age")).show   // if col is not found in the shell, run: import org.apache.spark.sql.functions._
personDF.select("name").show
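With sqlContext.implicits._ in scope (auto-imported in spark-shell), columns can also be referenced with the $ syntax; a small sketch (the age + 1 expression is just an illustration, not from the original text):
personDF.select($"name", $"age" + 1).show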
//Print the DataFrame's schema
personDF.printSchema
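For the Person case class above, the printed schema should look roughly like this:
root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = true)
 |-- age: integer (nullable = false)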
1.3 SQL-Style Syntax
To use SQL-style syntax, the DataFrame must first be registered as a table:
personDF.registerTempTable("t_person")
//Query the two oldest persons
sqlContext.sql("select * from t_person order by age desc limit 2").show
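The same top-two-by-age query can also be expressed with the DataFrame API instead of SQL; a small sketch:
personDF.orderBy(personDF.col("age").desc).limit(2).show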
//Show the table's schema
sqlContext.sql("desc t_person").show
2. Executing Spark SQL Queries Programmatically
So far we have run SQL queries from the Spark Shell; now we will write a Spark SQL query program in our own application. First, add the Spark SQL dependency to the Maven project's pom.xml:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.5.2</version>
</dependency>
2.1 Inferring the Schema via Reflection
package cn.itcast.spark.day4
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{DataFrame, SQLContext}
/**
* Created by root on 2016/5/19.
*/
object SQLDemo {
def main(args: Array[String]) {
val conf = new SparkConf().setAppName("SQLDemo") //.setMaster("local[2]")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
//set the user name so the job has permission to access HDFS
System.setProperty("user.name", "bigdata")
// val personRdd = sc.textFile("hdfs://192.168.33.71:9000/person.txt").map(line => {
val personRdd = sc.textFile(args(0)).map(line => {
val fields = line.split(",")
Person(fields(0).toLong, fields(1), fields(2).toInt)
})
import sqlContext.implicits._
val personDf = personRdd.toDF
personDf.registerTempTable("person")
// sqlContext.sql("select * from person where age >= 20 order by age desc limit 2").show()
val df: DataFrame = sqlContext.sql("select * from person where age >= 20 order by age desc limit 2")
df.write.json(args(1))
sc.stop()
}
}
case class Person(id: Long, name: String, age: Int)
Package the program into a jar, upload it to the Spark cluster, and submit the Spark job:
/usr/local/spark-1.6.1-bin-hadoop2.6/bin/spark-submit \
--class cn.itcast.spark.day4.SQLDemo \
--master spark://bigdata01:7077 \
/home/bigdata/hello-spark-1.0.jar \
hdfs://bigdata01:9000/person.txt \
hdfs://bigdata01:9000/out/
To run against local files instead of HDFS, the last two arguments can be replaced with:
file:///home/bigdata/person.txt \
file:///home/bigdata/out
The final result is as follows:
[bigdata@bigdata01 hadoop-2.6.4]$ hdfs dfs -ls /out
Found 2 items
-rw-r--r-- 3 bigdata supergroup 0 2017-05-28 17:24 /out/_SUCCESS
-rw-r--r-- 3 bigdata supergroup 38 2017-05-28 17:24 /out/part-r-00000-3744ea1c-6db9-411f-a804-b51e616453f7
[bigdata@bigdata01 hadoop-2.6.4]$ hdfs dfs -cat /out/part-r-00000-3744ea1c-6db9-411f-a804-b51e616453f7
{"id":3,"name":"canglaoshi","age":20}