Big Data with CDH Data Warehouse (9) | Building the Business Data Warehouse


Contents

  • Generating Business Data
    • Table creation scripts
    • Generating the business data
  • Importing Business Data into the Warehouse
    • Scheduled Sqoop import script
  • ODS Layer
    • Create the order table
    • Create the order detail table
    • Create the SKU table
    • Create the user table
    • Create the level-1 category table
    • Create the level-2 category table
    • Create the level-3 category table
    • Create the payment table
    • ODS layer data import script
  • DWD Layer
    • Create the order table
    • Create the order detail table
    • Create the user table
    • Create the payment table
    • Create the SKU table (with category columns)
    • DWD layer data import script
  • DWS Layer: User Behavior Wide Table
    • Create the user behavior wide table
    • User behavior wide table import script
  • ADS Layer (Requirement: Total GMV)
    • Table creation script
    • Data import script
    • Data export script

Generating Business Data

Database generation scripts:

  1. Link: https://pan.baidu.com/s/1f-yAUqfte-T0Yk2mEBeAUw
  2. Extraction code: cwwx

Table Creation Scripts

1) Create the gmall database with Navicat.

2) Set the database character encoding.

3) Import the table creation script (1建表脚本): select -> 1建表脚本.sql.

4) Repeat the import from step 3 for, in order: 2商品分类数据插入脚本 (category data inserts), 3函数脚本 (functions), and 4存储过程脚本 (stored procedures).

Generating the Business Data

1) The data-generation stored procedure:
init_data ( do_date_string VARCHAR(20), order_incr_num INT, user_incr_num INT, sku_num INT, if_truncate BOOLEAN )
Parameter 1: do_date_string — the date for which to generate data
Parameter 2: order_incr_num — number of orders to generate
Parameter 3: user_incr_num — number of users to generate
Parameter 4: sku_num — number of SKUs to generate
Parameter 5: if_truncate — whether to truncate the existing data first
2) Test case:
(1) Requirement: generate data for 2019-02-10 with 1,000 orders, 200 users and 300 SKUs, truncating any existing data.
(2) Call the procedure, then check the generated data (a sample check follows the call):

    CALL init_data('2019-02-10',1000,200,300,TRUE);
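
A quick sanity check of what the procedure produced can be run straight from the shell. This is only a sketch: it reuses the MySQL root password that appears later in the Sqoop import script, so substitute your own credentials:

    # count the generated rows directly in MySQL (password assumed from the Sqoop script below)
    [root@hadoop102 ~]$ mysql -uroot -p'Yy8266603@' -e "
      SELECT COUNT(*) AS order_cnt FROM gmall.order_info WHERE DATE_FORMAT(create_time,'%Y-%m-%d')='2019-02-10';
      SELECT COUNT(*) AS user_cnt  FROM gmall.user_info;"

With if_truncate set to TRUE, order_cnt should match the 1,000 orders requested.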

Importing Business Data into the Warehouse

The business tables are exported from MySQL into HDFS with Sqoop, then loaded into the Hive ODS layer.

Scheduled Sqoop Import Script

1) Create the script sqoop_import.sh in /root/bin:

    [root@hadoop102 bin]$ vim sqoop_import.sh

Add the following content to the script:

    #!/bin/bash
    export HADOOP_USER_NAME=hive
    db_date=$2
    echo $db_date
    db_name=gmall

    import_data() {
      sqoop import \
      --connect jdbc:mysql://hadoop102:3306/$db_name \
      --username root \
      --password Yy8266603@ \
      --target-dir /origin_data/$db_name/db/$1/$db_date \
      --delete-target-dir \
      --num-mappers 1 \
      --fields-terminated-by "\t" \
      --query "$2"' and $CONDITIONS;'
    }

    import_sku_info(){
      import_data "sku_info" "select id, spu_id, price, sku_name, sku_desc, weight, tm_id, category3_id, create_time from sku_info where 1=1"
    }

    import_user_info(){
      import_data "user_info" "select id, name, birthday, gender, email, user_level, create_time from user_info where 1=1"
    }

    import_base_category1(){
      import_data "base_category1" "select id, name from base_category1 where 1=1"
    }

    import_base_category2(){
      import_data "base_category2" "select id, name, category1_id from base_category2 where 1=1"
    }

    import_base_category3(){
      import_data "base_category3" "select id, name, category2_id from base_category3 where 1=1"
    }

    import_order_detail(){
      import_data "order_detail" "select od.id, order_id, user_id, sku_id, sku_name, order_price, sku_num, o.create_time from order_info o, order_detail od where o.id=od.order_id and DATE_FORMAT(create_time,'%Y-%m-%d')='$db_date'"
    }

    import_payment_info(){
      import_data "payment_info" "select id, out_trade_no, order_id, user_id, alipay_trade_no, total_amount, subject, payment_type, payment_time from payment_info where DATE_FORMAT(payment_time,'%Y-%m-%d')='$db_date'"
    }

    import_order_info(){
      import_data "order_info" "select id, total_amount, order_status, user_id, payment_way, out_trade_no, create_time, operate_time from order_info where (DATE_FORMAT(create_time,'%Y-%m-%d')='$db_date' or DATE_FORMAT(operate_time,'%Y-%m-%d')='$db_date')"
    }

    case $1 in
      "base_category1")
        import_base_category1
      ;;
      "base_category2")
        import_base_category2
      ;;
      "base_category3")
        import_base_category3
      ;;
      "order_info")
        import_order_info
      ;;
      "order_detail")
        import_order_detail
      ;;
      "sku_info")
        import_sku_info
      ;;
      "user_info")
        import_user_info
      ;;
      "payment_info")
        import_payment_info
      ;;
      "all")
        import_base_category1
        import_base_category2
        import_base_category3
        import_order_info
        import_order_detail
        import_sku_info
        import_user_info
        import_payment_info
      ;;
    esac

2) Make the script executable:

    [root@hadoop102 bin]$ chmod 777 sqoop_import.sh

3) Run the script to import the data:

    [root@hadoop102 bin]# sqoop_import.sh all 2019-02-10
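
Before loading anything into Hive, it is worth confirming that the import actually landed in HDFS. The paths follow the --target-dir pattern in the script above; the part-m-00000 file name assumes the single-mapper import used here:

    [root@hadoop102 bin]# hadoop fs -ls /origin_data/gmall/db/order_info/2019-02-10
    [root@hadoop102 bin]# hadoop fs -cat /origin_data/gmall/db/order_info/2019-02-10/part-m-00000 | head -n 3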

4) Change the access permissions on the /origin_data/gmall/db path:

    [root@hadoop102 bin]# sudo -u hdfs hadoop fs -chmod -R 777 /origin_data/gmall/db
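
The script itself only performs one import per invocation; the "scheduled" part of the title is handled by cron. A minimal sketch of a crontab entry (the 00:30 run time and the log path are illustrative, and note that % must be escaped inside crontab):

    # crontab -e (as root): import yesterday's data every day at 00:30
    30 0 * * * /root/bin/sqoop_import.sh all $(date -d '-1 day' +\%F) >> /root/bin/sqoop_import.log 2>&1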

ODS Layer

Create the ODS tables as exact, field-for-field copies of the tables in the business database.

Create the Order Table

    drop table if exists ods_order_info;
    create external table ods_order_info (
        `id` string COMMENT 'order id',
        `total_amount` decimal(10,2) COMMENT 'order amount',
        `order_status` string COMMENT 'order status',
        `user_id` string COMMENT 'user id',
        `payment_way` string COMMENT 'payment method',
        `out_trade_no` string COMMENT 'payment transaction number',
        `create_time` string COMMENT 'create time',
        `operate_time` string COMMENT 'operate time'
    ) COMMENT 'order table'
    PARTITIONED BY (`dt` string)
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ods/ods_order_info/';

Create the Order Detail Table

    drop table if exists ods_order_detail;
    create external table ods_order_detail(
        `id` string COMMENT 'order detail id',
        `order_id` string COMMENT 'order id',
        `user_id` string COMMENT 'user id',
        `sku_id` string COMMENT 'sku id',
        `sku_name` string COMMENT 'sku name',
        `order_price` string COMMENT 'sku price',
        `sku_num` string COMMENT 'sku quantity',
        `create_time` string COMMENT 'create time'
    ) COMMENT 'order detail table'
    PARTITIONED BY (`dt` string)
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ods/ods_order_detail/';

Create the SKU Table

    drop table if exists ods_sku_info;
    create external table ods_sku_info(
        `id` string COMMENT 'sku id',
        `spu_id` string COMMENT 'spu id',
        `price` decimal(10,2) COMMENT 'price',
        `sku_name` string COMMENT 'sku name',
        `sku_desc` string COMMENT 'sku description',
        `weight` string COMMENT 'weight',
        `tm_id` string COMMENT 'brand id',
        `category3_id` string COMMENT 'level-3 category id',
        `create_time` string COMMENT 'create time'
    ) COMMENT 'sku (product) table'
    PARTITIONED BY (`dt` string)
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ods/ods_sku_info/';

Create the User Table

    drop table if exists ods_user_info;
    create external table ods_user_info(
        `id` string COMMENT 'user id',
        `name` string COMMENT 'name',
        `birthday` string COMMENT 'birthday',
        `gender` string COMMENT 'gender',
        `email` string COMMENT 'email',
        `user_level` string COMMENT 'user level',
        `create_time` string COMMENT 'create time'
    ) COMMENT 'user info table'
    PARTITIONED BY (`dt` string)
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ods/ods_user_info/';

Create the Level-1 Category Table

    drop table if exists ods_base_category1;
    create external table ods_base_category1(
        `id` string COMMENT 'id',
        `name` string COMMENT 'name'
    ) COMMENT 'level-1 product category'
    PARTITIONED BY (`dt` string)
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ods/ods_base_category1/';

Create the Level-2 Category Table

    drop table if exists ods_base_category2;
    create external table ods_base_category2(
        `id` string COMMENT 'id',
        `name` string COMMENT 'name',
        `category1_id` string COMMENT 'level-1 category id'
    ) COMMENT 'level-2 product category'
    PARTITIONED BY (`dt` string)
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ods/ods_base_category2/';

Create the Level-3 Category Table

    drop table if exists ods_base_category3;
    create external table ods_base_category3(
        `id` string COMMENT 'id',
        `name` string COMMENT 'name',
        `category2_id` string COMMENT 'level-2 category id'
    ) COMMENT 'level-3 product category'
    PARTITIONED BY (`dt` string)
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ods/ods_base_category3/';

Create the Payment Table

    drop table if exists `ods_payment_info`;
    create external table `ods_payment_info`(
        `id` bigint COMMENT 'id',
        `out_trade_no` string COMMENT 'external trade number',
        `order_id` string COMMENT 'order id',
        `user_id` string COMMENT 'user id',
        `alipay_trade_no` string COMMENT 'Alipay transaction number',
        `total_amount` decimal(16,2) COMMENT 'payment amount',
        `subject` string COMMENT 'transaction subject',
        `payment_type` string COMMENT 'payment type',
        `payment_time` string COMMENT 'payment time'
    ) COMMENT 'payment table'
    PARTITIONED BY (`dt` string)
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ods/ods_payment_info/';

ODS Layer Data Import Script

1) Create the script ods_db.sh in /root/bin:

    [root@hadoop102 bin]$ vim ods_db.sh

Add the following content to the script:

    #!/bin/bash
    APP=gmall
    # If a date is passed as the first argument, use it; otherwise default to yesterday
    if [ -n "$1" ] ;then
      do_date=$1
    else
      do_date=`date -d "-1 day" +%F`
    fi

    sql="
    load data inpath '/origin_data/$APP/db/order_info/$do_date' OVERWRITE into table "$APP".ods_order_info partition(dt='$do_date');
    load data inpath '/origin_data/$APP/db/order_detail/$do_date' OVERWRITE into table "$APP".ods_order_detail partition(dt='$do_date');
    load data inpath '/origin_data/$APP/db/sku_info/$do_date' OVERWRITE into table "$APP".ods_sku_info partition(dt='$do_date');
    load data inpath '/origin_data/$APP/db/user_info/$do_date' OVERWRITE into table "$APP".ods_user_info partition(dt='$do_date');
    load data inpath '/origin_data/$APP/db/payment_info/$do_date' OVERWRITE into table "$APP".ods_payment_info partition(dt='$do_date');
    load data inpath '/origin_data/$APP/db/base_category1/$do_date' OVERWRITE into table "$APP".ods_base_category1 partition(dt='$do_date');
    load data inpath '/origin_data/$APP/db/base_category2/$do_date' OVERWRITE into table "$APP".ods_base_category2 partition(dt='$do_date');
    load data inpath '/origin_data/$APP/db/base_category3/$do_date' OVERWRITE into table "$APP".ods_base_category3 partition(dt='$do_date');
    "
    beeline -u "jdbc:hive2://hadoop102:10000/" -n hive -e "$sql"

2) Make the script executable:

    [root@hadoop102 bin]$ chmod +x ods_db.sh

3) Run the script to load the data:

    [root@hadoop102 bin]$ ods_db.sh 2019-02-10
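
A quick way to confirm the load is to count a couple of the new partitions over the same Beeline connection the script uses:

    [root@hadoop102 bin]$ beeline -u "jdbc:hive2://hadoop102:10000/" -n hive -e "
      select count(*) from gmall.ods_order_info where dt='2019-02-10';
      select count(*) from gmall.ods_user_info  where dt='2019-02-10';"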

DWD Layer

Filter out ODS records whose primary key is null, and perform dimension degeneration on the product category tables (flatten the three category levels into the SKU table).

Create the Order Table

    drop table if exists dwd_order_info;
    create external table dwd_order_info (
        `id` string COMMENT 'order id',
        `total_amount` decimal(10,2) COMMENT 'order amount',
        `order_status` string COMMENT 'order status (1 2 3 4 5)',
        `user_id` string COMMENT 'user id',
        `payment_way` string COMMENT 'payment method',
        `out_trade_no` string COMMENT 'payment transaction number',
        `create_time` string COMMENT 'create time',
        `operate_time` string COMMENT 'operate time'
    )
    PARTITIONED BY (`dt` string)
    stored as parquet
    location '/warehouse/gmall/dwd/dwd_order_info/';

Create the Order Detail Table

    drop table if exists dwd_order_detail;
    create external table dwd_order_detail(
        `id` string COMMENT 'order detail id',
        `order_id` string COMMENT 'order id',
        `user_id` string COMMENT 'user id',
        `sku_id` string COMMENT 'sku id',
        `sku_name` string COMMENT 'sku name',
        `order_price` string COMMENT 'sku price',
        `sku_num` string COMMENT 'sku quantity',
        `create_time` string COMMENT 'create time'
    )
    PARTITIONED BY (`dt` string)
    stored as parquet
    location '/warehouse/gmall/dwd/dwd_order_detail/';

Create the User Table

    drop table if exists dwd_user_info;
    create external table dwd_user_info(
        `id` string COMMENT 'user id',
        `name` string COMMENT 'name',
        `birthday` string COMMENT 'birthday',
        `gender` string COMMENT 'gender',
        `email` string COMMENT 'email',
        `user_level` string COMMENT 'user level',
        `create_time` string COMMENT 'create time'
    )
    PARTITIONED BY (`dt` string)
    stored as parquet
    location '/warehouse/gmall/dwd/dwd_user_info/';

Create the Payment Table

    drop table if exists `dwd_payment_info`;
    create external table `dwd_payment_info`(
        `id` bigint COMMENT 'id',
        `out_trade_no` string COMMENT 'external trade number',
        `order_id` string COMMENT 'order id',
        `user_id` string COMMENT 'user id',
        `alipay_trade_no` string COMMENT 'Alipay transaction number',
        `total_amount` decimal(16,2) COMMENT 'payment amount',
        `subject` string COMMENT 'transaction subject',
        `payment_type` string COMMENT 'payment type',
        `payment_time` string COMMENT 'payment time'
    )
    PARTITIONED BY (`dt` string)
    stored as parquet
    location '/warehouse/gmall/dwd/dwd_payment_info/';

Create the SKU Table (with Category Columns)

    drop table if exists dwd_sku_info;
    create external table dwd_sku_info(
        `id` string COMMENT 'sku id',
        `spu_id` string COMMENT 'spu id',
        `price` decimal(10,2) COMMENT 'price',
        `sku_name` string COMMENT 'sku name',
        `sku_desc` string COMMENT 'sku description',
        `weight` string COMMENT 'weight',
        `tm_id` string COMMENT 'brand id',
        `category3_id` string COMMENT 'level-3 category id',
        `category2_id` string COMMENT 'level-2 category id',
        `category1_id` string COMMENT 'level-1 category id',
        `category3_name` string COMMENT 'level-3 category name',
        `category2_name` string COMMENT 'level-2 category name',
        `category1_name` string COMMENT 'level-1 category name',
        `create_time` string COMMENT 'create time'
    )
    PARTITIONED BY (`dt` string)
    stored as parquet
    location '/warehouse/gmall/dwd/dwd_sku_info/';

DWD Layer Data Import Script

1) Create the script dwd_db.sh in /root/bin:

    [root@hadoop102 bin]$ vim dwd_db.sh

Add the following content to the script:

    #!/bin/bash
    # Variables kept at the top for easy modification
    APP=gmall
    # If a date is passed as the first argument, use it; otherwise default to yesterday
    if [ -n "$1" ] ;then
      do_date=$1
    else
      do_date=`date -d "-1 day" +%F`
    fi

    sql="
    set hive.exec.dynamic.partition.mode=nonstrict;

    insert overwrite table "$APP".dwd_order_info partition(dt)
    select * from "$APP".ods_order_info where dt='$do_date' and id is not null;

    insert overwrite table "$APP".dwd_order_detail partition(dt)
    select * from "$APP".ods_order_detail where dt='$do_date' and id is not null;

    insert overwrite table "$APP".dwd_user_info partition(dt)
    select * from "$APP".ods_user_info where dt='$do_date' and id is not null;

    insert overwrite table "$APP".dwd_payment_info partition(dt)
    select * from "$APP".ods_payment_info where dt='$do_date' and id is not null;

    insert overwrite table "$APP".dwd_sku_info partition(dt)
    select
        sku.id,
        sku.spu_id,
        sku.price,
        sku.sku_name,
        sku.sku_desc,
        sku.weight,
        sku.tm_id,
        sku.category3_id,
        c2.id category2_id,
        c1.id category1_id,
        c3.name category3_name,
        c2.name category2_name,
        c1.name category1_name,
        sku.create_time,
        sku.dt
    from "$APP".ods_sku_info sku
    join "$APP".ods_base_category3 c3 on sku.category3_id=c3.id
    join "$APP".ods_base_category2 c2 on c3.category2_id=c2.id
    join "$APP".ods_base_category1 c1 on c2.category1_id=c1.id
    where sku.dt='$do_date' and c2.dt='$do_date' and c3.dt='$do_date' and c1.dt='$do_date'
    and sku.id is not null;
    "
    beeline -u "jdbc:hive2://hadoop102:10000/" -n hive -e "$sql"

2) Make the script executable:

    [root@hadoop102 bin]$ chmod 777 dwd_db.sh

3) Run the script to load the data:

    [root@hadoop102 bin]$ dwd_db.sh 2019-02-10
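
To verify both the null filtering and the degenerated category dimension, spot-check a few SKU rows and confirm the three category-name columns are populated:

    [root@hadoop102 bin]$ beeline -u "jdbc:hive2://hadoop102:10000/" -n hive -e "
      select id, sku_name, category1_name, category2_name, category3_name
      from gmall.dwd_sku_info where dt='2019-02-10' limit 5;"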

DWS Layer: User Behavior Wide Table

1) Why build a wide table?
The goal is to aggregate each user's single-day behavior into one multi-column row, so that after joining in user dimension attributes the data can be analyzed from different angles.

Create the User Behavior Wide Table

    drop table if exists dws_user_action;
    create external table dws_user_action
    (
        user_id string comment 'user id',
        order_count bigint comment 'number of orders',
        order_amount decimal(16,2) comment 'order amount',
        payment_count bigint comment 'number of payments',
        payment_amount decimal(16,2) comment 'payment amount'
    ) COMMENT 'daily user behavior wide table'
    PARTITIONED BY (`dt` string)
    stored as parquet
    location '/warehouse/gmall/dws/dws_user_action/'
    tblproperties ("parquet.compression"="snappy");

User Behavior Wide Table Import Script

1) Create the script dws_db_wide.sh in /root/bin:

    [root@hadoop102 bin]$ vim dws_db_wide.sh

Add the following content to the script:

    #!/bin/bash
    # Variables kept at the top for easy modification
    APP=gmall
    # If a date is passed as the first argument, use it; otherwise default to yesterday
    if [ -n "$1" ] ;then
      do_date=$1
    else
      do_date=`date -d "-1 day" +%F`
    fi

    sql="
    with
    tmp_order as
    (
        select
            user_id,
            count(*) order_count,
            sum(oi.total_amount) order_amount
        from "$APP".dwd_order_info oi
        where date_format(oi.create_time,'yyyy-MM-dd')='$do_date'
        group by user_id
    ),
    tmp_payment as
    (
        select
            user_id,
            sum(pi.total_amount) payment_amount,
            count(*) payment_count
        from "$APP".dwd_payment_info pi
        where date_format(pi.payment_time,'yyyy-MM-dd')='$do_date'
        group by user_id
    )
    insert overwrite table "$APP".dws_user_action partition(dt='$do_date')
    select
        user_actions.user_id,
        sum(user_actions.order_count),
        sum(user_actions.order_amount),
        sum(user_actions.payment_count),
        sum(user_actions.payment_amount)
    from
    (
        select user_id, order_count, order_amount, 0 payment_count, 0 payment_amount from tmp_order
        union all
        select user_id, 0 order_count, 0 order_amount, payment_count, payment_amount from tmp_payment
    ) user_actions
    group by user_id;
    "
    beeline -u "jdbc:hive2://hadoop102:10000/" -n hive -e "$sql"

2) Make the script executable:

    [root@hadoop102 bin]# chmod 777 dws_db_wide.sh

3) Run the script to load the data:

    [root@hadoop102 bin]# dws_db_wide.sh 2019-02-10
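
Each user with any activity on the day should now appear as exactly one row in the partition; a quick check:

    [root@hadoop102 bin]# beeline -u "jdbc:hive2://hadoop102:10000/" -n hive -e "
      select * from gmall.dws_user_action where dt='2019-02-10' limit 5;"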

ADS Layer (Requirement: Total GMV)

Table Creation Script

    drop table if exists ads_gmv_sum_day;
    create external table ads_gmv_sum_day(
        `dt` string COMMENT 'statistics date',
        `gmv_count` bigint COMMENT 'number of GMV orders for the day',
        `gmv_amount` decimal(16,2) COMMENT 'total GMV order amount for the day',
        `gmv_payment` decimal(16,2) COMMENT 'payment amount for the day'
    ) COMMENT 'daily GMV summary'
    row format delimited fields terminated by '\t'
    location '/warehouse/gmall/ads/ads_gmv_sum_day/';

Data Import Script

1) Create the script ads_db_gmv.sh in /root/bin:

    [root@hadoop102 bin]$ vim ads_db_gmv.sh

Add the following content to the script:

    #!/bin/bash
    # Variables kept at the top for easy modification
    APP=gmall
    # If a date is passed as the first argument, use it; otherwise default to yesterday
    if [ -n "$1" ] ;then
      do_date=$1
    else
      do_date=`date -d "-1 day" +%F`
    fi

    sql="
    insert into table "$APP".ads_gmv_sum_day
    select
        '$do_date' dt,
        sum(order_count) gmv_count,
        sum(order_amount) gmv_amount,
        sum(payment_amount) payment_amount
    from "$APP".dws_user_action
    where dt='$do_date'
    group by dt;
    "
    beeline -u "jdbc:hive2://hadoop102:10000/" -n hive -e "$sql"

2) Make the script executable:

    [root@hadoop102 bin]$ chmod 777 ads_db_gmv.sh

3) Run the script:

    [root@hadoop102 bin]$ ads_db_gmv.sh 2019-02-10
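
The result can be checked directly in Hive. Note that the script uses insert into rather than insert overwrite, so re-running it for the same date appends another row:

    [root@hadoop102 bin]$ beeline -u "jdbc:hive2://hadoop102:10000/" -n hive -e "
      select * from gmall.ads_gmv_sum_day;"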

Data Export Script

1) Create the ads_gmv_sum_day table in MySQL:

    DROP TABLE IF EXISTS ads_gmv_sum_day;
    CREATE TABLE ads_gmv_sum_day(
        `dt` varchar(200) DEFAULT NULL COMMENT 'statistics date',
        `gmv_count` bigint(20) DEFAULT NULL COMMENT 'number of GMV orders for the day',
        `gmv_amount` decimal(16, 2) DEFAULT NULL COMMENT 'total GMV order amount for the day',
        `gmv_payment` decimal(16, 2) DEFAULT NULL COMMENT 'payment amount for the day'
    ) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'daily GMV summary' ROW_FORMAT = Dynamic;

2) Create the script sqoop_export.sh in /root/bin:

    [root@hadoop102 bin]$ vim sqoop_export.sh

Add the following content to the script:

    #!/bin/bash
    export HADOOP_USER_NAME=hive
    db_name=gmall

    export_data() {
      sqoop export \
      --connect "jdbc:mysql://hadoop102:3306/${db_name}?useUnicode=true&characterEncoding=utf-8" \
      --username root \
      --password Atguigu.123456 \
      --table $1 \
      --num-mappers 1 \
      --export-dir /warehouse/$db_name/ads/$1 \
      --input-fields-terminated-by "\t" \
      --update-mode allowinsert \
      --update-key $2 \
      --input-null-string '\\N' \
      --input-null-non-string '\\N'
    }

    case $1 in
      "ads_gmv_sum_day")
        export_data "ads_gmv_sum_day" "dt"
      ;;
      "all")
        export_data "ads_gmv_sum_day" "dt"
      ;;
    esac

3) Make the script executable:

    [root@hadoop102 bin]$ chmod 777 sqoop_export.sh

4) Run the script to export the data:

    [root@hadoop102 bin]$ sqoop_export.sh all

5) Check the exported data in SQLyog:

    select * from ads_gmv_sum_day;
