Alibaba ACP Big Data (ACP-BigData1) Certification Exam Syllabus

The Alibaba ACP Big Data exam preparation guide is designed to provide candidates with the necessary information about the ACP-BigData1 exam. It includes the exam summary, sample questions, practice test, and objectives, along with guidance on interpreting the exam objectives, so that candidates can assess the types of questions that may be asked during the Alibaba Big Data (ACP) exam.

All candidates are advised to review the ACP Big Data objectives and sample questions provided in this preparation guide. The Alibaba ACP-BigData1 certification is mainly targeted at candidates who want to build their career in the big data domain and demonstrate their expertise. We suggest using the practice exam listed in this guide to get used to the exam environment and to identify the knowledge areas where you need more work before taking the actual Alibaba ACP Big Data exam.

Alibaba ACP Big Data Exam Summary:

Exam Name: Alibaba ACP Big Data (ACP-BigData1)
Exam Code: ACP-BigData1
Exam Price: 120 USD
Duration: 120 minutes
Number of Questions: 60
Passing Score: 65/100
Recommended Training / Books:
Schedule Exam: Pearson VUE
Sample Questions: Alibaba ACP Big Data Sample Questions
Recommended Practice: Alibaba Big Data (ACP) Practice Test

Alibaba ACP Big Data Syllabus:

MaxCompute:

- Familiar with the basic concepts of the big data computing service, including projects, tables, partitions, resources, tasks, etc.
- Understand the architecture of the big data computing service, including its composition and the function of each component.
- Master the characteristics, advantages, and application scenarios of the Alibaba Cloud big data computing service.
- Know how to connect to and use the computing service, including via the odpscmd client, the management console, the Java SDK, etc. (see the sketch after this list).
- Know how to upload data to and download data from the big data computing service, able to use the Tunnel command-line tool, and understand the Tunnel SDK.
- Know how to use SQL commands for big data computing, including DDL, DML, and common built-in functions.
- Familiar with user-defined functions, including UDF, UDAF, and UDTF, and able to write a simple custom function.
- Familiar with the MapReduce programming framework, able to set up the IntelliJ IDEA integrated development environment and write a simple MapReduce program.
- Understand the Graph programming framework, including its basic concepts and processing procedure, and able to write a simple Graph program.
- Familiar with the concepts and practical operation of MaxCompute security and permission management, including users, roles, authorization (ACL and Policy), project protection, exception policies, security levels, etc.
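
To make the connection, SQL, and UDF objectives above concrete, here is a minimal sketch that uses the MaxCompute Java SDK (odps-sdk-core) to connect and run one DDL and one DML statement through SQLTask. The endpoint, project name, table, and credentials are placeholders, not values from this guide.

    import com.aliyun.odps.Instance;
    import com.aliyun.odps.Odps;
    import com.aliyun.odps.OdpsException;
    import com.aliyun.odps.account.AliyunAccount;
    import com.aliyun.odps.task.SQLTask;

    public class MaxComputeSqlDemo {
        public static void main(String[] args) throws OdpsException {
            // Placeholder credentials, endpoint, and project -- replace with your own.
            Odps odps = new Odps(new AliyunAccount("<accessId>", "<accessKey>"));
            odps.setEndpoint("http://service.odps.aliyun.com/api");
            odps.setDefaultProject("my_project");

            // DDL: create a partitioned table.
            runAndWait(odps, "CREATE TABLE IF NOT EXISTS sale_detail"
                    + " (shop_name STRING, total_price DOUBLE)"
                    + " PARTITIONED BY (sale_date STRING);");

            // DML: insert a row into one partition.
            runAndWait(odps, "INSERT INTO sale_detail PARTITION (sale_date='20240101')"
                    + " VALUES ('shop_a', 100.0);");
        }

        private static void runAndWait(Odps odps, String sql) throws OdpsException {
            Instance i = SQLTask.run(odps, sql); // submit the SQL as an asynchronous instance
            i.waitForSuccess();                  // block until the instance finishes
        }
    }

A simple scalar UDF follows the same hedged pattern: a Java class that extends the SDK's UDF base class and defines an evaluate method. It would then be packaged as a JAR, uploaded as a resource, and registered with CREATE FUNCTION.

    import com.aliyun.odps.udf.UDF;

    // Minimal UDF sketch: lower-cases a string column value.
    public class Lower extends UDF {
        public String evaluate(String s) {
            return s == null ? null : s.toLowerCase();
        }
    }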

DataWorks:

- Familiar with the basic functions of DataWorks, including Data Integration, data development, data management, the Operation & Maintenance Center, organization management, and project management.
- Understand the basic features of DataWorks, including role isolation, environment isolation, etc.
- Know how to leverage the project management and organization management modules to build a data analysis environment.
- Proficient in design and development with the DataWorks data development module, including creating tables, developing tasks, uploading resources, uploading data, creating functions, etc.
- Able to use the DataWorks data development module to develop and design workflow tasks and node tasks, and able to configure appropriate dependencies and periodic scheduling.
- Able to use the data management module for data management, including lineage analysis, applying for and granting table permissions, etc.
- Able to fix basic problems by identifying and locating problems in the process.

E-MapReduce:

- Know basic distributed systems theory, such as the concepts of a distributed file system and a distributed computing framework.
- Know how the common components in the Hadoop ecosystem work, e.g. the distributed file system (HDFS), the computation framework (MapReduce; see the sketch after this list), the resource management component (YARN), and the resource coordination component (ZooKeeper).
- Familiar with the basic concepts of each component of E-MapReduce, including YARN, Spark, ZooKeeper, Kafka, etc.
- Familiar with the Auto Scaling feature, its product advantages, and common application scenarios.
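
As a hedged illustration of the HDFS/MapReduce/YARN objectives above, the sketch below is the classic word count written against the open-source Hadoop MapReduce API that E-MapReduce clusters expose; the class name and the input and output paths are illustrative only.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        // Mapper: emit (word, 1) for every token in the input line.
        public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        ctx.write(word, ONE);
                    }
                }
            }
        }

        // Reducer: sum the counts collected for each word.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Submitted with hadoop jar, the job reads the input splits from HDFS, runs the mapper and combiner on each split while YARN schedules the containers, and the reducer writes one (word, count) pair per line to the output directory.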

Alibaba Big Data Ecosystem Tools:

- Know about the Alibaba Cloud Machine Learning Platform for AI (PAI).
- Know about the Alibaba Cloud streaming data processing solution and its products (Realtime Compute, Apache Flink); see the sketch after this list.
- Familiar with the basic concepts of Quick BI and the workflow for using Quick BI to gain better insight into data.
- Understand the features and application scenarios of other related products, including Alibaba Cloud RDS, the distributed relational database service DRDS, Table Store, AnalyticDB, Data Transmission Service (DTS), Realtime Compute, DataV, etc.
- Understand how DataWorks Data Integration synchronizes data with other related products.
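
To ground the streaming objective above, here is a minimal word count against the open-source Apache Flink DataStream API, the engine behind Realtime Compute for Apache Flink. The in-memory source and the class name are illustrative stand-ins; a real job would read from a connector such as Kafka.

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.util.Collector;

    public class StreamingWordCount {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // In-memory source used only for illustration.
            env.fromElements("big data on alibaba cloud", "big data with flink")
               // Split each line into (word, 1) pairs.
               .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                   @Override
                   public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                       for (String w : line.split("\\s+")) {
                           out.collect(Tuple2.of(w, 1));
                       }
                   }
               })
               // Group by the word and keep a running sum of the counts.
               .keyBy(t -> t.f0)
               .sum(1)
               .print();

            env.execute("streaming word count");
        }
    }

Unlike the batch word count above, this pipeline updates its counts continuously: each new element re-triggers the keyed sum, which is the basic mental model the streaming products build on.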