
Description

Python, SQL, and Tableau are three of the most widely used tools in the world of data science. Python is the leading programming language; SQL is the most widely used means of communicating with database systems; and Tableau is the preferred solution for data visualization. To put it simply: SQL helps us store and manipulate the data we are working with, Python allows us to write code and perform calculations, and Tableau enables beautiful data visualization. A well-thought-out integration built on these three pillars could save a business millions of dollars annually in reporting personnel.
Therefore, it goes without saying that employers are looking for Python, SQL, and Tableau when posting Data Scientist and Business Intelligence Analyst job descriptions. Not only that, but they want candidates who know how to use these three tools together, because that is how recurring data analysis tasks can be automated.
So, in this course we will teach you how to integrate Python, SQL, and Tableau: an essential skill that will give you an edge over other candidates. In fact, the best way to differentiate your resume and get called for interviews is to acquire relevant skills other candidates lack. And because we have prepared a topic that hasn't been addressed elsewhere, you will be picking up a skill that truly has the potential to differentiate your profile.
Many people know how to write some code in Python. Others use SQL and Tableau to a certain extent. Very few, however, are able to see the full picture and integrate Python, SQL, and Tableau to provide a holistic solution.
In the near future, most businesses will automate their reporting and business analysis tasks using the techniques you will see in this course. If you end up being the person automating such tasks, this skill will be invaluable for your career, whether at a corporation or as a consultant. Our experience at one of the large global companies showed us that a consultant with these skills could charge a four-figure amount per hour, and the company was happy to pay, because the end product led to significant efficiencies in the long run.
The course starts off by introducing software integration as a concept. We will discuss some important terms such as servers, clients, requests, and responses. Moreover, you will learn about data connectivity, APIs, and endpoints.
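To make the client-server vocabulary concrete, here is a minimal sketch in Python (not course material): a toy server exposes a made-up `/status` endpoint, and a client sends a request to it and reads the JSON response.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny server exposing one endpoint, /status, that returns JSON.
class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Port 0 asks the OS for any free port; run the server in the background.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sends a request to the endpoint and reads the response.
with urlopen(f"http://127.0.0.1:{server.server_port}/status") as resp:
    payload = json.load(resp)

print(payload)  # {'status': 'ok'}
server.shutdown()
```

The same request/response round trip underlies any API call, whether the server is a local process or a remote web service.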
Then, we will introduce the real-life exercise the course is centered around: the 'Absenteeism at Work' dataset. The preprocessing part that follows will give you a taste of what BI and data science look like in real, on-the-job situations. This is extremely important, because a significant share of a data scientist's work consists of preprocessing, yet many learning materials omit it.
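As a small taste of one preprocessing step covered later (splitting a categorical column into dummies), here is a pandas sketch. The column name and values are illustrative stand-ins, not the actual course dataset.

```python
import pandas as pd

# Toy stand-in for the absenteeism data: a categorical column where each
# number encodes a different reason for absence.
df = pd.DataFrame({"Reason for Absence": [1, 7, 23, 7]})

# Split the single categorical column into one 0/1 dummy column per reason.
reasons = pd.get_dummies(df["Reason for Absence"], prefix="Reason")
print(reasons.sum())  # how often each reason occurs
```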
Then, we will continue by applying machine learning to our data. You will learn how to explore the problem at hand from a machine learning perspective, how to create targets, what kind of statistical preprocessing is necessary for this part of the exercise, how to train a machine learning model, and how to test it. A truly comprehensive ML exercise.
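The train-and-test workflow described above can be sketched with scikit-learn. Everything here (the feature matrix, the target rule) is synthetic and purely for illustration; the course works with the actual absenteeism data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the inputs and a binary target
# (think: 1 = excessive absence, 0 = moderate).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Hold out 20% of the observations to test the trained model on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression()
model.fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))
```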
Connecting Python and SQL is not straightforward, so we dedicate an entire section of the course to showing how it is done. By the end of that section, you will be able to transfer data from Jupyter to MySQL Workbench.
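The connection/cursor/execute pattern used to move data from Python into a database can be sketched with the standard library's sqlite3 module; the course itself uses MySQL via pymysql, whose API follows the same pattern (but connects to a server and uses %s placeholders). The table and column names below are illustrative, not the course's exact schema.

```python
import sqlite3

# In-memory SQLite database so the sketch runs without a database server.
# With pymysql you would instead call
# pymysql.connect(host=..., user=..., password=..., database=...).
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()

# A table for model outputs (columns assumed for illustration).
cursor.execute("CREATE TABLE predicted_outputs (age INTEGER, probability REAL)")

# Insert the rows, then commit so the changes persist.
rows = [(30, 0.71), (45, 0.38)]
cursor.executemany("INSERT INTO predicted_outputs VALUES (?, ?)", rows)
conn.commit()

# Read the data back with a SELECT statement.
cursor.execute("SELECT * FROM predicted_outputs")
result = cursor.fetchall()
print(result)  # [(30, 0.71), (45, 0.38)]
conn.close()
```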
And finally, as promised, Tableau will allow us to visualize the data we have been working with. We will prepare several insightful charts and will interpret the results together.
As you can see, this is a truly comprehensive data science exercise.
There is no need to think twice. If you take this course now, you will acquire invaluable skills that will help you stand out from the rest of the candidates competing for a job.
Also, we are happy to offer an unconditional, no-questions-asked, 30-day full money-back guarantee that you will enjoy the course.
So, let’s do this! The only regret you will have is that you didn’t find this course sooner!
Who this course is for:
Intermediate and advanced students
Students eager to differentiate their resume
Individuals interested in a career in Business Intelligence and Data Science

What you'll learn

How to use Python, SQL, and Tableau together

Software integration

Data preprocessing techniques

How to apply machine learning

How to create a module for later use of the ML model

How to connect Python and SQL to transfer data from Jupyter to MySQL Workbench

How to visualize data in Tableau

How to analyze and interpret the exercise outputs in Jupyter and Tableau


Course Content

10 sections • 93 lectures
1-Introduction
1
1.1-What Does the Course Cover?
2-What is software integration?
5
2.1-Properties and Definitions: Data, Servers, Clients, Requests and Responses
2.2-Properties and Definitions: Data Connectivity, APIs, and Endpoints
2.3-Further Details on APIs
2.4-Text Files as Means of Communication
2.5-Definitions and Applications
3-Setting up the working environment
10
3.1-Setting Up the Environment - An Introduction (Do Not Skip, Please)!
3.2-Why Python and why Jupyter?
3.3-Installing Anaconda
3.4-The Jupyter Dashboard - Part 1
3.5-The Jupyter Dashboard - Part 2
3.6-Jupyter Shortcuts
3.7-The Jupyter Dashboard
3.8-Installing sklearn
3.9-Installing Packages - Exercise
3.10-Installing Packages - Solution
4-What's next in the course?
4
4.1-Up Ahead
4.2-Real-Life Example: Absenteeism at Work
4.3-Real-Life Example: The Dataset
4.4-Important Notice Regarding Datasets
5-Preprocessing
33
5.1-What to Expect from the Next Couple of Sections
5.2-Data Sets in Python
5.3-Data at a Glance
5.4-A Note on Our Usage of Terms with Multiple Meanings
5.5-ARTICLE - A Brief Overview of Regression Analysis
5.6-Picking the Appropriate Approach for the Task at Hand
5.7-Removing Irrelevant Data
5.8-EXERCISE - Removing Irrelevant Data
5.9-SOLUTION - Removing Irrelevant Data
5.10-Examining the Reasons for Absence
5.11-Splitting a Column into Multiple Dummies
5.12-EXERCISE - Splitting a Column into Multiple Dummies
5.13-SOLUTION - Splitting a Column into Multiple Dummies
5.14-ARTICLE - Dummy Variables: Reasoning
5.15-Dummy Variables and Their Statistical Importance
5.16-Grouping - Transforming Dummy Variables into Categorical Variables
5.17-Concatenating Columns in Python
5.18-EXERCISE - Concatenating Columns in Python
5.19-SOLUTION - Concatenating Columns in Python
5.20-Changing Column Order in Pandas DataFrame
5.21-EXERCISE - Changing Column Order in Pandas DataFrame
5.22-SOLUTION - Changing Column Order in Pandas DataFrame
5.23-Implementing Checkpoints in Coding
5.24-EXERCISE - Implementing Checkpoints in Coding
5.25-SOLUTION - Implementing Checkpoint in Coding
5.26-Exploring the Initial "Date" Column
5.27-Using the "Date" Column to Extract the Appropriate Month Value
5.28-Introducing "Day of the Week"
5.29-EXERCISE - Removing Columns
5.30-Further Analysis of the DataFrame: Next 5 Columns
5.31-Further Analysis of the DataFrame: "Education", "Children", "Pets"
5.32-A Final Note on Preprocessing
5.33-A Note on Exporting Your Data as a *.csv File
6-Machine Learning
16
6.1-Exploring the Problem from a Machine Learning Point of View
6.2-Creating the Targets for the Logistic Regression
6.3-Selecting the Inputs
6.4-A Bit of Statistical Preprocessing
6.5-Train-test Split of the Data
6.6-Training the Model and Assessing its Accuracy
6.7-Extracting the Intercept and Coefficients from a Logistic Regression
6.8-Interpreting the Logistic Regression Coefficients
6.9-Omitting the Dummy Variables from the Standardization
6.10-Interpreting the Important Predictors
6.11-Simplifying the Model (Backward Elimination)
6.12-Testing the Machine Learning Model
6.13-How to Save the Machine Learning Model and Prepare it for Future Deployment
6.14-ARTICLE - More about 'pickling'
6.15-EXERCISE - Saving the Model (and Scaler)
6.16-Creating a Module for Later Use of the Model
7-Installing MySQL and Getting Acquainted with the Interface
5
7.1-Installing MySQL
7.2-Additional Note - Installing Visual C
7.3-Installing MySQL on macOS and Unix systems
7.4-Setting Up a Connection
7.5-Introduction to the MySQL Interface
8-Connecting Python and SQL
12
8.1-Are you sure you're all set?
8.2-Implementing the 'absenteeism_module' - Part I
8.3-Implementing the 'absenteeism_module' - Part II
8.4-Creating a Database in MySQL
8.5-Importing and Installing 'pymysql'
8.6-Creating a Connection and Cursor
8.7-EXERCISE - Create 'df_new_obs'
8.8-Creating the 'predicted_outputs' table in MySQL
8.9-Running an SQL SELECT Statement from Python
8.10-Transferring Data from Jupyter to Workbench - Part I
8.11-Transferring Data from Jupyter to Workbench - Part II
8.12-Transferring Data from Jupyter to Workbench - Part III
9-Analyzing the Obtained data in Tableau
6
9.1-EXERCISE - Age vs Probability
9.2-Analysis in Tableau: Age vs Probability
9.3-EXERCISE - Reasons vs Probability
9.4-Analysis in Tableau: Reasons vs Probability
9.5-EXERCISE - Transportation Expense vs Probability
9.6-Analysis in Tableau: Transportation Expense vs Probability
10-Bonus lecture
1
10.1-Bonus Lecture: Next Steps