Notifying users when a pipeline runs successfully, or whenever an issue occurs, is a key component of any ETL pipeline. In this blog, we cover a step-by-step guide on how to send emails to multiple recipients using SparkPost in Databricks.
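As a rough sketch of the idea (not code from the post itself), a notification email to several recipients can be expressed as a request body for SparkPost's Transmissions API. The sender address, recipient list, and subject below are hypothetical placeholders.

```python
# Sketch: building the JSON body for SparkPost's Transmissions API.
# Sender and recipient addresses are placeholders, not from the post.
def build_transmission(sender, recipients, subject, html_body):
    """Return a payload for POST /api/v1/transmissions."""
    return {
        "content": {
            "from": sender,
            "subject": subject,
            "html": html_body,
        },
        # SparkPost expects one entry per recipient address
        "recipients": [{"address": {"email": r}} for r in recipients],
    }

payload = build_transmission(
    "etl-alerts@example.com",
    ["user1@example.com", "user2@example.com"],
    "Pipeline succeeded",
    "<p>The ETL pipeline completed successfully.</p>",
)
```

In a Databricks notebook, this payload would typically be POSTed to `https://api.sparkpost.com/api/v1/transmissions` with the API key in an `Authorization` header.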
How to Pause and Resume Dedicated SQL pool (SQL DW) using Azure Data Factory
Azure Dedicated SQL pool (SQL DW) has many benefits, such as a massively parallel processing (MPP) architecture that scales compute and storage resources independently, allowing for high-performance analytics. One big issue with this Azure resource, however, is the cost: Microsoft bills a Dedicated SQL pool on an hourly basis, which means you pay whenever the SQL DW is online, even if you are not using it for development or analysis.
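A common approach (an assumption here, not a quote from the post) is to have a Data Factory Web activity call the Azure management REST API's pause and resume endpoints. The helper below only assembles the URL; the subscription, resource group, server, and pool names are placeholders, and the `api-version` should be whatever your subscription supports.

```python
# Sketch: building the management REST URL that an ADF Web activity would
# POST to in order to pause or resume a dedicated SQL pool. All resource
# names below are hypothetical placeholders.
MGMT = "https://management.azure.com"

def sql_pool_action_url(subscription, resource_group, server, database,
                        action, api_version="2021-11-01"):
    """Build the REST URL for the pause/resume action on a SQL pool."""
    assert action in ("pause", "resume")
    return (f"{MGMT}/subscriptions/{subscription}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Sql/servers/{server}"
            f"/databases/{database}/{action}"
            f"?api-version={api_version}")

pause_url = sql_pool_action_url("0000-sub", "my-rg", "my-server",
                                "my-dwh", "pause")
```

In Data Factory, the Web activity would POST to this URL (with no body) using a managed identity that has permission on the SQL pool.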
What are the latest trends in the field of data analytics?
The field of data analytics is evolving rapidly, with new trends and technologies emerging every month. Below are the latest trends as of now: Augmented Analytics: Augmented analytics combines artificial intelligence (AI) and machine learning (ML) techniques with data analytics to automate data preparation, insight generation, and data visualization. It empowers non-technical users to... Continue Reading →
Will Microsoft Copilot change the Cloud Computing world?
Microsoft Copilot is an AI-powered code completion tool developed by Microsoft in collaboration with OpenAI that uses machine learning to provide code suggestions and completions to developers as they write code. It is designed to help developers be more productive by automating some of the more repetitive tasks of coding. It has the potential to... Continue Reading →
19 Best Practices of Power BI
Data models should be filtered and normalized. This is particularly important if datasets are sourced from high data volume repositories such as enterprise data warehouses.
How to handle duplicate records while inserting data in Databricks
Have you ever faced a challenge where records keep getting duplicated when you insert new data into an existing table in Databricks? If so, this blog is for you. Let's start with a simple use case: inserting Parquet data from one folder in Datalake into a Delta table using Databricks. Follow the... Continue Reading →
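In Delta Lake this is often solved with a `MERGE INTO` statement rather than a plain insert. As a plain-Python sketch (an illustration, not the post's actual code), the upsert semantics look like this; keying on `"id"` is an assumption, since the real merge key depends on your table's schema.

```python
# Sketch: the upsert semantics behind a Delta MERGE, in plain Python.
# Matching rows are updated; non-matching rows are inserted; nothing
# is duplicated. The "id" merge key is a hypothetical choice.
def upsert(existing, incoming, key="id"):
    """Apply incoming rows to existing rows without creating duplicates."""
    by_key = {row[key]: dict(row) for row in existing}
    for row in incoming:
        by_key[row[key]] = dict(row)   # matched -> update, else insert
    return list(by_key.values())

table = [{"id": 1, "name": "old"}]
new_rows = [{"id": 1, "name": "new"}, {"id": 2, "name": "fresh"}]
merged = upsert(table, new_rows)
```

The equivalent Delta SQL would match `ON target.id = source.id`, with `WHEN MATCHED THEN UPDATE` and `WHEN NOT MATCHED THEN INSERT` clauses.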
How to Copy Files from SharePoint to Datalake using Azure Data Factory
Copying files from SharePoint to Datalake, or any other target location, is one task you cannot ignore as a Data Engineer; someday, someone is sure to ask you to do it. So, how can you achieve that? Suppose we need to copy an Excel file from SharePoint to Datalake Gen 2. You need... Continue Reading →
How to read .xlsx file in Databricks using Pandas
Step 1: To read an .xlsx file, you need to have the library openpyxl installed in the Databricks cluster. Steps to install the library openpyxl on a Databricks cluster: Step 1: Select the Databricks cluster where you want to install the library. Step 2: Click on Libraries. Step 3: Click on Install New. Step 4: Select PyPI. Step 5: Put openpyxl in the text box under Package... Continue Reading →
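Once openpyxl is installed, the read itself is a single pandas call. The sketch below round-trips a small frame through an .xlsx file so it is self-contained; the file name is a hypothetical placeholder (on Databricks you would point at a path such as a `/dbfs/...` mount).

```python
# Sketch: reading .xlsx with pandas once openpyxl is installed.
# "demo.xlsx" is a placeholder path created here just for illustration.
import pandas as pd

df_out = pd.DataFrame({"region": ["east", "west"], "sales": [100, 200]})
df_out.to_excel("demo.xlsx", index=False, engine="openpyxl")

# engine="openpyxl" tells pandas to use the library installed above
df_in = pd.read_excel("demo.xlsx", engine="openpyxl")
```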
Basic concepts anyone starting in SQL should know about
SQL stands for Structured Query Language. SQL is a database management language for relational databases. SQL lets you access and manipulate databases. SQL queries are not case-sensitive (SELECT = select). Divisions of SQL: Data Manipulation Language (DML) is used to add, update, or delete data. Examples: INSERT, DELETE, and UPDATE. Data Definition Language (DDL) is used... Continue Reading →
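The DDL/DML split above can be demonstrated end to end with Python's built-in sqlite3 module; the table and column names here are illustrative only.

```python
# Sketch: DDL defines structure, DML manipulates data.
# Demonstrated with Python's standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the table's structure
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")

# DML: add and update rows
cur.execute("INSERT INTO employees (id, name) VALUES (1, 'Ada')")
cur.execute("UPDATE employees SET name = 'Ada L.' WHERE id = 1")

row = cur.execute("SELECT name FROM employees WHERE id = 1").fetchone()
conn.close()
```

Note that while SQL keywords are not case-sensitive, string values compared in a `WHERE` clause usually are.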
How to Decrypt PGP Encrypted files in Databricks
As a Data Engineer, you may come across a project where you need to decrypt PGP-encrypted files in order to get the data and apply transformation logic to it.
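One common approach (an assumption here, not necessarily the post's method) is to shell out to the gpg command-line tool from the notebook. The helper below only assembles the argument list; the file paths and passphrase file are placeholders.

```python
# Sketch: building a non-interactive gpg decryption command.
# All paths are hypothetical; the command would be run via subprocess.
def gpg_decrypt_cmd(encrypted_path, output_path, passphrase_file):
    """Build the argument list for a batch-mode gpg decryption."""
    return [
        "gpg", "--batch", "--yes",
        "--pinentry-mode", "loopback",       # allow passphrase from a file
        "--passphrase-file", passphrase_file,
        "--output", output_path,
        "--decrypt", encrypted_path,
    ]

cmd = gpg_decrypt_cmd("data.csv.pgp", "data.csv", "/secrets/pass.txt")
```

Before decrypting, the matching private key must be imported (`gpg --import`), and in Databricks the passphrase is best kept in a secret scope rather than in the notebook.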
How to Create Login and Password in SQL Server
This post helps you create a new login and password for users who want to use the data in a SQL database or warehouse for analysis and reporting.
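As a minimal sketch of the usual T-SQL involved (assembled as strings here; the login name, password, and database are placeholders, and a real password should never be hard-coded): a server-level login is created first, then a database user mapped to it, then read access is granted for reporting.

```python
# Sketch: the standard T-SQL statements for a new read-only reporting
# user. Login, password, and database names are hypothetical.
def create_login_sql(login, password, database):
    """Return the T-SQL statements to create a login, user, and grant reads."""
    return [
        # 1. Server-level login (run in the master database)
        f"CREATE LOGIN [{login}] WITH PASSWORD = '{password}';",
        # 2. Switch to the target database
        f"USE [{database}];",
        # 3. Database user mapped to the login
        f"CREATE USER [{login}] FOR LOGIN [{login}];",
        # 4. Read-only access for analysis and reporting
        f"ALTER ROLE db_datareader ADD MEMBER [{login}];",
    ]

stmts = create_login_sql("report_reader", "S3cure!Pass", "SalesDW")
```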
How to read .csv and .xlsx file in Databricks
How to read .xlsx file: Step 1: To read an .xlsx file, you need to have the library com.crealytics:spark-excel_2.11:0.12.2 installed in the Databricks cluster. Steps to install the library com.crealytics:spark-excel_2.11:0.12.2 on a Databricks cluster: Step 1: Select the Databricks cluster where you want to install the library. Step 2: Click on Libraries. Step 3: Click on... Continue Reading →