Data Technologies Drive Business Intelligence
Published on: Tuesday 10-05-2022
Big Data is a dominant driving force behind the success of enterprises and organisations across the globe, says Jasbir Singh.

The need for Big Data technologies arises when the data gathered from multiple sources and systems becomes too voluminous to store systematically and process manually, or by traditional means, for future use. Many types of Big Data technologies are available to users, and each is linked to one of two major domains – operational and analytical. In the past, generated data was managed mainly with programming languages. With the continuous, exponential growth of an organisation's information, those languages are no longer efficient enough to handle the generated data in a structured format. Over time, it became important to have efficient technologies that can handle such huge amounts of data in a structured way to meet the future needs of organisations. Big Data technology is essentially software for storing, analysing, and interpreting massive volumes of structured and unstructured data.
Data is generated mainly on the internet – through social networking, web search requests, text messages, media files, IoT devices and digital wireless sensors. The world generates nearly 2.5 quintillion bytes of data daily, with the last couple of years accounting for a major surge in generation, as per a media report. The exponential growth of data generated globally over the past decade has left organisations worried about storing it in a structured way and putting it to meaningful use for analytics and future organisational benefit.
A. Big Data Technologies – Operational

Operational Big Data technologies mainly cover the data that people generate while using process applications – online transactions, activity on social media platforms, and the day-to-day data produced by an organisation or firm, which is then analysed with Big Data software. The data collected is raw data, which serves as input for several analytical Big Data technologies. Examples include online ticket booking systems, online trading and shopping on e-commerce websites, activity on social media sites, and the daily data generated by an organisation's employees. Operational Big Data is produced at scale by large enterprises such as Amazon, Flipkart, Walmart, and many more.
B. Big Data Technologies – Analytical
Big Data analytics is a technology that examines all aspects of a business and derives the most efficient ways to make things work. It can optimise processes to get the best results out of them. Analytical Big Data technology is used when performance targets must be met and rapid business decisions have to be taken on the basis of operational, real-time data. It supports prompt decision making by showing trends from the past and forecasts of probable events, so that business decisions can be taken promptly and with high accuracy. Common examples are stock market data; weather-forecast-based decisions; healthcare, where doctors monitor the medical condition of individuals to give prediction-based advice; and space mission data, such as rapid microwave observations of storm precipitation, temperature, and humidity. Such data helps scientists understand the driving factors behind tropical cyclone intensification, feeds weather forecasting models, and assists space missions in projecting satellites and rockets into space. Precise measurements of temperature and moisture variation in the atmosphere, and of ocean surface temperatures, make it possible to forecast storms many days in advance; this critical data is used by atmospheric weather forecasters and first responders. Data originating from Earth weather satellites is used to forecast smog, floods, volcanic eruptions, wildfires, dust storms, and sea ice.
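A minimal sketch of such trend analysis, assuming the pandas library and made-up sensor readings (neither is specific to any system mentioned above):

```python
import pandas as pd

# Made-up daily sensor readings (illustrative values only)
readings = pd.Series([21.0, 22.5, 23.1, 22.8, 24.0, 25.2, 24.7])

# A 3-day moving average smooths short-term noise and exposes the trend -
# the kind of signal an analyst might feed into a forecasting model.
trend = readings.rolling(window=3).mean()
print(trend)
```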
Emerging Big Data technologies

a. TensorFlow: TensorFlow offers comprehensive libraries, flexible tools, and an ecosystem of community resources that support researchers in developing and deploying Machine Learning applications (see the sketch after this list).
b. Beam: Apache Beam provides an API layer for building sophisticated parallel data processing pipelines that can run on various execution engines, or runners.
c. Docker: Docker is one of the tools developed for Big Data management; it makes developing, deploying and running containerised applications easier. Containers let developers package libraries and other dependencies along with all the components an application needs, binding everything together so it can be shipped as a single unit.
d. Airflow: Apache Airflow is a process management technology used for workflow automation and for scheduling and managing data pipelines. Airflow represents job workflows as Directed Acyclic Graphs (DAGs), each consisting of multiple tasks. Because workflows are defined in code, developers can easily manage, validate and version them for large volumes of inflowing data (see the DAG sketch after this list).
e. Kubernetes: Kubernetes is an open-source, vendor-agnostic cluster and container management tool for Big Data, developed by Google in 2014. It provides a platform for automating the deployment, scaling and execution of containerised applications across host clusters.
f. Blockchain: Blockchain technology offers Big Data management a unique data-safety feature: information cannot be deleted or modified after it is written. This provides a highly secure environment for numerous Big Data applications in industries like banking, insurance, finance, medical, retail and many more.
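As a minimal sketch of the TensorFlow/Keras API mentioned in (a) – the layer sizes and data are illustrative assumptions, not drawn from any particular application:

```python
import tensorflow as tf

# A tiny binary classifier: 4 input features, one hidden layer, one output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(features, labels, epochs=5)  # features/labels would come from real data
```

And a sketch of the Airflow workflow-as-code idea from (d) – the DAG name, tasks and commands are hypothetical:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Two tasks chained into a Directed Acyclic Graph that runs daily.
with DAG(
    dag_id="daily_ingest",           # hypothetical pipeline name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    load = BashOperator(task_id="load", bash_command="echo loading")
    extract >> load  # 'extract' must succeed before 'load' runs
```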
Other technologies interconnected with Big Data are Data Storage, Data Science, Data Mining, Data Visualisation, Cloud Computing, Data Analytics, Machine Learning, Deep Learning, etc. They are linked to business intelligence because they handle large amounts of data from multiple sources.
Major components of Big Data technology are:
A. Data Storage
B. Data Mining
C. Data Visualisation, and
D. Data Analytics.
Data Storage
It is difficult to manage and store all of an organisation's information effectively on a server, and searching an entire database in the traditional way for a particular piece of information is an intimidating task. New data is continually being added over time: companies add it to their existing databases and rarely delete previously stored data, however outdated it may be, on the assumption that it could become relevant later. That assumption only multiplies the company's problems. The difficulty can be overcome with data storage software tools, which allow companies to store data safely and systematically, with regular backups. Data storage solutions are necessary for organisations that take in new data daily from various sources. Automated programs in the software assess the current status of data and mark what is suitable for the company to archive (see the sketch after the tools list below).
Data storage tools include: NetApp; Open-E; Everteam; StorPool; Tintri; and Talon Storage.
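A minimal sketch of the age-based archiving just described – the retention threshold and directory names are hypothetical, not taken from any of the tools listed:

```python
import shutil
import time
from pathlib import Path

ARCHIVE_AFTER_DAYS = 365           # hypothetical retention threshold
live = Path("data/live")           # hypothetical live-data directory
archive = Path("data/archive")     # hypothetical archive directory
archive.mkdir(parents=True, exist_ok=True)

cutoff = time.time() - ARCHIVE_AFTER_DAYS * 24 * 3600
for path in live.iterdir():
    # Files not modified since the cutoff are treated as outdated and archived.
    if path.is_file() and path.stat().st_mtime < cutoff:
        shutil.move(str(path), archive / path.name)
```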
Data Mining
Wikipedia defines Data Mining as a process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualisation, and online updating. Common data mining techniques are:
i. Classification: This data mining technique helps classify data into different classes.
ii. Clustering: Clustering divides information into groups of connected objects (see the sketch after this list).
iii. Regression: Regression analysis identifies and analyses the relationship between variables in the presence of other factors.
iv. Association Rules: This technique finds hidden patterns in the data set.
v. Outlier detection: This technique observes data items in the data set that do not match an expected pattern or expected behaviour.
vi. Sequential Patterns: This technique is specialised for evaluating sequential data to discover sequential patterns.
vii. Prediction: Prediction combines other data mining techniques such as trend analysis, clustering and classification.
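A minimal sketch of the clustering technique in (ii), assuming the widely used scikit-learn library (not named in this article):

```python
import numpy as np
from sklearn.cluster import KMeans

# Four 2-D points: two near (1, 2) and two near (8, 8).
X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.2], [8.3, 7.9]])

# Divide the points into two groups of connected (similar) objects.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)  # e.g. [0 0 1 1] - each point's cluster assignment
```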
Data Visualisation
As per Wikipedia, Data Visualisation (often abbreviated data viz) is an interdisciplinary field that deals with the graphic representation of data. It is a particularly efficient way of communicating when the data is numerous, as in a time series, for example. Data Visualisation gives analysts a way to present data interactively.
Common visualisation techniques are tables, pie charts and stacked bar charts, line graphs and area charts, histograms, scatter plots, heat maps, tree maps and so on. Some of the open-source visualisation tools are D3.js, ECharts, Vega and deck.gl.
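A minimal sketch of one such technique – a line graph drawn with matplotlib (one common open-source choice) over made-up monthly figures:

```python
import matplotlib.pyplot as plt

# Made-up monthly sales figures (illustrative only)
months = ["Jan", "Feb", "Mar", "Apr", "May"]
sales = [120, 135, 128, 150, 162]

plt.plot(months, sales, marker="o")
plt.title("Monthly sales (illustrative data)")
plt.xlabel("Month")
plt.ylabel("Units sold")
plt.show()
```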
Data Analytics
Wikipedia defines Data Analysis as a process of inspecting, cleansing, transforming, and modelling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains. In today's business world, data analysis plays a role in making decisions more scientifically and helping businesses operate more effectively.
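A minimal sketch of such analysis, assuming the pandas library (a common choice, not prescribed here), summarising made-up revenue records into a decision-ready table:

```python
import pandas as pd

# Made-up transaction records
df = pd.DataFrame({
    "region": ["North", "South", "North", "South", "North"],
    "revenue": [250, 310, 270, 295, 260],
})

# Inspect and model the data into a summary that can inform a decision.
summary = df.groupby("region")["revenue"].agg(["count", "mean", "sum"])
print(summary)
```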
A few Big Data analytics software tools are Apache Kafka, Splunk, KNIME, Spark, the R language and Blockchain.
Big Data has penetrated almost every industry today and is a dominant driving force behind the success of enterprises and organisations across the globe.

Jasbir Singh is an automation expert with long experience in factory automation and line automation, an implementation strategist and business coach, and a regular writer on automation, Artificial Intelligence, robots/cobots, digital technology, network communication, the Industrial Internet of Things (IIoT), wireless communication, blockchain and the use of advanced digital technologies. He has a long association with business houses and large production houses, improving factory automation in their production lines and raising productivity in factories in India and overseas, and advising and designing units for their transformation to digital platforms through the use of Artificial Intelligence. Email: [email protected]