BIG DATA CONCEPT
Big data is a field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex to be dealt with by traditional data-processing application software. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy and data source. Big data was originally associated with three key concepts: volume, variety, and velocity. When we handle big data, we may not sample but simply observe and track what happens. Therefore, big data often includes data with sizes that exceed the capacity of traditional software to process within an acceptable time and value.
Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem." Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on." Scientists, business executives, practitioners of medicine, advertising and governments alike regularly meet difficulties with large data-sets in areas including Internet searches, fintech, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, biology and environmental research.
Data sets grow rapidly, to a certain extent because they are increasingly gathered by cheap and numerous information-sensing Internet of things devices such as mobile devices, aerial (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks. The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s; as of 2012, every day 2.5 exabytes (2.5×2^60 bytes) of data are generated. Based on an IDC report prediction, the global data volume will grow exponentially from 4.4 zettabytes to 44 zettabytes between 2013 and 2020. By 2025, IDC predicts there will be 163 zettabytes of data. One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.
Relational database management systems, desktop statistics and software packages used to visualize data often have difficulty handling big data. The work may require "massively parallel software running on tens, hundreds, or even thousands of servers". What qualifies as being "big data" varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."
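The "massively parallel software" model mentioned above can be illustrated with a minimal MapReduce-style word count. This is an illustrative sketch in plain Python, using the standard library's process pool as a stand-in for a server cluster; it is not the API of any particular big data framework.

```python
from collections import Counter
from multiprocessing import Pool

def map_partition(lines):
    """Map step: count words within one partition of the data."""
    counts = Counter()
    for line in lines:
        counts.update(line.lower().split())
    return counts

def reduce_counts(partials):
    """Reduce step: merge the per-partition counts into a global result."""
    total = Counter()
    for c in partials:
        total.update(c)
    return total

if __name__ == "__main__":
    # Toy data set split into partitions, one per worker process.
    partitions = [
        ["big data is big", "data is data"],
        ["big big big", "data"],
    ]
    with Pool(2) as pool:
        partials = pool.map(map_partition, partitions)
    print(reduce_counts(partials))  # Counter({'big': 5, 'data': 4, 'is': 2})
```

In a real cluster the partitions would live on different machines and the reduce step would itself be distributed, but the map-then-merge structure is the same.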
Big data can be described by the following characteristics:
- Volume: The quantity of generated and stored data. The size of the data determines the value and potential insight, and whether it can be considered big data or not.
- Variety: The type and nature of the data. This helps people who analyze it to effectively use the resulting insight. Big data draws from text, images, audio, and video; it also completes missing pieces through data fusion.
- Velocity: The speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development. Big data is often available in real time. Compared to small data, big data is produced more continually. Two kinds of velocity related to big data are the frequency of generation and the frequency of handling, recording, and publishing.
- Veracity: The extended definition for big data, which refers to data quality and data value. The quality of captured data can vary greatly, affecting accurate analysis.
Data must be processed with advanced tools (analytics and algorithms) to reveal meaningful information. For example, to manage a factory, one must consider both visible and invisible issues with various components. Information-generation algorithms must detect and address invisible issues such as machine degradation and component wear on the factory floor.
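As a toy illustration of surfacing an "invisible" issue such as gradual machine degradation, the sketch below flags sensor readings that drift away from a rolling baseline. The sensor trace, window size, and threshold are invented for illustration; production condition-monitoring systems use far more sophisticated models.

```python
from collections import deque

def detect_drift(readings, window=5, threshold=0.5):
    """Flag indices where a reading deviates from the rolling mean
    of the previous `window` readings by more than `threshold`."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(history) == window:
            baseline = sum(history) / window
            if abs(value - baseline) > threshold:
                flagged.append(i)
        history.append(value)
    return flagged

# Vibration-like sensor trace: stable at first, then an upward drift (wear).
trace = [1.0, 1.1, 0.9, 1.0, 1.0, 1.05, 1.0, 1.8, 1.9, 2.0]
print(detect_drift(trace))  # [7, 8, 9]
```

The point is not the specific statistic but that the anomaly is invisible in any single reading and only emerges from continuous analysis of the stream.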
Other important characteristics of Big Data are:
- Exhaustive: Whether the entire system (i.e., n=all) is captured or recorded or not.
- Fine-grained and uniquely lexical: Respectively, the proportion of specific data of each element per element collected, and whether the element and its characteristics are properly indexed or identified.
- Relational: If the data collected contains common fields that would enable a conjoining, or meta-analysis, of different data sets.
- Extensional: If new fields in each element of the data collected can be added or changed easily.
- Scalability: If the size of the data can expand rapidly.
- Value: The utility that can be extracted from the data.
- Variability: Data whose value or other characteristics shift in relation to the context in which they are generated.
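The point above about common fields enabling a conjoining of different data sets can be shown with a minimal key-based join in plain Python. The record layouts and the `device_id` field are invented for illustration.

```python
def join_on(left, right, key):
    """Inner-join two lists of dicts on a shared field,
    merging each pair of matching records into one dict."""
    index = {row[key]: row for row in right}
    return [
        {**row, **index[row[key]]}
        for row in left
        if row[key] in index
    ]

# Two independently collected data sets sharing a `device_id` field.
sensor_logs = [
    {"device_id": "a1", "temp_c": 21.5},
    {"device_id": "b2", "temp_c": 35.0},
    {"device_id": "c3", "temp_c": 19.9},
]
device_meta = [
    {"device_id": "a1", "site": "plant-north"},
    {"device_id": "b2", "site": "plant-south"},
]
print(join_on(sensor_logs, device_meta, "device_id"))
# Records for a1 and b2, each carrying both temp_c and site fields.
```

Without the shared field, the two collections could only be analyzed in isolation; the common key is what makes cross-data-set analysis possible.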
LIST OF BIG DATA COMPANIES
This is an alphabetical list of notable IT companies using the marketing term big data:
- Alpine Data Labs, an analytics interface working with Apache Hadoop and big data
- Azure Data Lake is a highly scalable data storage and analytics service. The service is hosted in Azure, Microsoft's public cloud
- Big Data Partnership, a professional services company based in London
- Big Data Scoring, a cloud-based service that lets consumer lenders improve loan quality and acceptance rates through the use of big data
- BigPanda, a technology company headquartered in Mountain View, California
- Bright Computing, developer of software for deploying and managing high-performance (HPC) clusters, big data clusters, and OpenStack in data centers and in the cloud
- Clarivate Analytics, a global company that owns and operates a collection of subscription-based services focused largely on analytics
- Cloudera, an American-based software company that provides Apache Hadoop-based software, support and services, and training to business customers
- Compuverde, an IT company with a focus on big data storage
- CtrlShift, a Singapore-headquartered programmatic marketing company
- CVidya, a provider of big data analytics products for communications and digital service providers
- Databricks, a company founded by the creators of Apache Spark
- Dataiku, a French computer software company
- GridGain Systems
- Groundhog Technologies
- HPCC Systems
- Imply Corporation
- Oracle Cloud Platform
- Palantir Technologies
- Pentaho, a data integration and business analytics company with an enterprise-class, open source-based platform for big data deployments
- Pitney Bowes
- Rocket Fuel Inc.
- SAP SE, which offers the SAP Data Hub to connect databases of all kinds and runs its own big data solutions through an acquisition (Altiscale)
- Sense Networks
- Shanghai Data Exchange
- SK Telecom, developer of big data analytics platform Metatron Discovery
- Sumo Logic
- Zaloni, deployment and vendor agnostic data lake management platform