Power BI supports adding more types of slicers to reports, such as the dropdown slicer. This slicer is useful when we have a lot of data to locate.
We will produce some data in JSON format and store it in MongoDB, then read it into a Spark DataFrame, after which we will store it back to the MongoDB collection.
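The round trip described above can be sketched as follows. This is a minimal sketch only, assuming the `mongo-spark-connector` library is on the classpath and MongoDB is running locally; the database name, collection name, and the `age` filter are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import com.mongodb.spark.MongoSpark

// Hypothetical database/collection; assumes MongoDB on localhost:27017
val spark = SparkSession.builder()
  .appName("MongoJsonExample")
  .config("spark.mongodb.input.uri", "mongodb://localhost:27017/testdb.people")
  .config("spark.mongodb.output.uri", "mongodb://localhost:27017/testdb.people")
  .getOrCreate()

// Read the MongoDB collection into a Spark DataFrame
val df = MongoSpark.load(spark)

// A trivial transformation before writing back
val adults = df.filter("age >= 18")

// Store the result back into the MongoDB collection
MongoSpark.save(adults.write.mode("append"))
```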
The following creates a DataFrame based on the content of a CSV file.
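A minimal sketch of reading a CSV into a DataFrame with Spark 2.x, assuming a local file `people.csv` (a hypothetical path) with a header row:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("CsvExample")
  .master("local[*]")
  .getOrCreate()

val df = spark.read
  .option("header", "true")      // first line contains column names
  .option("inferSchema", "true") // infer column types from the data
  .csv("people.csv")

df.printSchema()
df.show()
```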
It demonstrates the basic functionality of Spark 2.0.1. We also describe SparkSession, Spark SQL, and the DataFrame API.
In this post, I am going to explain how the Elasticsearch character filter works, and the steps needed to set one up.
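As an illustration, a character filter can be configured in the index settings when the index is created. The fragment below is a sketch using Elasticsearch's built-in `mapping` character filter; the index name, filter name, and the emoticon mappings are made-up examples:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "my_mappings": {
          "type": "mapping",
          "mappings": [":) => happy", ":( => sad"]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "char_filter": ["my_mappings"]
        }
      }
    }
  }
}
```

The character filter runs before the tokenizer, so the replacement text is what the tokenizer actually sees.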
Apache Spark is a powerful open source processing engine built around speed, ease of use, and sophisticated analytics.
Java is the primary requirement for running Hadoop on a system, so make sure you have Java installed on your system.
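A quick way to check this, sketched below; the `JAVA_HOME` path is an example and will differ by distribution:

```shell
# Verify that Java is installed and on the PATH
java -version

# Hadoop also expects JAVA_HOME to be set, e.g. in ~/.bashrc:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
```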
Download the CSV from the following link and ingest it into Elasticsearch, either using curl or by following my last blog to insert spreadsheet data into Elasticsearch directly.
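The curl route can be sketched as follows, assuming Elasticsearch is running on `localhost:9200` and the CSV has already been converted to Elasticsearch's newline-delimited bulk format; the index name `mydata` and file name `data.json` are hypothetical:

```shell
# Bulk-ingest the prepared NDJSON file into a hypothetical "mydata" index
curl -XPOST "http://localhost:9200/mydata/_bulk" \
     -H "Content-Type: application/x-ndjson" \
     --data-binary @data.json
```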
Already have Lightbend Activator (get it here)? Launch the UI then search for graphx-scala in the list of templates.
This tutorial provides a quick introduction to using Spark. It demonstrates the basic functionality of the RDD and DataFrame APIs.
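A small sketch of both APIs side by side, assuming Spark 2.x running in local mode:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("QuickStart")
  .master("local[*]")
  .getOrCreate()
val sc = spark.sparkContext

// RDD API: parallelize a collection, then a transformation and an action
val nums = sc.parallelize(1 to 10)
val sumOfSquares = nums.map(n => n * n).reduce(_ + _) // 385

// DataFrame API: the same data with a named column
import spark.implicits._
val df = (1 to 10).toDF("n")
df.filter($"n" % 2 === 0).show() // the even numbers
```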
This Activator project describes a classic CRUD application built with Play 2.4.x, Scala, and RethinkDB.
Already have Lightbend Activator (get it here)? Launch the UI then search for akka-scheduler in the list of templates.
An example of Spark SQL and DataFrame operations, using different types of API functionality.
Spark SQL is a component on top of Spark Core that introduces a new data abstraction called SchemaRDD
The Scala interface for Spark SQL supports automatically converting an RDD containing case classes to a DataFrame.
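This reflection-based conversion can be sketched as below, assuming Spark 2.x in local mode; the `Person` case class and the sample rows are made-up illustration data:

```scala
import org.apache.spark.sql.SparkSession

// The case class fields become the DataFrame schema (name: string, age: int)
case class Person(name: String, age: Int)

val spark = SparkSession.builder()
  .appName("CaseClassToDF")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._ // brings the implicit RDD-to-DataFrame conversions into scope

val peopleRDD = spark.sparkContext.parallelize(
  Seq(Person("Alice", 29), Person("Bob", 35)))

val peopleDF = peopleRDD.toDF()
peopleDF.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 30").show()
```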
In this blog we elaborate on how to ingest data from a Google spreadsheet into Elasticsearch.
Configures the Elasticsearch server settings. This is where all options are stored.
A high-throughput distributed messaging system designed to allow a single cluster to serve as the central data backbone.
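Interacting with such a cluster can be sketched with Kafka's bundled CLI tools. This assumes a recent Kafka release with a broker on `localhost:9092`; the topic name `events` is a made-up example:

```shell
# Create a topic on the local broker
bin/kafka-topics.sh --create --topic events --bootstrap-server localhost:9092

# Produce a few messages from stdin
bin/kafka-console-producer.sh --topic events --bootstrap-server localhost:9092

# Consume them from the beginning of the topic
bin/kafka-console-consumer.sh --topic events --from-beginning \
    --bootstrap-server localhost:9092
```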
Modern functional applications architecture requires slew of silo pieces working collectively in harmony based on “PnP (Plug n Play)” architecture.
GraphX provides distributed in-memory graph computation. The GraphX API enables users to view data both as graphs and as collections.
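The dual graph/collection view can be sketched as follows, assuming Spark with GraphX in local mode; the vertex names and "follows" relationship are made-up illustration data:

```scala
import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("GraphXExample")
  .master("local[*]")
  .getOrCreate()
val sc = spark.sparkContext

// Vertices: (id, name); Edges: (srcId, dstId, relationship)
val vertices = sc.parallelize(Seq((1L, "Alice"), (2L, "Bob"), (3L, "Carol")))
val edges = sc.parallelize(Seq(Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows")))

val graph = Graph(vertices, edges)

// Graph view: structural queries on the graph
println(graph.numEdges) // 2

// Collection view: the vertices are just an RDD
graph.vertices.collect().foreach(println)
```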
We have created a classic CRUD application using Play 2.4.x, Scala, and RethinkDB.
Provides a simple, batch-like API for implementing complex algorithms.