PDI clusters – Part 1 : How to build a simple PDI cluster


I would like to start a collection of posts dedicated to PDI / Kettle clustering.
After surfing the web, I noticed that a lot of people are asking how to build PDI clusters and how to test and deploy them in a production environment, along with many questions about Carte usage. So I will try to write some tutorials about this fantastic feature offered by PDI.
While I am at it, I would like to recommend a book: “Pentaho Solutions – Business Intelligence and Data Warehousing with Pentaho and MySQL”, written by Roland Bouman and Jos van Dongen. This book is a fantastic source of knowledge about Pentaho and will help you understand the Pentaho ecosystem and tools. You can find my complete review of this book here.

Agenda

      • How to build a simple PDI cluster (1 master, 2 slaves). This post.
      • How to build a simple PDI server on Amazon Cloud Computing (EC2).
      • How to build a PDI cluster on Amazon Cloud Computing (EC2).
      • How to build a dynamic PDI cluster on Amazon Cloud Computing (EC2).

This first post is about building a simple PDI cluster, composed of 1 master and 2 slaves, in a virtualized environment (VMware).
After this article, you will be able to build your own PDI cluster and play with it on a simple laptop or desktop (3 GB of RAM is a must-have).

Why PDI clustering?

Imagine you have to run some very complex transformations and finally load a huge amount of data into your target warehouse.
You have two ways to handle this task:

  • SCALE UP: Build a single powerful PDI server with a lot of RAM and CPU. This unique server (let’s call it an ETL hub) will handle all the work by itself.
  • SCALE OUT: Create an array of smaller servers. Each of them will handle a small part of the work.

Clustering is scaling out. You divide the global workload and distribute it across many nodes; these smaller tasks are then processed in parallel (or nearly in parallel). The overall performance is limited by the slowest node of your cluster.
If we consider PDI, a cluster is composed of :

  • ONE MASTER: this node acts like a conductor, assigning the sub-tasks to the slaves and merging the results coming back from the slaves when the sub-tasks are done.
  • SLAVES: from 1 to many. The slaves are the nodes that actually do the job: they process the tasks and then send the results back to the master for reconciliation.

Let’s have a look at this schema. You can see the typical architecture around a PDI cluster: data sources, the master, the registered slaves and the target warehouse. The more PDI slaves you add, the better parallelism / performance you get.

The virtual cluster

Let’s build our first virtual cluster now. First, you will need VMware or VirtualBox (or Virtual PC from Microsoft). I use VMware, so from now on I will speak about VMware only, but you can easily transpose. I decided to use SUSE Enterprise Linux 11 for these virtual machines. It is a personal choice; you can do the same with Fedora, Ubuntu, etc.

Let’s build 3 virtual machines :

  • The Master : SUSE Enterprise Linux 11 – this machine will host the PDI programs, the PDI repository, and a MySQL database with phpMyAdmin (optional).
  • The Slave 1 : SUSE Enterprise Linux 11 – this machine will host the PDI programs and will run Carte.
  • The Slave 2 : SUSE Enterprise Linux 11 – this machine will host the PDI programs and will run Carte.

As you can see below, the three virtual machines are located on the same subnet, using fixed IP addresses ranging from 192.168.77.128 (Master) to 192.168.77.130 (Slave 2). On the VMware side, I used a “host only” network connection. You must be able to ping the master from the two slaves, ping the two slaves from the master, and ping the three virtual machines from your host. The easiest way is to disable the firewall on each SUSE machine, because we don’t need security for this exercise.
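For example, a quick connectivity check could look like the sketch below (the IP addresses are the ones used throughout this post; the firewall itself is best handled with your distribution’s own tool, e.g. YaST on SUSE):

# from the Master, both slaves must answer
ping -c 3 192.168.77.129
ping -c 3 192.168.77.130

# from each slave (and from the host), the Master must answer
ping -c 3 192.168.77.128
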

The Master configuration

As I said, the Master virtual machine hosts PDI, a MySQL database and the PDI repository. But let’s have a closer look at the internal configuration, especially the Carte configuration files.
From the Pentaho wiki, Carte is “a simple web server that allows you to execute transformations and jobs remotely”. Carte is a major component when building clusters because this program acts as a kind of middleware between the Master and the Slave servers: the slaves register themselves with the Master, notifying it that they are ready to receive tasks to process. On top of that, you can reach the Carte web service to remotely monitor, start and stop transformations / jobs. You can learn more about Carte from the Pentaho wiki.

The picture below explains the registration process between slaves and a master.

Master Slave registration

On the Master, two files are very important. Both are XML configuration files, self-explanatory and easy to read:

  • Repositories.xml : the Master must have a valid repositories.xml file, updated with all the information about your repository connection (the repository is hosted on the Master in this example). See below for my config file.
  • Carte XML configuration file : located in /pwd/, this file contains only one section, defining the cluster master (IP, port, credentials). In the /pwd/ directory, you will find some example configuration files. Pick one, for instance the one labelled “8080”, and apply the changes described below. I will keep port 8080 for communication between the Master and the two Slaves. See below for my config file.

Repositories.xml on Master

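To give you an idea of what to look for, here is a minimal sketch of a repositories.xml for a MySQL-hosted repository. The connection name, database name and credentials are examples only, and the exact tags can vary slightly between PDI versions; the safest way to get a correct file is to let Spoon create the repository connection (Spoon also stores the password in obfuscated form) and then reuse the repositories.xml it generates.

<repositories>
  <connection>
    <name>pdi_repo_connection</name>
    <server>192.168.77.128</server>
    <type>MYSQL</type>
    <access>Native</access>
    <database>pdi_repository</database>
    <port>3306</port>
    <username>pdi_user</username>
    <password>pdi_password</password>
  </connection>
  <repository>
    <name>VMWARE-SLES10-32_Repo</name>
    <description>PDI repository hosted on the Master</description>
    <connection>pdi_repo_connection</connection>
  </repository>
</repositories>
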
Carte xml configuration file on Master
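And here is a rough sketch of the Master’s Carte configuration file (the sample files shipped in /pwd/ remain the authoritative reference for your PDI version). The whole file boils down to a single <slaveserver> block flagged as the master:

<slave_config>
  <slaveserver>
    <name>master-8080</name>
    <hostname>192.168.77.128</hostname>
    <port>8080</port>
    <username>cluster</username>
    <password>cluster</password>
    <master>Y</master>
  </slaveserver>
</slave_config>
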

The Slave configuration

As I said, the two Slave virtual machines host PDI. Now let’s have a look at how to configure some very important files, the same files we changed for the Master.

  • Repositories.xml : each slave must have a valid repositories.xml file, updated with all the information about your repository (hosted on the Master for this example). See below for my config file.
  • Carte XML configuration file : located in /pwd/, this file contains two sections: the master section and the slave section. In the /pwd/ directory, you will find some example configuration files. Pick one, for instance the “8080” one, and apply the changes described below. Note that the default user and password for Carte is cluster / cluster. Here again the file is self-explanatory; see below for my config file.

Repositories.xml on Slave1 and Slave2 :
Same as for the Master, see above.

Carte xml configuration file on Slave1 (note: the <masters> section points to the Master’s address, 192.168.77.128, while the <slaveserver> section uses Slave1’s own address, 192.168.77.129; don’t write “localhost” anywhere)

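Here is a sketch of what the Slave1 file can look like (same caveat: start from one of the /pwd/ samples and adapt it; the server names are examples):

<slave_config>
  <masters>
    <slaveserver>
      <name>master-8080</name>
      <hostname>192.168.77.128</hostname>
      <port>8080</port>
      <username>cluster</username>
      <password>cluster</password>
      <master>Y</master>
    </slaveserver>
  </masters>
  <report_to_masters>Y</report_to_masters>
  <slaveserver>
    <name>slave1-8080</name>
    <hostname>192.168.77.129</hostname>
    <port>8080</port>
    <username>cluster</username>
    <password>cluster</password>
    <master>N</master>
  </slaveserver>
</slave_config>
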
Carte xml configuration file on Slave2: identical to Slave1’s, except that the <slaveserver> section uses Slave2’s own address, 192.168.77.130 (again, don’t write “localhost”), and a different name, for instance slave2-8080.

Starting everything

Now it is time to fire up the programs. I assume you have already started MySQL and that your PDI repository is active and reachable by PDI. It is strongly recommended to work with a repository hosted on a relational database. Let’s start Carte on the Master first. The command is quite simple: ./carte.sh [xml config file].
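For example, assuming you saved your edited copies of the /pwd/ samples under the names below (adapt the file names and the installation path to your own setup):

# on the Master
./carte.sh pwd/carte-config-master-8080.xml

# on Slave1 and Slave2, each with its own file
./carte.sh pwd/carte-config-slave1-8080.xml
./carte.sh pwd/carte-config-slave2-8080.xml
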

[screenshot: Carte startup output on the Master]
This output means that your Master is running and a listener is active on the Master’s IP address on port 8080. Now let’s start the two slaves. Here again, the command is simple: ./carte.sh [xml config file]. Look at the output for Slave1 below: you can see that Carte has registered Slave1 (192.168.77.129) with the master server. Everything is working fine so far.

[screenshot: Carte startup output on Slave1, showing the registration with the Master]
Finally, the output for Slave2. Looking at it below, you can see that Carte has registered Slave2 (192.168.77.130) with the master server. Here again, everything is fine so far.

[screenshot: Carte startup output on Slave2, showing the registration with the Master]
At this point, we have a working Master and two registered slaves (Slave1 and Slave2) waiting to receive tasks from the Master. It is now time to create the cluster schema and a PDI transformation (and a job to run it). Let's go for it.

 

PDI configuration

First we have to declare the slave servers we previously created and started. That's pretty easy. Select the Explorer mode in the left pane, then click on the "Slave server" folder; this will pop up a new window in which you declare Slave1 as shown below.

 

Repeat the same operation for Slave2 and for the Master so that you end up with 3 registered servers, as in the picture above. Don’t forget to type the right port (we have been working with 8080 since the beginning of this exercise).

Now we have to declare the cluster. Right-click on the cluster schemas folder (the next folder) and choose New. This will pop up a new window in which you fill in the cluster parameters: just type a name for your cluster, then click the “Select servers” button, choose your three servers and click OK. You will then see that your cluster is created (Master and Slaves), as below.

 

 

Creating a job for testing the cluster

For this exercise, I won't create one from scratch but will use an existing example created by Matt Casters. The transformation is very interesting: it simply reads data from a flat file and computes statistics for each slave (rows/sec, throughput, ...) into a target flat file. You can download the transformation here, the job here and the flat file here (21 MB zipped).

I assume you know how to link a transformation into a job. Don't forget to change the flat file location for the source (/your_path/lineitem.tbl) and for the destination (/your_path/out_read_lineitems). Then, for each of the first four steps, right-click and assign the cluster (the one you named previously, see above) to the step. You will see the caption “Cx2” at the top right of each icon. There is nothing else to change. Here is a snapshot of the contextual menu when assigning the cluster to the transformation steps (my PDI release is in French, so you have to look at “Clustering” instead of “Partitionnement”).

Clustering steps

 

Have a look at the transformation below. The caption “Cx2” at the top right of the first four icons means you have assigned your cluster to run these steps. In contrast, the JavaScript step “calc elapsed time” won’t run on the cluster but on the Master only.


And have a look at the job (calling the transformation above). This is a typical job, made of a Start step and an “execute transformation” step. We will start this job with Kitchen later.

Main Job

 

Running everything

Now it is time to run the job / transformation we built. First we will see how to run the transformation from Spoon, the PDI GUI. Then we will see how to run the job (and the transformation) from the Linux console with Kitchen and Pan, and how to interpret the console output.

First, how to start the transformation from Spoon: simply click on the green play symbol. The following window will appear on your screen (once again, my screen is in French, sorry for that). All you have to do is check the top-right option to select clustered execution (“Exécution en grappe” in French). I suppose you are already quite familiar with this screen, so I won’t go further into it.

Start_Transformation

 

Then you can run the transformation. Let’s have a look at the Spoon trace (don’t forget to display the output window in PDI and select the Trace tab).

[screenshot: Spoon execution trace]
This trace is fairly simple. First we can see that the Master (IP .128) found its two slaves (IP .129 and .130) and that the connection is working well. The Master and the two Slaves communicate throughout the process. As soon as the two Slaves have finished their work, we receive a notification (“All transformations in the cluster have finished”), followed by a small summary (number of rows).

Let’s have a look at the Master command line (remember we started Carte from the Linux command line). For the Master, we have a very short output. The red lines are familiar to you by now: they correspond to the Carte startup we did a few minutes ago. Have a look below at the green lines: these lines were printed by Carte while the cluster was processing the job.

[screenshot: Carte console output on the Master during the run]

Let’s have a look at the Slave 1 output. Here again, the red lines come from the Carte startup. The green lines are the interesting ones: you can see Slave 1 receiving its portion of the job to run, and how it did it, reading rows in batches of 50000. You can also see the names of the steps that Slave 1 processed in cluster mode: lineitem.tbl (reading the flat file), current_time (catch current time), min/max time and slave_name. If you remember, these steps were flagged with a “Cx2” at the top right of their icon (see below) when you assigned your cluster to the transformation steps.

Slave icons

[screenshot]

The output for Slave 2, displayed below, is very similar to Slave 1.

[screenshot: Carte console output on Slave 2 during the run]

This is great fun to do! Once you have started Carte and created your cluster, you are ready to execute the job. You will then see your Linux consoles printing information while the job is being executed by your slaves. This post is about understanding and creating the whole PDI cluster mechanism, so I won’t talk about optimization for the moment.

 

Hey, what’s the purpose of my transformation?

As I said before, this transformation simply reads records from a flat file (lineitem.tbl) and computes performance statistics for every slave: rows/sec, throughput, and so on. The last step of the transformation creates a flat file containing these stats. Have a look at it.

[screenshot: the raw statistics output file]

Once formatted with a spreadsheet tool, the stats look like this.

Stat file

Don’t pay too much attention to the start_time and end_time timestamps: time synchronization was not set up on my three virtual machines, hence they are out of sync. You will also notice that, in the example above, the performance of the two slaves is not homogeneous. That’s normal: don’t forget I’m working in a virtualized environment built on a workstation, and this tutorial is limited to demonstrating how to create and configure a PDI cluster. No optimization has been taken into account at this stage. On a fully optimized cluster, you will get (almost) homogeneous performance.

Running from the Linux console

If you want to execute your job from the Linux command line, no problem: Kitchen is here for you. Here is the syntax for a job execution. Note: VMWARE-SLES10-32_Repo is my PDI repository, running on the Master. I’m sure you are already familiar with the other parameters.

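As a sketch, the call could look like this; the job name and the admin / admin credentials are examples to adapt to your repository and to the name you gave the downloaded job:

./kitchen.sh -rep=VMWARE-SLES10-32_Repo -user=admin -pass=admin -dir=/ -job=run_lineitems -level=Basic
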

For executing your transformation, use pan. Here is the typical command.

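And a sketch of the equivalent Pan call, again with an example transformation name:

./pan.sh -rep=VMWARE-SLES10-32_Repo -user=admin -pass=admin -dir=/ -trans=read_lineitems -level=Basic
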
Conclusion and … what’s next?

Well, I hope you found here some explanations and solutions for creating a basic PDI cluster. You can create more than 2 slaves if you want; the process is the same. Don’t forget to add these new slaves to the cluster definition in Spoon. As I said, no particular attention was given to optimization; this will be the topic of a future post. Feel free to contact me if you need further explanations about this post or if you want to add some useful comments, I will answer with pleasure.

The next post will be about creating the same architecture, with, let’s say, 3 or 4 slaves, on the Amazon cloud computing infrastructure (EC2). It will be a good time to talk about cloud computing in general (pros, cons, architecture, …).

How to get rid of the “JPivot has been replaced by Pentaho Analyzer…” message in Pentaho BI Server CE 4.5 or 4.8


In this quick post I will show how to get rid of the “JPivot has been replaced by Pentaho Analyzer…” message in Pentaho BI Server CE 4.5 or 4.8; it also works for Pentaho BI Server CE version 3.10.0 stable.

The annoying message is the following

JPivot has been replaced by Pentaho Analyzer.
It is provided as a convenience but will no longer be enhanced or offically supported by Pentaho.

It appears every time you open the JPivot client.

1) Open the Pivot.jsp file in the biserver-ce-4.5.0-stable/biserver-ce/tomcat/webapps/pentaho/jsp folder

And search for the following text:

      JPivot has been replaced by Pentaho Analyzer.
        It is provided as a convenience but will no longer be enhanced or offically supported by Pentaho.

This text is contained in the following div

 <div id="deprecatedWarning" style="margin: auto; width: 100%">
 <table width="580px" align="center" style="background-color: #fffdd5; border-style: solid; border-color: #dcb114; border-width= 1px; font: normal .85em Tahoma, 'Trebuchet MS', Arial">
 <tr>
  <td>
  <img src="./jpivot/navi/warning.png"/>
 </td>
  <td>
 JPivot has been replaced by Pentaho Analyzer.<br/>
 It is provided as a convenience but will no longer be enhanced or offically supported by Pentaho.
  </td>
  </tr>
  </table>
  </div>
 

Just comment out this div or delete it, and the problem is solved ;-).
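For example, you can wrap the whole div in a JSP comment (a sketch; an HTML comment <!-- ... --> also works, but a JSP comment keeps the markup out of the generated page entirely):

<%--
 <div id="deprecatedWarning" style="margin: auto; width: 100%">
   ... the whole table shown above ...
 </div>
--%>
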

Hope you enjoy!

Infobright – Column Oriented, Open-Source Analytic Database


An interesting article about Big Data and column-oriented engines, written by a student…

… Students majoring in software engineering are required to take another course, “Data Storage & Information Retrieval Systems”, as a prerequisite for Databases. DS&IRS mainly focuses on optimized storage and retrieval of data on peripheral storage like an HDD or even tape! (I did one of my most sophisticated and enjoyable projects during this course: we had to implement a search engine which, compared to the boolean model of Google, is supposed to be more accurate. More information about this project can be found in my older posts.) During these courses, students are required to work on specific projects, designed to help them gain a better perspective on, and intuition for, the problem at hand.

I don’t know about other universities, but in ours, seeing students’ performance on such projects is a disappointment. While doing fun projects as part of a course is quite an opportunity to learn more, students beg to differ. The prevailing attitude is that our professors are torturing us and that we should resist doing any homework or projects! You have no idea how hard it is to escape that dogma when you have to live among such students. It is unfortunate how reluctant most students are to do any studying at all. For such students, learning only happens when they are faced with a real problem or task.

 

So here’s the problem. You are supposed to do your internship at a data analysis company. You are given 100 GB of data, consisting of 500 million records or observations. How would you manage to use that amount of data? If you recall the DS&IR course, you know that a single iteration through all the records would take at least 30 minutes, assuming all of your hardware is average consumer grade. Now imagine you have a typical machine learning optimization problem (a simple unimodal function) that may require at least 100 iterations to converge. Roughly estimated, you would need at least 50 hours to optimize your function! So what would you do?

 

That kind of problem has nothing to do with your storage method: a simple contiguous block of data, which minimizes seek time on the hard disk and is read sequentially, is as good as you can get. Such problems are tackled with an optimization method that minimizes access to the hard disk and finds a decent, near-optimal solution.

 

Now imagine you could reduce the amount of data you need on each iteration by selecting records with a specific feature. What would you do? The former problem doesn’t even need a database to do its job. But now that you need to select records with a specific attribute (not necessarily a specific value), you shouldn’t just iterate through the data and test every record against your criteria. You need to manage the data on disk and build a wise index of it, which helps you reduce disk access and answer your problem exactly (or close enough). That’s when databases come in handy.

 

Now the question is: what kind of database should I use? I’m a Macintosh user, with limited RAM, a limited and slow hard disk, and a simple processor! Is Oracle the right choice? The answer is no: you have a specific need, and these general-purpose databases may not be the logical choice, not to mention their price. So what kind of service do we require? In general, users may need to update records, alter the table schema, and so on. To provide such services, databases have to sacrifice speed, memory and even CPU. Long story short, I found an alternative open-source database that was perfect for my needs.

 

Infobright is an open-source database which is claimed to “provide both speed and efficiency. Coupled with 10:1 average compression”. According to their website, the main features (for my use) are:

 

  • Ideal for data volumes up to 50TB
  • Market-leading data compression (from 10:1 to over 40:1), which drastically reduces I/O (improving query performance) and results in significantly less storage than alternative solutions.
  • Query and load performance remains constant as the size of the database grows.
  • Runs on low cost, off-the-shelf hardware.

 

Even though they don’t offer a native Mac build, they provide a virtual machine running Ubuntu, preconfigured with Infobright. And here’s the best part: even though the virtual machine allocation was pretty low (650 MB of RAM, 1 CPU core), it was able to answer my queries in about a second! The same query on a server (quad core, 16 GB of RAM, running MS SQL Server) took the same amount of time. My query was a simple select, but according to the documentation, Infobright is highly optimized for business intelligence and data warehousing queries. I imported only 9 million records, and they consumed just 70 MB of my hard disk! Amazing, isn’t it? Importing all 500 million records would take only 3.5 GB of disk!

 

Infobright is essentially an optimized version of the MySQL server, with an engine called Brighthouse. Since its interface is SQL, you can easily use Weka or Matlab to fetch the necessary data from your database and integrate it into your learning process with a minimal amount of code.

 

Is Your Big Data Hot or Not? by Farnaz Erfan


Data is the most strategic asset for any business. However, the massive volume and variety of data have made it trickier these days to catch it at the right time and in the right place, and to discover what’s hot (and needs more attention) and what’s not.

Heat grids are ideal for seeing a range of values in data, as they provide a gradient scale, showing changes in data intensity through the use of colors. For example, you can see what’s hot in red, what’s normal in green, and everything else in various shades in between. Let me give you two examples of how companies have used heat grids to see whether their data is hot or not:

Example #1 – A retailer is looking at week-by-week sales of a new fashion line to understand how each product line is performing as items get continually discounted throughout the season. Data is gathered from thousands of stores across the country and then entered into a heat grid graph that includes:

  • X axis – week 1 through 12, beginning from the launch of a new campaign (e.g. Nordstrom’s Summer Looks)
  • Y axis – product line (e.g. shoes, dresses, skirts, tops, accessories)
  • Color of the squares – % of discount (e.g. dark red = 70%, red = 60%, orange = 50%, yellow = 30%, green = full price)
  • Size of the squares – # of units sold

Looking at this graph, the retailer can easily see that most shoes sell at the beginning of the season – even without heavy discounts. This helps the retailer predict inventory levels to keep up with the demand for shoes.

It also shows that accessories almost never sell at regular prices, nor do they sell well when the discount levels are higher than 70%. Knowing this, the retailer can control its capital spending by not overstocking this item. The retailer can also increase profit per square foot of store space by selling accessories earlier in the season, avoiding steep markdowns and inventory overstock at the end of the season.

Example # 2 – A digital music streaming service provider is using analytics to assess the performance of its sales channels (direct vs. sales through different social media sites such as Facebook and Twitter) to guide future marketing and development spend. For that, the company uses a heat grid to map out:

  • X axis – various devices (iPhone, iPad, Android Smartphone, Android Tablet, Blackberry)
  • Y axis – various channels (direct site, Facebook, Twitter, …)
  • Color of the circles – # of downloads (0-100 = red, 100-1000=orange, 1000-10000 = yellow, 10000+ = green)
  • Size of the circles – app usage hours per day – the bigger the size, the more usage

This graph helps the music service provider analyze data from millions of records to quickly understand the popularity and usage patterns of their application on different devices, sold through different channels.

Heat grids can be used in a variety of other forms, such as survey scales, product rating analysis, customer satisfaction studies, risk analysis and more. Are you ready to find out whether your big data is hot or not? Check out this 3-minute video to learn how heat grids can help you.

Understanding buyers/users and their behavior is helping many companies, including ideeli, one of the most popular online retailers, and Travian Games, a top German MMO (massively multiplayer online) game publisher, gain better insight from their hottest asset: their big data!

What is your hottest business asset?

– Farnaz Erfan, Product Marketing, Pentaho

This post originally appeared on Smart Data Collective on July 13, 2012.

Creating Pentaho Reports from MongoDB


So you’ve made the move and started using MongoDB to store unstructured data.  Now your users want to create reports on the MongoDB databases and collections.  One approach is to use a Kettle transformation that retrieves data from MongoDB for reports; this approach is documented on the Pentaho Wiki.  However, I want to use the MongoDB database directly, without dealing with Spoon and Kettle transformations.  Fortunately, Pentaho Reporting also supports scripting, with Groovy built in.  This tutorial shows you how to create a report against MongoDB data using the MongoDB Java driver and Groovy scripting.

Pre-Conditions
You should already have MongoDB installed and accessible.  I’m running it on the same machine with the default settings, so vary the code as needed for your configuration.
You also need to put the mongo-java-driver-2.7.2.jar file in the library folders for Report Designer and the BA Server:
$PENTAHO_HOME/design-tools/Pentaho Report Designer.app/lib/
$PENTAHO_HOME/server/biserver-ee/tomcat/webapps/pentaho/WEB-INF/lib
Restart the app and BA Server if they are running to pick up the new .jar files.
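On a typical installation the copy boils down to something like this (adjust $PENTAHO_HOME and the driver version to your setup):

cp mongo-java-driver-2.7.2.jar "$PENTAHO_HOME/design-tools/Pentaho Report Designer.app/lib/"
cp mongo-java-driver-2.7.2.jar "$PENTAHO_HOME/server/biserver-ee/tomcat/webapps/pentaho/WEB-INF/lib/"
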
Setting Up
The first thing you need is some data.  I’ve created an input file of sales by region and year to use as an example.  Download it and import the data using the mongoimport command:
> mongoimport -d pentaho -c sales data.json
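For reference, each line of data.json is a JSON document whose field names match what the report script below expects; a record might look roughly like this (the values are made up):

{ "region" : "East", "year" : 2012, "q1" : 100, "q2" : 120, "q3" : 110, "q4" : 130 }
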
Verify that the data has been successfully imported by opening the mongo shell and using the following commands:
> use pentaho
> db.sales.find();
You should see a list of documents that were added.
Creating the Report
  1. Using Pentaho Report Designer, create a new report.
  2. Add a data source and choose Advanced -> Scriptable
  3. Select groovy as the language and click the (+) for a new query
  4. Enter the following code as the script (check the server address used when creating the database connection)

import com.mongodb.*

import org.pentaho.reporting.engine.classic.core.util.TypedTableModel;
// Connect to the local MongoDB instance and open the "sales" collection
def mongo = new Mongo("127.0.0.1", 27017)
def db = mongo.getDB("pentaho")
def sales = db.getCollection("sales")

// Column names and types for the table model handed back to the reporting engine
def columnNames = new String[6];
columnNames[0] = "Region";
columnNames[1] = "Year";
columnNames[2] = "Q1";
columnNames[3] = "Q2";
columnNames[4] = "Q3";
columnNames[5] = "Q4";
Class[] columnTypes = new Class[6];
columnTypes[0] = String.class;
columnTypes[1] = Integer.class;
columnTypes[2] = Integer.class;
columnTypes[3] = Integer.class;
columnTypes[4] = Integer.class;
columnTypes[5] = Integer.class;
TypedTableModel model = new TypedTableModel(columnNames, columnTypes);

// Copy each MongoDB document into a row of the table model
def docs = sales.find()
while (docs.hasNext()) {
  def doc = docs.next()
  model.addRow([ doc.get("region"), doc.get("year"), doc.get("q1"), doc.get("q2"), doc.get("q3"), doc.get("q4") ] as Object[]);
}
docs.close()

// The value of the last expression is what the scriptable data source returns
model;
This will read the data from MongoDB and return the table model needed by the reporting engine.
From here it’s just standard report generation and publishing, which is described in the Pentaho documentation.

via Creating Pentaho Reports from MongoDB.

Thanks to BillBack  @billbackbi

New Ctools releases 12.07.19 (Lots of them!) by Pedro Alves


Article by Pedro Alves, from his blog:
New Ctools releases 12.07.19 (Lots of them!)
Lots of new releases, including some brand-new projects!

CDE Release 12.07.19

Major changes:

  • Allow plugins to register extra cde components
  • CPF usage as component basis
  • New Popup Component and Reference
  • Added new component based on textarea html object.

Full Changelog:

  • Implemented [REDMINE 281] – Allow plugins to register extra cde components
  • Support for output index in external CDAs
  • CPF usage as component basis
  • New Popup Component and Reference
  • Added new component based on textarea html object. Added properties of max length and size to text input component edition
  • Fixed [REDMINE 856] – CDE allows non-privileged users access to edit dashboards.
  • Fixed [REDMINE 721] – SelectMulti Isn’t populated, feeded by another SelectMulti Component
  • Fixed [REDMINE 424] – CDA creation sometimes adds a prefix / to catalog that disables Mondrian role mapping
  • Added start and end Date parameters to DatePicker Component, which can be controlled by the dateFormat parameter.
  • FileExplorer returns different info from getExtension in fileBased and db rep. Adding both
  • Allow Icons (.ico) to be downloaded
  • [PATCH] BulletChart support for the tooltipFormat option
  • disable edit option in olap wizard file open; prompts a bit less green
  • SiteMap Component update

CDF Release 12.07.19

Major changes:

  • Add clippedText AddIn
  • add column headers support to group headers addIn
  • Stacked bars with line: plot panel clipping was activated due to forcing orthoFixedMin to 0 and resulted in cut dots

Full changelog:

  • [Redmine-207] – tableComponent oLanguage fix
  • Turned pvc.defaultColorScheme public (for readonly access)
  • [FEAT] Added a hook for plugging in a default chart colorscheme (CvK Feb. 2012)
  • Add clippedText AddIn
  • add column headers support to group headers addIn
  • Added support in fireChange for components to be able to update without triggering increment/decrementRunningCalls.
  • Changed the behaviour of the clickable “span” for removal. Before, we needed to click on the “x” to get the selection out, now you just need to click on the span of that selection
  • output-index for external cda’s
  • jqueryui: use ctrlKey instead of metaKey for multiple selection
  • set datepicker defaults only if it exists
  • CDF tutorial needs that both zip and xml files be downloadable by CDF
  • Patch to fix HeatGrid performance problem.
  • Added queryState to query component.
  • MapComponent fix for FF13 – FooComponent as first object in classNames[]
  • [PATCH] Stacked bars with line: plot panel clipping was activated due to forcing orthoFixedMin to 0 and resulted in cut dots
  • Added underscore, backbone and mustache
  • circle addin border width was not working on Chrome
  • [REDMINE 721, REDMINE 771] selectMultiComponent null-related issues; impromptu css less green
  • Added override to input[type=text] margin and dataTables min-height
  • [PATCH] BulletChart support for the tooltipFormat option.
  • Added TextareaInputComponent
  • Added erichynds Multiselect JQuery Plugin Files
  • Fix the Clean template, which wasn’t actually as clean as all that.
  • changed table click handler to use rawData instead of query.lastResult so we can access postFetch changes
  • [REDMINE 206] – Dates are always used in iso format on parameters regardless of display date format

CDA Release 12.07.19

Major upgrades:

  • Now using CPF (Community Plugin Framework)
  • Integration with CDV

Full changelog:

  • Fixed [REDMINE CDA797] – ClassLoader issue using CdaQueryComponent in FusionContentGenerator
  • Fixed [REDMINE CDA787] – use cde ext-editor for cda files
  • Introduced CPF As plugin base
  • New Robochef Version – Memory optimization
  • CDV Integration
  • print stack trace for hazelcast serialization errors, +MdxDataAccess.ExtraCacheKey.toString()
  • path issue in cda editor
  • -Util.isNullOrEmpty, using StringUtils; build was changed to use 4.5 2 commits ago
  • pathParams->requestParams
  • cde-editor corrections
  • Use cde as editor if available; +RepositoryAccess and some cleaning

CDB Release 12.07.19

First release!

CDC Release 12.07.19

First release!

CDV Release 12.07.19

First release!