
Understanding Wealth in Canada


Constantly hearing about mounting household debt, I wanted to better understand the financial health of Canadians for myself. Using data collected from Statistics Canada, I was able to gather enough information to create a story and generate powerful visualizations using SAP Lumira. Click below to view my #dataviz story.

 

Infographic.png

For more information on household debt in Canada or to watch a video explaining this data in further detail, please visit this Business News Network link.


TOP 10 ATP Tennis Players in 2014


Dear all (especially tennis fans),


I adore tennis and I really like SAP, so I decided to analyze the best ATP players of 2014 with the help of a great tool: SAP Lumira. I used data (as of 1 December 2014) from the ATP World Tour site: http://www.atpworldtour.com/Matchfacts/Matchfacts-List.aspx and created my own dataset.


Rafael Nadal (yes, I'm a big fan) and Roger Federer once said:


“I've stayed calm when I'm winning and I've stayed calm when I've lost. Tennis is a sport where we have a lot of tournaments every week, so you can't celebrate a lot when you have big victories, and you cannot get too down when you're losing, as in a few days you'll be in the next tournament and you'll have to be ready with that.” – Rafael Nadal


“You always want to win. That is why you play tennis, because you love the sport and try to be the best you can at it.” – Roger Federer


Of course I agree with them, but I must add something: “The right answer is analytics. It can help you get ready and it can help you be the best you can be.”


I started with each player's current ranking and the highest ranking of his career. We can see that at some point in their careers they all held a higher ranking than today, except of course Novak Djokovic, the current No. 1, and Kei Nishikori (currently No. 5).


current vs highest rank.jpg


Below we can also see which players have the most points and the most titles:


titles.jpg





I read my horoscope every day, so now I can read theirs too. We can see that two of them are Gemini (Novak Djokovic, who also has the most points in 2014, and Rafael Nadal, who has the third-most points). So if you are a Gemini, here is some friendly advice: start playing tennis.


astrological sign.jpg

Being a smaller person myself, I decided to analyze their height against their 2014 points to see if there is any connection. We can see that the three best players (with the most points) are almost the same height.


height.jpg


I also compared the surfaces they play on: clay, grass and hard courts (how many matches they won and lost on each surface).


Grass Environment.jpg

clay.jpg

hard.jpg

I also wanted to analyze their game and look at their first and second serve points won, serve return points won, and aces.

 

 

1st and 2ns serve points.jpg

serve return points.jpg

 

 

 

aces.jpg

 

And we can also see all tournaments in 2014:

 

Tournaments by Country.jpg

 

Enjoy,

 

Breda

Lumira and the Superheroes


Dear data aficionados,


For my very first steps with SAP Lumira, I would like to present a short analysis of the effect of gender on superheroes... Let's have a look!


page1.PNG


Well, well, well... when you talk to a superhero, you have a better chance of saying "Hello, Mr." than "Hello, Ms.", and the combination of powers is almost unique to each superhero, although superladies receive super strength and super speed more often!


page2.PNG



Men are taller and are the only ones to be full-time superheroes... However, fortune is evenly distributed, with an equivalent number of millionaires on each side!

page3.PNG

Everyone seems to live in New York or Gotham City...


Cheers,


DataGeek III Challenge: Bicycle Trend Analysis in Vancouver | By: Ren Horikiri & Trevor Duong


Preface:

 

Vancouver is aiming to become the greenest city in the world by 2020, so naturally we wanted to determine how that goal is progressing in one of its most visible elements: traffic.  It is widely known that cars are a major source of pollution, whereas the bicycle generates no pollution (except maybe a sweaty shirt or two).

 

As a recently converted bicyclist, I wanted to know whether bicycle usage is increasing significantly year over year or staying relatively flat.  Another question of mine was whether the weather has any effect on bicycle usage across the year, because I myself do not bike when it is colder than about 10 degrees Celsius.

Staring at an endless table full of data is not the best way to determine trends, so we turn to Lumira to answer the questions asked above.

 

 

Finding the Right Data:

 

The most critical element of a successful Lumira analysis is having a reliable source of good, clean data to work with.  Firstly, we needed a statistic to measure trends in bicycle usage, and the best source for this was the City of Vancouver’s statistics on separated bike lanes, found here. There are four bike lanes in total on which the City of Vancouver installed devices to count the number of bicycle users, and this proved to be almost perfect for our analysis.

 

The second important set of data is the weather data, more specifically the temperature averages for each month in Vancouver for the past three years.  Through a quick Google search, we were able to uncover accurate temperature data provided by Environment Canada found here.  Temperature was chosen since there is an uptrend and downtrend through the year, whereas measurements such as rain are more sporadic and harder to capture accurately.

 

 

 

 

Preparing the Data:

One of the more tedious tasks of analyzing the data was cleaning it to the point where it could be pulled into Lumira.  For the bike lane data, we needed to convert the PDF file provided by the City of Vancouver into XLSX using Adobe Acrobat.  The next step was to clean the data by deleting extra text, converting the month field to the numerical month of the year (1, 2, 3… so that it would sort in the right order), and ensuring that each bicycle lane count was mapped to the correct month.

 

PDF file provided by the City of Vancouver on bike lane usage:chart raw data.png

 

Cleaned bike lane usage data and average temperature:

chart cleaned data.png

 

 

 

Determine the Best Chart to Use:

Since there were multiple bike lanes, we wanted to show each bike lane as a distinct object so we could compare the trends in each.  The ability to try different chart types with just a click of the mouse was very rewarding, as we could see and compare which chart gave us the best view.  In the end, we agreed that the line chart was the easiest to compare bike lane usage by month.  To incorporate temperature data, we chose a "Line Chart with 2 Y-axes" to plot the bicycle lane usage counts and average temperatures.

 

2 Y-Axes showing bike lane usage and average temperature:

chart weather and bike.png

 

 

Challenges:

The need to rely on Excel was higher than expected so our hope in the future would be that Lumira could take a larger role in cleansing and organizing the data prior to analysis.

 

Another challenge was with the Line Chart with 2 Y-Axes: the line colors for the bike lane usage counts were no longer distinct, but instead appeared as shades of blue.

 

1 Y-Axes showing only bike lane usage (with distinct colors):

chart bike only.png

 

Next Steps:

The first challenge is to gather data on how many Vancouver SAP employees use the bike room per month over a one-year period and then compare it to the Vancouver biking community.


The second is to challenge the City of Vancouver to take multiple bike usage counts along each of the bike lanes. With this data, the physical bike lanes could be mashed up with a local map and the monthly usage overlaid onto it.

 

In either challenge we expect Lumira visualization to help convey the message that biking is good for you.

Data Geek III - Analysis of Delays on Chicago Buses


This blog post is my third and final part of my entry for the 2014 Data Geek Challenge.

Step 1: SAP Lumira Extension: Google Maps

Step 2: Lumira Dataset: Bus Tracker from the Chicago Transit Authority

 

As described in my previous blog post about the Bus Tracker dataset (step 2), the City of Chicago in general and the Chicago Transit Authority in particular have made major efforts to leverage IT and deliver a better experience for their customers. In addition, these data are available to developers like us. Let's analyze bus delays in the city of Chicago (the final infographic is available as an attachment).

 

100% of buses are on-time!

 

I collected information on the position and status (delayed / on time) of buses in Chicago over the month of October 2014 (see step 2). My data collection system wasn't really reliable, so the data is pretty spotty. To sum it up, out of 4,261 checks, only 36 buses were delayed. Believe it or not, that is only 0.85%, which means that, to a first approximation, 100% of the buses were on time!

 

Image_01.jpg

 

Which routes? What day?

 

Drilling down into the available information, we need to figure out whether these delays were exceptions or whether any outlier could be found. If you select all the delayed vehicles, group them by route and visualize them in a bar chart, you'll notice that there are no real outliers. This could potentially have helped identify a troublesome intersection or district, but it's not the case here.

 

However, if you reproduce the same analysis by day, the result clearly identifies Sunday as a sore spot. Looking a little closer, we can see that only 3 out of 7 days are showing. In this particular instance, I believe the reason is that the source data is inconsistent. Since I worked on the data source mostly on Sundays, most data were collected on that day, which doesn't mean most delays were on weekends.

Image_04.jpg

 

Where?

 

I leveraged the Google Maps custom extension detailed in step 1 to identify the location of incidents. There's an old saying in the Windy City: "There are only 2 seasons: snow and construction". Therefore, I searched for pockets of delays that could be explained by local events. Again, as you can see here, there are no clear clusters of delays in the data I collected.

Image_05.jpg

Why?

 

Since I couldn't find a clear pattern in the routes, the days, or the locations of the delays, I tried to perform a text analysis using the service bulletins. There is no measurable correlation between delays in bus transit and the events described in the bulletins, but we can easily see here that "Reroute" and "Bus Stop Relocation" are the events with the most impact.

 

Image_07.jpg

 

Conclusion and Learning Points

 

Overall, this data visualization exercise didn't reveal unexpected insights, except for the fact that buses in Chicago are almost always on time. That's pretty good news.

 

On the other hand, I believe the idea of the Data Geek Challenge is less in the insights than in the learning process, which was my biggest achievement this year. Here are some of my learning points:

  • In every data analytics project, data is king. In this particular case, I was able to identify holes when drilling by day, but was limited by the data quality.
  • The best feature of Lumira in this situation is the ability to simply update the data and refresh the analysis. I will improve my data collection system and rerun the analysis next month, and maybe I will be able to identify something else.
  • Custom visualizations are very powerful on the desktop, but cannot be shared via Lumira Cloud. They can, however, be shared within an organization by leveraging Lumira Server.
  • It was my first time using pictograms in the Compose section of Lumira. I was surprised to discover it needs SVG files. You can either download data sets like SVG Map Icons, create your own with a good old notepad (like I did), or use a specialized graphics tool like Inkscape.
  • I am by no means a designer, so I used a great trick I learned from Mico Yuk's webinars: check some design websites and find inspiration, or buy a theme or template. My favorite is GraphicRiver.
  • The Lumira Team is amazing and was always there to help me when I ran into some bumps, especially with the custom extension.

 

I would encourage everybody to give the next Data Geek Challenge a try. It's a great way to have fun while learning.

DataGeek III: Thrones of Data - Volleyball Women's World Championship Italy 2014


Titans.png

From September 23, 2014 to October 12, 2014, Italy hosted a great sporting event: the 17th edition of the FIVB Volleyball Women’s World Championship. The competition was a great success, thanks to the technical level and to the enthusiasm aroused by the Italian team, which played a wonderful tournament and reached the semifinals. The championship title was won, for the first time in its history, by the USA, which defeated China in a very exciting final.


With the help of SAP Lumira, I want to show you some interesting statistics.


Data set

 

I gathered my data from the website Statistics - FIVB Volleyball Women's World Championship Italy 2014. In a very easy and intuitive way, I created my dataset by copying tables from the website mentioned above and using the "Copy from Clipboard" option.

 

Analysis

 

History

 

First of all, a quick glance at the history of the competition. In the "Compose" step, I merged the chart built from the gold medal tally into a slide with comments and notes.

 

History.PNG

 

Best players

 

The world's best volleyball players showed their skills during the three weeks of the competition. Some of them, such as the young Italian spiker Valentina Diouf, delivered their best ever performance.

 

The following bubble chart shows the ranking of the best scorers.

 

The X axis shows the rank;

The Y axis shows the total number of points scored (the sum of spikes, blocks and serves);

The bubble width shows the number of spikes.

 

Rank.png


Valentina Diouf is the second-highest scorer, behind the Chinese player Ting Zhu and ahead of the MVP of the tournament, Kimberly Hill.


The following picture shows the same bubble chart limited to the top 15 scorers. With the "Compose" function I created an infographic and added a picture of the MVP Kimberly Hill celebrating a point with her teammates.


Rank_2.png


Final standing


The USA won the title, the silver medal went to China, and third place went to Brazil, which defeated Italy in the bronze medal match.


I created a geo bubble chart to show the final standings of the championship in an unusual way. With the animation feature, the result is very nice.


1.PNG


2.PNG

 

3.PNG

4.PNG


Download data to CSV file from SAP Lumira visualization and storyboards


Dear SCN Community,

 

I have been working with SAP Lumira for the past five months, and my experience with this tool has been amazing. But one thing I noticed is that, at present, there is no option to download data back to either CSV or Excel format from Lumira Desktop.

 

I believe this feature would be quite useful: most of us love enriching the data and saving it in local files for future use.

I thought of creating this feature as a custom extension in SAP Lumira.

 

Here it is:


Step 1

The screen below shows the button-style custom extension I have created. I have selected a few dimensions and measures.

 

01.jpg

Step 2

I clicked on the "Export to CSV" button. A window popped up asking where the file should be saved.

 

Here I gave the filename "dataexport" and chose the desktop as the location.

02.jpg

Step 3:

Here I am opening the saved "dataexport.csv" file to see the contents.

04.jpg

 

Step 4:

I used the same extension in a storyboard as well, to check that filtered data is exported correctly.

 

05.jpg

 

Step 5:

I used an input control to filter the state to “Andhra Pradesh” and exported the data by clicking the export button.

 

File Name Given: dataexport-andhrapradesh.csv

Location: Desktop

 

06.jpg

 

Step 6:

I verified the contents of the file "dataexport-andhrapradesh.csv". Yes, only data for the state “Andhra Pradesh” was exported.

07.jpg

This is a very small feature, but I believe it is very useful. I hope you like it.
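
For readers curious how such a button can produce the file, here is a minimal sketch of the export logic in JavaScript. This is not the author's actual extension code; it assumes the extension already holds the currently filtered rows as an array of plain objects, and it uses standard browser APIs (Blob, URL.createObjectURL) to trigger the download rather than a save-location dialog.

// Minimal sketch (not the actual extension code): turn an array of row objects into CSV text and download it.
function exportToCsv(rows, fileName) {
  if (!rows || rows.length === 0) { return; }
  var columns = Object.keys(rows[0]);
  // Quote every value and escape embedded double quotes.
  var escapeValue = function (value) {
    if (value === null || value === undefined) { value = ""; }
    return '"' + String(value).replace(/"/g, '""') + '"';
  };
  var lines = [columns.map(escapeValue).join(",")];
  rows.forEach(function (row) {
    lines.push(columns.map(function (col) { return escapeValue(row[col]); }).join(","));
  });
  // Wrap the CSV text in a Blob and download it via a temporary <a> element.
  var blob = new Blob([lines.join("\r\n")], { type: "text/csv;charset=utf-8;" });
  var link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = fileName || "dataexport.csv";
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
}

Wired to the button's click handler, a call such as exportToCsv(currentRows, "dataexport.csv") would produce a file like the one shown in the screenshots above (currentRows being a hypothetical variable that holds the data passed to the extension).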

 

With regards,

Hari Srinivasa Reddy, Manager SAP Analytics

NTT DATA


Analyze unstructured data in SAP Lumira with Text Analysis


We all know and agree that data size is increasing at a tremendous pace.

With social media like Twitter and Facebook, organizations are faced with a new question: how can this unstructured data be used to its full potential?

 

In this blog, we will look at how to consume and make sense of unstructured data using HANA and SAP Lumira.


We will look at the following:

 

 

1. Importing raw data into HANA

2. Using Text Analysis to redefine the unstructured data

3. Create Analytical Model to consume the data in SAP Lumira

4. Create a LUMIRA report and get the insight.


1. Importing raw data into HANA

There are various ways to import data into HANA, like SLT, SAP Data Services or plain CSV upload.


In this example, we have downloaded the CSV files (FLAT_CMPL.zip) containing the complaints received by the NHTSA (an organization which maintains data regarding complaints, defects and recalls of cars in the USA).

Office of Defects Investigation (ODI), Flat File Downloads | Safercar.gov | NHTSA


Upload the CSV file directly to HANA and create a new Table with Fields and mapping suggested by HANA.

Modify the Field Name in the Target Table.

Mapping.jpg

 

 

The table contains details about the type of car, make, model and year, plus a "Description" column with a lot of text data about the complaint.

Description_Column.jpg

 

2. Using Text Analysis to redefine the unstructured data

 


At this point, the data is not 'complete' for analysis. Using Text Analysis by means of a simple SQL execution, this data can be structured.

CREATE FULLTEXT INDEX "GOUTAM"."TEXT_CORE" ON "GOUTAM"."COMPLAINT" ("Description") CONFIGURATION 'EXTRACTION_CORE' TEXT ANALYSIS ON;


After the text analysis process is complete (progress can be checked in M_FULLTEXT_QUEUE), the extracted data, categorized by type (TA_TYPE) and including the number of occurrences (TA_COUNTER), is written to a new TA table, "$TA_TEXT_CORE".


Below is a snapshot of how the TA process categorizes the extracted data. Since we are interested only in the parts of the car, we will use NOUN_GROUP as a filter.

TA_TYPE.jpg

TA_TEXT_CORE.jpg

 

3. Create Analytical Model to consume the data in SAP Lumira

 

 

Let's combine the original table "COMPLAINT" with the new TA table "$TA_TEXT_CORE" in an Attribute View. This view will eventually be consumed in an Analytic View.


Define the Output Columns and Activate the Attribute View.

Attribute View.jpg


As we are only interested in complaints related to a car part or incident, we will restrict the measure TA_COUNTER to records with TA_TYPE = NOUN_GROUP. This can be achieved by creating a variable on the TA_TYPE column with the value NOUN_GROUP.

 

Variable.jpg

 

The Analytic View is now available and can be consumed in any front end Tool supported by HANA.

We will be using SAP Lumira Desktop version.

Analytic View.jpg

 

4. Create a LUMIRA report and get the insight.

 

Create a new Lumira document and choose SAP HANA as the dataset source. In the next screen, select the Analytic View COMPLAINT_ANALYSIS.

Define the measures and dimensions for the report.


Since we have included a variable, it will ask you to choose a value. Choose the default "NOUN_GROUP".

 

HANA_SOURCE.jpg


HANA_VIEW_STRUCTURE.jpg

 

Create a basic column chart with Counter on the Y axis and Make, Model and TA_TOKEN on the X axis.

Here we can see the components and how they relate to the make of the vehicle. For example, FORD has 12 air-bag-related issues.

This information was initially hidden in the text, but not any more.

 

We can scroll across the various lines to see the Component and the Model involved in the Complaint.

However, our dataset is large, and it will get more interesting once we start applying filters such as a date range for the model year of the car, or filters on states/cities or components.

 

Screenshot 2014-12-08 15.22.17.png

 

If we further filter the data to only air-bag-related issues, the report will look something like this.

The counter here is simply the number of occurrences of the air bag issue for the various makes/models.

 

AIR_BAG_GRAPH.jpg

 

Of course Lumira offers much more. For example, we could create a geographical hierarchy on the City dimension and see the complaints, components and models across the states of the US, then compose a storyboard or infographic and share it seamlessly.

In the graph below, the data is filtered to the air bag issue.

AIR_BAG_GEO.jpg


To conclude, we saw how easy it is to consume text/unstructured data in SAP HANA, integrating the text analysis process with a simple SQL statement and defining the models in a graphical modeling environment, the HANA Modeler. We also saw how seamlessly HANA integrates with SAP Lumira to consume the models.

Data Geek III: GigaCon Big Data survey results


This year I discussed SAP HANA and in-memory computing at the GigaCon Big Data 2014 conference in Warsaw. Here are slides from another event, but with the same content: SAP HANA - Big Data and Fast Data
http://image.slidesharecdn.com/sapbigdataandfastdata-vr140127bext-140127130923-phpapp02/95/sap-hana-big-data-and-fast-data-5-638.jpg?cb=1408630880


I asked participants to complete a very short survey on the state of Big Data use in their organizations, and I promised to share the results back with all participants. This is where SAP Lumira came in handy: the Desktop version to analyze and visualize the survey results, and the Cloud version to share the final story publicly.

 

You can also check the results for yourself via the same public story on Lumira Cloud:

https://cloud.saplumira.com/open?key=54859214603026C4E10000000A4E4243&type=HANALYTIC

GigaConStoryLumiraCloud.jpg

DataGeek III: Thrones of Data Finalists Announced! Vote Now!


Finalists-01.png



DATAGEEK III: THRONES OF DATA FINALISTS - VOTE BELOW

 

After weeks of hard-fought battles against the Dark Data Walkers, the Land of Datarous has emerged victorious!

 

Using SAP Lumira, brave DataGeeks from across the land in the House of Dragons - Business, House of Titans - Sports & Entertainment, House of Blacksmiths - Technology, House of Healers - Health & Science and House of Spirits - Social Good banded together to defend the Throne of Lumira. By uncovering value from data and discovering new meaning through data visualization technologies, DataGeeks demonstrated how we can turn data into meaningful insight that allows us to take action that ultimately leads to change, however big or small.

 

Now that the Dark Data Walkers are beginning their retreat, we must honour the Data Lords who valiantly led their Houses to victory.

 

Voting poll is open from Monday, December 8th to Thursday, December 18th 23:59 PST. The Ultimate Ruler of the Throne of Lumira will be announced on Friday, December 19th.

 

>> CLICK HERE TO VOTE<<

 

Horizontal Banners (web)-01.png

 

Advanced Data Lord: Kishor Patil

Entry: Leading Automobile Manufacturing Companies and Countries

 

http://scn.sap.com/servlet/JiveServlet/showImage/38-99892-356571/15.jpghttp://scn.sap.com/servlet/JiveServlet/showImage/38-99892-356573/17.jpg

 

Expert Data Lord: Nikhil N Jannu

Entry: MMForum: A Real Time Analysis of Mining in India using SAP Lumira

 

http://scn.sap.com/servlet/JiveServlet/showImage/38-109752-485161/2.PNGhttp://scn.sap.com/servlet/JiveServlet/showImage/38-109752-485119/15.PNG


Horizontal Banners (web)-02.png


Advanced Data Lord: Tamara Hartenthaler

Entry: SAP DataGeek Challenge: House Of Titans - YouTube

 

moviescreencap.png moviescreencap2.png

 

Expert Data Lord: Amit Gupta

Entry: DataGeek III Challenge: Formula 1 Statistics

 

http://scn.sap.com/servlet/JiveServlet/showImage/38-116870-588026/F1+Current1.pnghttp://scn.sap.com/servlet/JiveServlet/showImage/38-116870-589280/F1+Current3New.png

 

 

Horizontal Banners (web)-03.png

 


Advanced Data Lord: Leslaw Piwowarski

Entry: Data Geek III - Space Shuttle Missions in years 2020 - 2030 Analytics with SAP Lumira


http://scn.sap.com/servlet/JiveServlet/showImage/38-115622-570646/wykres+7+number+of+crew+cargo+taken+shuttle+.pnghttp://scn.sap.com/servlet/JiveServlet/showImage/38-115622-570657/geographical+cargo+takent+by+country.png


Expert Data Lord: Pranav Nagpal

Entry: Unveiling the concepts of a machined bird and it’s behavior in air – Analyzed using the Power of SAP Lumira

 

http://scn.sap.com/servlet/JiveServlet/showImage/38-117835-601112/Airfoil+shape.PNGhttp://scn.sap.com/servlet/JiveServlet/showImage/38-117835-601132/laod+factor+part+1.PNG

Horizontal Banners (web)-04.png


Advanced Data Lord: Jill Peterson

Entry: A Family Starting Fresh by Jill Peterson on Prezi


prezi1.png prezi2.png


Expert Data Lord: Andrew Fox

Entry: Integrating SAP Lumira and ESRI mapping to deliver Location Intelligence.

 

http://scn.sap.com/servlet/JiveServlet/showImage/38-110850-500936/snow_map.pnghttp://scn.sap.com/servlet/JiveServlet/showImage/38-110850-500937/snow+-+lumira.JPG

Horizontal Banners (web)-05.png

 


Advanced Data Lord: Arijit Das

Entry: Data Geek III - Analyzing Crimes against Women in India


http://scn.sap.com/servlet/JiveServlet/showImage/38-116744-585978/pastedImage_6.pnghttp://scn.sap.com/servlet/JiveServlet/showImage/38-116744-586089/pastedImage_13.png


Expert Data Lord: Robert Russell

Entry: Data Geek III - Crime and Census Data for England and Wales


http://scn.sap.com/servlet/JiveServlet/showImage/38-117718-599312/countCrimeLAD.pnghttp://scn.sap.com/servlet/JiveServlet/showImage/38-117718-599315/populationWithOutLondon.png

>> CLICK HERE TO VOTE<<

SAP Lumira Visualization Extension - Hello World from SAP Web IDE


I am excited to spread the message that you can now create SAP Lumira visualization extensions with SAP Web IDE! Recently we released our long-awaited VizPacker plugin for SAP Web IDE, so Lumira chart developers can use this powerful, cloud-based IDE to create cool chart extensions for Lumira. In the next few blogs, I plan to show you how you can easily create Lumira visualization extensions with this new tool. I will start with the simplest Hello World example in this blog to go through the end-to-end process.

 

 

Step 1: Sign up for an SAP HANA Cloud Platform Account to Access SAP Web IDE

 

If you haven't done so, follow this great blog post, SAP Web IDE - Enablement by Jennifer Cha, to gain free access to SAP Web IDE. If you already have access to HANA Cloud Platform (HCP), go directly to the HCP landing page: https://account.hanatrial.ondemand.com/ and click Log On. Note that the Chrome browser is recommended for the following steps.

HCP_Landing_Page.png

After logging on to HCP, you should see the following home page. Click on Subscriptions in the Content menu on the left hand side:

HCP_Home_Page.png

On the Details pane on the right, click on webide in the list of your subscribed HTML5 Applications:

HCP_Subscriptions.png

This will take you to the page with the link to SAP Web IDE:

HCP_WebIDE_Link.png

 

Click on the link, and it opens up a new browser window for SAP Web IDE. This is our new exciting development environment for creating Lumira visualization extensions:

Web_IDES_Home.png

 


Step 2: Add VizPacker Plugin to SAP Web IDE

 

The Lumira VizPacker comes to SAP Web IDE as a plug-in, so in order to use it, we have to add it first. Click on Tools -> External Plugins in the main menu, and you will see a list of available external plugins including VizPacker:

Add_VizPacker_Plugin.png

Select the vizpacker plugin, and click OK. Refresh the browser page to apply the changes, and you should now see VizPacker's quick preview button at the top of the right toolbar.

VizPacker_PreviewButton.png

Now you have finished setting up the development environment for VizPacker.

 

 

Step 3: Create a Visualization Extension Project

 

Now we are ready to create our Lumira visualization extension project. Click on File -> New -> Project from Template on the main menu:

Create_New_Project.png

You are now prompted with the new project wizard. Choose Visualization Extension from the list of project templates, and click Next.

Choose_Project_Template.png

The wizard goes to the next step to set the project name. In our case, let's set it to HelloWorld and click Next.

Set_Project_Name.png

 

Now we are at the step to configure the visualization extension's profile, including its name, ID, version and optional information such as company, description, etc. If you are a seasoned Lumira visualization extension developer who has used VizPacker in the past, this configuration should ring a bell.

Visualization_Extension_Profile.png

Click Next, and you will be brought to the Layout Configuration step. As we are creating a Hello World extension, we will be using the DIV container rather than the default SVG container. Deselect Title and Legend, as we don't need them in this simple extension.

Layout_Configuration.png

Click Next, and we are now taken to the Sample Data page. As our Hello World extension will not be based on any data, we are fine with the default sample data.

Upload_Sample_Data.png

 

Click Next, and we can now set up the measure sets and dimension sets based on the sample data. As we are simply going to output a "Hello World" text in this extension, we will click Next and skip this step. I will go into more detail on this in my future blogs.

 

Configure_Data_Structure.png

 

Click Next, and you will hit the confirmation page.

Confirmation_page.png

Click Finish. Now it shows the project folder structure, and we can now see the familiar render function open by default:

Project_File_Structure.png

 

 

Step 4: Implement the HelloWorld Extension

 

Now all we need to do is to add the JavaScript code to output a "Hello World" message. We will do so by appending a <p> element to the container object, which was passed into the render function as an input variable.

 

Remove the //TODO line in the render function, and add the following lines:

container.selectAll("p").remove(); //First remove any existing <p> element
container.append("p").text("Hello World!"); //Append a new <p> element with the text "Hello World"
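
Putting it together, the complete render function would look roughly like the sketch below. This is only an illustration; the exact skeleton generated by VizPacker (function name, module wrapping, additional parameters) may differ slightly, and the data parameter is unused here because this extension ignores the dataset.

var render = function (data, container) {
  // 'container' is the d3 selection of the extension's DIV container.
  container.selectAll("p").remove();           // First remove any existing <p> element
  container.append("p").text("Hello World!");  // Append a new <p> element with the text "Hello World"
};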

By the way, you can use Edit -> Beautify -> Beautify JavaScript ( Ctrl + Alt + B ) to format your JavaScript code. This could come in handy when you would like to indent your code properly, especially helpful for lengthy code.


Save the file by pressing the Save button in the toolbar, and click Refresh in the preview panel. You should now see the "Hello World" message in the preview pane.

Quick_Preview.png

Alternatively, you can preview your extension by selecting preview.html in the project folder structure on the left and clicking Run in the toolbar. This way, the preview shows up in a separate window.

Preview.png

Now we have successfully created our first Lumira visualization extension with SAP Web IDE.



Step 5: Pack the Visualization Extension and Deploy to Lumira

The next step is to package the extension and deploy to Lumira Desktop or Server. To do that, click on the Pack button on the toolbar:

Pack.png

You will be prompted by the success message, and the package can be found in the Downloads section of Chrome browser.

Download.png

If you are going to upload the package to Lumira Server, the package can be used as is. If you would like to try out the package first in Lumira Desktop, follow the instructions below.


Extract the package, and copy the bundles folder to <YourLumiraInstallationDirectory>\Desktop\extensions. Restart Lumira Desktop for the changes to take effect.

 

Create a Lumira document using any dataset (as our Hello World extension does not really depend on any specific data), and choose the newly deployed Hello World extension. Add a measure to the measure set, and you should see a "Hello World" message in the Visualize room:

Use_in_Lumira_Desktop.png

 

Now you have created, deployed and consumed your first visualization extension in Lumira Desktop, powered by SAP Web IDE :-). Hooray!

 

In the next few blogs, I will go through some more complex examples to create Lumira visualization extensions with SAP Web IDE. Stay tuned!

What's New in SAP Lumira 1.21 (summary)


Hello Lumira enthusiasts!

 

Let me first wish you all a very nice and warm holiday season. Thank you for the tremendous amount of support and inspiration you have shown throughout the year.  We want to send you home to your family, but we also want to sneak one more release of Lumira into your treat bag.  So stay tuned for the release of SAP Lumira 1.21 in the next few days!

 

 

In a nutshell, the following experiences are what the Lumira 1.21 release focused on.

 

1. Intro.PNG

 

 

 

 

Here is a highlight of key enhancements.


2. summary.PNG

 

 

Samples are a great way to get you started on Lumira immediately.  With the number of recent releases and new features, we have included some new and updated samples with Lumira Desktop.  We have also added more online samples for line of business.

 

3. samples.PNG

 

 

Lumira allows you to build powerful stories and infographics using multiple visualizations.  With Lumira 1.21, it is much easier to explore a visualization that you want to focus on from the story view. Just click on the Explore icon to bring the visualization to the foreground and perform further analysis. If you are not the author of the story and the story was shared with you with view rights, no biggie. That does not stop you from exploring, sorting and ranking differently.  If you want to save the story with these changes applied as the default view and you do not have edit rights, simply make your own copy of the story via 'Save as'.  The typical use of this exploration feature is the viewing mode on Lumira Server and Lumira Cloud, but you can experience the same benefit in the Lumira Desktop Compose room in preview mode.

 

4. explore.PNG

 

Expanding from the above, you can also drill down or change filters.  Yes, I said it. You can drill down from the story by going into the Explore mode of the visualization.

 

5. drill.PNG

 

We have also updated and added more big data drivers.

 

6. big data.PNG

 

 

With Lumira Desktop 1.21 you can now use single sign-on to Microsoft SQL Server 2008.  Please refer to the Lumira Installation Guide, as there are some configuration steps required.  With the enablement of Kerberos SSO, we can now easily add SSO for other data sources in the future as incremental updates.

 

7. SSO.PNG

 

 

Lumira 1.21 now supports the latest and greatest trusted platforms.  You will note that we have increased the number of supported HANA revisions.  HANA SP09 support is coming in the next release, and we plan to support both SP08 and SP09 at the same time in the future. So some progress made, more on the way! Please refer to the PAM (Product Availability Matrix) for more details. Also please note that there are a couple of changes to installing Lumira Server, mainly around the components that the Lumira Server deployment depends on. Please consult the Lumira Server installation guide.

 

The other cool enhancement is that there is now much less setup required to govern Lumira Desktop from the CMC, starting with BI 4.1 SP5.  Whereas in the past you needed the full Lumira add-on setup, which also required Lumira Server and HANA, for the purpose of simply governing Lumira Desktop use (e.g. end-user access to data sources, publishing location, auto updates) you now only need BI 4.1 SP5 and Lumira Desktop.  This is our first step towards deployment simplification. More simplification is planned in the next releases.

 

 

8. governance.PNG

 

 

If you are not familiar with the Lumira Desktop governance capability, please check out this great blog from Greg Wcislo.


As usual there are other small enhancements, usability updates and performance improvements.  The best thing about finishing off the 1.21 blog is that I get to play with 1.22. We have another release planned early in the new year, and knowing some of the new capabilities on the way, EXCITEMENT GUARANTEED!

 

Happy Holidays and New Year to you all!

 

Sharon

Data Geek III - Analyzing Accidents Data


This blog is part of DataGeek III under House of Spirits - Caring for social good. For this, the data in our infographic and storyboard is in OData form, published by BigML on the Windows Azure Marketplace, where it is available for free. We have also used other CSV data sets in conjunction with this data set to arrive at various results. Following is a brief account of what we have done.

 

We made new features from the existing data set, such as the month, population and population density, to find out how these variables affect fatalities. All the exploratory analyses were done in SAP PA in conjunction with R. We cleaned the data in SAP Predictive Analysis.

 

In each of the past 5 years, more than 30,000 people were victims of road accidents in the USA alone. This is a huge loss in terms of lives and billions of dollars. So, we ask some questions to analyze and prevent such man-made disasters and save lives by understanding some underlying relationships.

We used this to answer a few questions and find a few interesting patterns regarding accidents that occurred in the USA.

 

Questions that we answered:

  1. What are the major factors that influence road accidents?
  2. How much can human behavior like alcohol consumption and drug intake affect driving skills?
  3. Which roads and states have seen the most accidents?
  4. What can you do to reduce chances of an accident?

 

We used a few statistical tests which helped us arrive at various conclusions, such as...

Chi-squared test:

The chi-squared test was used to see if there is any significant association between two categorical factors. Such associated factors help in predicting outcomes more accurately. The test returns three values (the chi-squared statistic, the p-value and the degrees of freedom). The chi-squared statistic compares how often the two variables occur together against what would be expected if they were independent; a high value indicates a strong association. The p-value is the probability used to decide whether to reject the null hypothesis of independence; a value lower than 0.05 is generally a strong indicator to reject it.
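
For reference, the chi-squared statistic itself has the standard textbook form (nothing specific to SAP Predictive Analysis):

\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}

where $O_i$ are the observed cell frequencies of the contingency table and $E_i$ are the frequencies expected if the two factors were independent; for an $r \times c$ table the statistic has $(r-1)(c-1)$ degrees of freedom.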

 

The R-CNR Decision Tree:

A model in which every decision is based on a simple comparison made in constant time is called a decision tree model. Given a set of variables, it can predict the probability of an outcome (like a fatal crash). We built such a model, the R-CNR Tree, in SAP Predictive Analysis, taking into consideration factors such as blood alcohol level, type of road, age of the driver, etc. Hence, we can estimate beforehand the chances of a crash given a set of variables, which may help prevent an accident.

 

 

Here are a few screenshots of our analysis:

 

A few interesting stats that we found about accidents:

1.PNG

More stats..

8.PNG

 

How does it stack across gender?

2.PNG

How does it stack across various cities?

5.PNG

 

Month wise split of accidents..

6.PNG

More facts on month..

7.PNG

 

Who is more likely to face a fatal accident?

3.PNG

Major reasons causing accidents:

4.PNG

Data Geek III - Analyzing Games Of Thrones Data for the GoT Challenge



rsz_gameofthrones_logo.png


Note : I know I am too late for the competition, and unfortunately I have no excuse.

I will publish the blog anyway, in case some of the tips presented here are useful for someone… I'll just try to buy the T-shirt on eBay afterwards ;-)

 

This blog is meant as a tutorial / demonstration: nothing incredible, but all the main steps are described so that you can reproduce everything (all in Lumira, except the last step in Predictive Analysis).


Also, I wanted to highlight how easily you can create a meaningful analysis with a very simple dataset.

 

 

After some research, I found a very nice blog by Jordan Schermerhorn, with a free GoT data set, mainly extracted from Wikis. Jordan has already done some very interesting analysis of the data, so the goal was to find new ways to use the existing dataset.

 

Please go to the original blog to have more information:

 

http://jordanschermer.wordpress.com/2014/08/06/valar-morghulis/

 

So here is a sample of the data you can download (the file contains 366 characters): This blog will try to give you all the steps, so if you download the original file, you can use this as a tutorial

 

1.png

(The “affiliation” dimension is a regrouping of characters, similar to an extended family. )

Now let’s start (and if you have not read/seen GoT, beware as this blog may contain Spoilers!)

 

 

Step 1 : data preparation

  • Modification of the default configurations of the Dimensions and Measures:
    select the element on the left side of the screen (the "object picker") and go to the "display formatting" and "change aggregation" options
  • Creation of a CHARACTER measure to count occurrences (click on CHARACTER and choose "create calculated dimension")
  • Removal of the unnecessary elements

 

 

Before                                                                                               After

 

 

 

2.png3.png

 

   

  • In order to analyze the characters by group, we already have the "affiliation".
    But it would be interesting to extract the family name from the character name, in order to have a real "family" dimension.

    You just need to create a calculated dimension and enter the magic formula:
    if Contain({Character}, "Frey") then "Frey" else if Contain({Character}, "Stark") then "Stark" else if Contain({Character}, "Lannister") then "Lannister" else if … (you've got the idea, I think…) … else "none"

 

  • In order to classify the characters by age, we can also create Age Groups.
    Select the Age column, and then on the right-hand side (the "Data Manipulation" panel), select "group by selection".
    The screen is very easy to use: sort the "values" column ascending, select the values with the mouse (shift+click for a range), select "Add" and give a name to the group. Then proceed with all the remaining groups by selecting "new Group".
    Very convenient: the "Wiser Adults" group is simply created last with the "Group Remaining Values as" field.
    Note: I just made up the Age Groups, please don't take the range values too seriously….

4.png         

Step 2 : Visualize

I will just highlight some Tips about this part.

  • First let’s compare the Age Group distribution amongst the families. We can create a Stacked Column Chart, and we get this :

 

 

 

5.png

Not bad, but the Age Groups are not sorted in the right sequence…..and I did not find a way to force the display order of the values (Child -> Teen -> Young Adult…)

So I just went back to the Prepare Room, selected the Age Groups, and did a "replace" for each value: Child becomes 1Child, Teen becomes 2Teen….
It took me less than a minute. I went back to the Visualize Room, and now I can see that the Starks are indeed much younger than the Lannisters (SPOILER: maybe because they often die young…).



6.png                                                                                                           

  • Then let’s see if the GoT world can compare to the real world: in which age categories do you have more dead characters than living ones.
    you just need to create a simple Line Chart, with the Number of Characters measure, by Age Group, and take Dead as a legend (dead=1 meaning….you’re dead).
    The result seems surprisingly realistic, considering the amount of murders in the books: the older you get, the more chances are that you are dead…

 

 

7.png

 

  • Lastly, let’s try just one algorithm, as I have the SAP Predictive analysis tool.
    I will use the InfiniteInsight Classification algorithm on my data directly (no preparation step). Here are the parameters used :

8.png

And the result is quite interesting: we saw just before that the death rate seemed linked to age, as expected… but what we see here is that in fact the main factor for death is the family, with age being only the second factor!

 

9.png

 

 

Now it would be interesting to go back and analyze this, but I’ll let you continue: your turn !

 

And remember: “Winter Is Coming” !

 

Eric


How Lumira Stacks up Against the Competition Ask SAP Call Notes– Part 1


These are the notes as I heard them – feel free to comment below if you heard differently.  I won't repeat any of the slides already covered in Roadmap Discussion: What is next for SAP BusinessObjects BI - Notes.

 

Part 2 is here: Lumira - Today, Tomorrow, Part 2 of askSAP Call Notes

 

SAP’s Jayne Landry said that only 10% of users have access to the analytics they need.

1fig.png

Figure 1: Source: SAP

 

Over 37% on the call said they were currently using Lumira 1.20

2fig.png

Figure 2: Source: SAP

 

Figure 2 shows attendee response as to whether Lumira meets their visualization needs.  Most said it meets “some of them”.

 

How is SAP Lumira Different

3fig.png

Figure 3: Source: SAP

 

Figure 3 was the agenda for Lumira

4fig.png

Figure 4: Source: SAP

 

Data wrangling is the “process of converting raw data into more convenient format for analysis/consumption” per SAP

Combine sources and transform the data, as the data may not be in the view you need.

5fig.png

Figure 5: Source: SAP

 

Visualizations “our brains are wired for visualizations”

6fig.png

Figure 6: Source: SAP

 

For visual discoveries you can use the “lightbulb” to show related visualizations

7fig.png

Figure 7: Source: SAP

 

On Figure 7 the Forrester agile visualization report was referenced - see it here

8fig.png

Figure 8: Source: SAP

 

Cindi Howson said it was “interesting but not surprising that integration w BI platform most important” as shown in the Figure 8 poll of the audience

 

Meeting with Customers

9fig.png

Figure 9: Source: SAP

 

Independent dealers don’t have IT people on staff

"Best tool used in last 10 years"

 

 

 

 

See this Daimler Truck video:

 

10fig.png

Figure 10: Source: SAP

 

 

Swisscom chose SAP Lumira for infographics, ESRI map integration and multiple data sets


Questions:

 

Q: Predictive and Lumira – SAP has Lumira, Predictive, InfiniteInsight

A: Predictive & InfiniteInsight will be merging

 

Q: What if you don’t want to run desktop version?

A: Can run Lumira Server or Cloud

 

Q: Trusted Data Discovery, Predictive  - HANA has, with ETL – does Lumira plan to mesh into that?

A: 2 platforms – in HANA app – run natively on HANA

Also on BI Platform with auditing, trusted data discovery

Lumira - Today, Tomorrow, Part 2 of askSAP Call Notes


Part 1 is here How Lumira Stacks up Against the Competition Ask SAP Call Notes– Part 1

 

SAP's Ty Miller led this part of the webcast, with questions from John Appleby.

1fig.png

 

Figure 1: Source: SAP

 

Figure 1 shows the 1.20 Lumira features, including localized languages

 

One click toolbar options

 

Numbers are treated differently

 

Conditional formatting – analysis by exception

 

Hyperlinks on boards

 

Refresh infographics is a hit

 

"There a few #DataViz extensions already in SAP Lumira GitHub community" per SAP - see https://github.com/SAP/lumira-viz-library

 

Ty’s favorite is the extension to Google spreadsheets

 

2fig.png

Figure 2: Source: SAP

 

Figure 2 shows the next release of Lumira 1.21, which SAP expects to be released tomorrow

 

Cloud, on-premise, server, desktop

 

John asked "The agile market is fast; how is SAP keeping up?"  Ty's answer:

  • Design councils, CEI, workshops, strategic advisory councils, influence councils, democratic platform is Idea Place
  • We’ll see Lumira on Idea Place soon, per SAP

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows the "first server release of Lumira on commodity hardware"

 

Lumira Edge edition beta launches tomorrow - install in 15 minutes on commodity hardware for small teams. SAP Lumira, Edge edition will have an in-memory engine but doesn't require HANA.

 

Details are here: SAP Lumira, Edge edition: What Is It?

 

Runs on Windows Server or Windows 8

 

Only 4-8 GB of RAM

4fig.png

Figure 4: Source: SAP

 

Figure 4 shows what SAP is working on for 2015 – Lumira Edge

 

A velocity engine was mentioned

 

Renewed integration of Lumira Server with the BI Platform, or Lumira Server with HANA

 

5fig.png

Figure 5: Source: SAP

 

Figure 5 shows Lumira today.

 

11fig.PNG

Figure 6: Source: SAP

 

Figure 6 shows 2015 plans for Lumira by "persona".

10fig.PNG

Figure 7: Source: SAP

 

Figure 7 shows more 2015 Lumira plans

 

Finally another poll shows that over 78% plan to use Lumira in the next 3 months:

12fig.PNG

 

Source: SAP

 

Related:

Call for Speakers: Abstracts are Now Being Accepted for ASUG Annual Conference 2015

BI 2015

Troubleshooting SAP Lumira Desktop data acquisition from SAP BusinessObjects Universes


There's an anecdote I heard where someone asked Miles Davis his definition of Cool: "Give the essence. Withhold everything else."

 

It might seem quixotic to start off a blog about troubleshooting techniques for data discovery and reporting tools with a quote from one of the greatest jazz musicians ever, but, hey, I work with a couple of technologies that are pretty cool. I support SAP Business Intelligence and Analysis tools, primarily Web Intelligence, and now SAP Lumira.

 

Web Intelligence is cool since it uses a Semantic Layer - the Universe - to represent the data source to the user.  The Universe shields users from the complexity of the underlying implementation and presents data objects in an intuitive business language of dimensions, details and measures. The Semantic Layer gives the essence, allowing the WebI designer to hit the perfect note - the chart/graph/table that clarifies and illuminates the meaning of the data.

 

If WebI hits the perfect note, then Lumira is a melody - a series of notes that carry a narrative, providing context and flow for the presented information.  Lumira tells a story.

 

Now, you don't have to use Universes with Lumira.  Lumira supports a variety of data sources. But there are compelling reasons to continue using Universes if you have a large investment in SAP BusinessObjects.  Combine Universes and Lumira, and you have something that's powerful - a story that tells the essence.  Cool jazz.

 

How Lumira Desktop consumes a Universe

 

Lumira Desktop consumes Universes that are managed by SAP BusinessObjects BI Platform 4.1.  At the time of this blog, it's not able to consume stand-alone Universes outside of the Platform. This is because Lumira uses the Web Intelligence Processing Server to consume data from the Universe - the Web Intelligence Processing Server that's part of BI Platform.

 

Here's a highly idealized logical architectural view below.  Lumira uses the BI Platform SDK to communicate with the Platform Central Management Server (CMS), and the WebI SDK to communicate with the WebI Processing Server:

Lumira_BOBJ_Diagram.JPG

Lumira first requests from the Platform Central Management Server (CMS) a list of Universes housed on the system.  Once the User selects one, Lumira then requests the WebI Processing Server to create a virtual Web Intelligence document for the Universe, then presents a Query Panel to the User.  The Query Panel is analogous to the Query Panel available with the WebI design tool. The User can then select dimensions, details and measures as result objects, and specify filtering along dimension values. 


Proceeding to the next step, Lumira will request the WebI Processing Server to process the query and return the data.  WebI Processing Server sends the query to the DSL Bridge Service, housed in an Adaptive Processing Server, to transform the query into a data-source-specific SQL statement. WebI Processing Server then executes the SQL via a database driver, retrieves the data from the RDBMs, marshals the data in XML format, and returns it to Lumira.


If the WebI Processing Server encounters any errors, it returns the error message back to Lumira.


Unfortunately, in many situations, the error message returned to Lumira, and which Lumira presents to the User, can be quite cryptic.  For example, here's an error message I encountered the other day:

 

Lumira_Desktop_New_Dataset_csEX.JPG

The error message in its entirety says "Cannot load selected universe: csEX".

 

That's not really actionable - it doesn't inform the User, me, what happened to cause that error, and doesn't guide the User on how this error can be resolved.

 

There are other messages that can be just as opaque.  One such error is "ERR_WIS_30270 (WIS 30270)" - it's basically the WebI Processing Server stating that an error was returned when it requested something from another service.  It doesn't state what the triggering error was, just that it wasn't the WebI Processing Server's fault.

 

There is, however, a way to find out the root cause error message. When you open a Support case with such an error, one of the first things the Support person (who can be me) will likely ask you to do is enable and collect traces for Lumira, WebI Processing Server, and the APS.

 

So you might as well collect the traces before you even open a Support ticket, since having the traces ready will lead to faster resolution.  In fact, if you take a bit of time and go through the traces yourself, you might be able to resolve the issue without the help of SAP Support! Not that I don't enjoy working with customers, but whatever gets your Lumira Desktop up and running is all good.

 

Tracing Lumira Desktop and BI Platform

 

So let's walk through a trace exercise. So that you'll be able to follow along and perform the steps on your own deployment of Lumira Desktop (here I'm using 1.20) and BI Platform (my version is BI 4.1 SP04 Patch 2), let's use a UNX Universe created from the eFashion UNV Universe.  Start the Information Design Tool, log onto BI Platform, and convert the eFashion Universe you find in the "webi universes" folder into an eFashion.unx Universe.

 

To ensure the eFashion.unx is working (the backend datasource is MS Access), go to the WebI Java Report Panel, open the Universe and create a query:

 

WebI_Query_Panel.JPG

Here we select the dimensions "Year", "State", "City" and "Store name", and the measure "Sales revenue".  A filter is defined where "State" is restricted to "California" and "New York".  Remember this, since you're going to be selecting the same objects and filter in Lumira next.  Run this query to ensure you're getting data:

 

WebI_Report.JPG

The key here is, if you encounter any issues consuming an Universe in Lumira, first test the same Universe with the same query in Web Intelligence on the same server, using the Web Intelligence Applet interface in BI launch pad. If it doesn't work in WebI, it won't work in Lumira.

 

Now that we have a working Universe, let's first set up and enable tracing.  On the machine where you're running Lumira Desktop, follow SAP KBase 1782007 to modify the file BO_Trace.ini, found by default at C:\Program Files\SAP Lumira\Desktop.  My file contains:


active = true;     

importance = xs;  

alert = true;     

severity = assert;

size = 10000;

keep = true;

log_dir = "C:/LumiraLogs/logs";

log_level=high;

log_ext = "log";

 

where I created the folder C:/LumiraLogs/logs to hold my Lumira trace files.  When you start up Lumira Desktop, you'll see this folder fill with trace files TraceLog_<PID>_<Timestamp>_trace.log:

 

Lumira_Desktop_Start_Logs.JPG

Now set up tracing for the WebI Processing Server and APS by logging onto the CMC:

 

CMC_Trace_High.JPG

Go to the Servers page, then open the "Service Categories" node and select "Web Intelligence Services".  Control-click to select the WebI Processing Servers and APS, right-click to "Edit Common Services", then set the TraceLog Service Log level to "High".  You don't need to restart the services to have the setting take effect.

 

Now consume eFashion.unx in Lumira Desktop.  Create a New Dataset, select Universes:

 

Lumira_Desktop_New_Dataset.JPG

choose the eFashion.unx Universe:

 

Lumira_Desktop_New_Dataset_eFashion.JPG

and define the query as we did for Web Intelligence.  Select the same Result objects:

Lumira_Desktop_New_Dataset_Result_Objects.JPG

where "Preview and select data" is enabled, then define the same filter:

Lumira_Desktop_New_Dataset_Filter.JPG

then check out the Preview, comparing to what you saw in Web Intelligence:

Lumira_Desktop_New_Dataset_Preview.JPG

 

and finally go forward to check the Prepare stage:

 

Lumira_Desktop_Prepare.JPG

Things look good here - there are no errors, so you won't be looking for errors in the traces.  It's good to start off reading traces from a successful run, since that will give you an idea of the complete workflow Lumira uses to consume Universe data sources.

 

Collect the log files - take copies and move the copies to a folder where they won't get overwritten or deleted.  Collect the TraceLog_*.log files from the Lumira log folder.  Also, log onto the server machine housing your BI Platform deployment, and collect the WebI Processing Server and APS trace files, found on Windows deployments by default in the folder C:/Program Files (x86)/SAP BusinessObjects/SAP BusinessObjects Enterprise BI 4.0/logging.  Collect all files that have been updated since the time you started running Lumira.

 

Analyzing Lumira Traces

 

Now comes the fun part - analyzing what's happening inside Lumira when it consumes Universe data.  Open the Lumira TraceLog_*.log file using a text editor.  Personally, I use Notepad++ - it's quite popular and feature-rich, and we'll be using some of its features to analyze the log.

 

You'll see that the log is fine-grained and very detailed.  What is of interest here, and our focus, is specifically how Lumira interacts with BI Platform, and nothing else.  We can identify this interaction in the log by searching for specific keywords.  Whenever Lumira uses the BIPSDK or WebI SDK to communicate with Platform, it records a log entry with text starting "START OUTGOING CALL", and when the communication is done, a log entry with text starting "END OUTGOING CALL".

 

So in Notepad++, I go to Search -> Find... and "Find All in Current Document" the text "OUTGOING CALL" to observe:

 

Lumira_Trace_OUTGOING_CALL.JPG

 

You can see that Lumira makes a lot of calls to get the data.  In Notepad++, if you click a line in the search result, the file view centers on that line.  Try clicking on a "START OUTGOING CALL", then an "END OUTGOING CALL".  You'll note that only a few lines separate "START" from "END" in each call - in that interval Lumira is merely waiting for a response from Platform to the request it has sent.  If you look before (above) a "START OUTGOING" line, you'll see Lumira preparing the 'message' it's going to send to Platform.  If you look after (below) an "END OUTGOING" line, you'll see Lumira retrieving, parsing and working on the information returned from Platform.

 

To get a cleaner view of all the communication that's happening, search on "END OUTGOING CALL" to see:

 

Lumira_Trace_END_OUTGOING_CALL.JPG

 

each "END OUTGOING CALL" line is very informative.  It describes whether the BIPSDK or WebI SDK channel was used, which BI Platform server the communication went to, and even the service request call made.

 

I won't describe each and every service call, but here are the highlights:

 

CentralManagementServer.newService - request for a Service from the CMS

CentralManagementServer.queryEx3 - request information from the CMS repository database

WebIntelligenceProcessingServer.createSession - create a new Web Intelligence Session

WebIntelligenceProcessingServer.createDocumentEx - create a new Web Intelligence document

WebIntelligenceProcessingServer.processDPCommandsEx - send a data provider request - i.e., create/modify/refresh a Universe query

WebIntelligenceProcessingServer.getDPResultsEx - retrieve where the data from a refreshed query can be retrieved

WebIntelligenceProcessingServer.getBlob - get a large binary object that contains the data

WebIntelligenceProcessingServer.closeInstanceEx - close the instance of a WebI session

 

For today's exercise, we'll focus on understanding the processDPCommandsEx, getDPResultsEx, and getBlob request calls - they're the ones doing the heavy lifting of creating the query, sending the query to the WebI Processing Server, and then retrieving the data as a Blob (binary large object) that contains the data in XML format. The call order is typically a whole bunch of processDPCommandsEx calls, followed by a couple of getDPResultsEx calls, then finally a few getBlob calls to get the data.

 

WebIntelligenceProcessingServer.processDPCommandsEx

 

Each time you create/modify/update/refresh a query, a processDPCommandsEx call is made by Lumira to the WebI Processing Server.  The particulars of a DPCOMMANDS request are structured as XML sent to the WebI Processing Server.  You can find this XML by searching for the line "processDPCommandsEx command", which appears in the log just before the "START OUTGOING CALL" for the call itself.  After the "END OUTGOING CALL", you can read the response from the WebI Processing Server by searching for the line "processDPCommandsEx returns".

 

Here's an example of a command and response that requests an addition of a new query:

 

|processDPCommandsEx command

|<DPCOMMANDS><DPCOMMAND CID="0" CNAME="adddp"><DP><DS DSTYPE="DSL" /></DP><OUTPUTMODE><OUTPUTITEM OID="0" OTYPE="dplist" /><OUTPUTITEM OID="1" OTYPE="datasourceid" DPID="last" /></OUTPUTMODE></DPCOMMAND></DPCOMMANDS>

...

|TraceLog.Context: context var [ActionID]='Cuntkt3NC0wtk2YQsWNlaTI46'

...

|START OUTGOING CALL Outgoing: FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:37.75:1]

...

|END OUTGOING CALL Outgoing: SPENT [00.855] FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:37.75:1] TO [webiserver_BIPW08R2.WebIntelligenceProcessingServer.processDPCommandsEx#localhost:7004:8804.1499160:1]

...

|processDPCommandsEx returns

|<?xml version="1.0" encoding="UTF-8"?>

<METHODRETURN SERVER_VERSION="RELEASE[psbuild@VCVMWIN214] on Aug 13 2014 15:51:02" NAME="processDPCommandsEx">

   <ERRORS/>

   <DOCTOKEN VALUE="P0pi7004KT186{BIPW08R2}{BIPW08R2.WebIntelligenceProcessingServer}{AQnzqXNWe6pLvL8_GuAC7nE}" TYPE="persistent.new" WIDID="0"/>

   <PROTOCOLS>

     <PROTOCOL GLOBALVERSION="1.0.0"/>

   </PROTOCOLS>

   <METADATA/>

   <RETURNDATA>

     <OUTPUTS CID="0" CNAME="adddp">

       <OUTPUT OID="0" OTYPE="dplist">

         <DPLIST>

           <DP DPID="DP0" DPNAME="Query 1" DPORDER="0" DPTYPE="DSL"/>

         </DPLIST>

       </OUTPUT>

       <OUTPUT OID="1" OTYPE="datasourceid" DPID="DP0">

         <DP DPID="DP0" DPNAME="Query 1" DPTYPE="DSL">

           <DS DSID="&lt;datasource:DataSourceIdentifier xmi:version=&quot;2.0&quot; xmlns:xmi=&quot;http://www.omg.org/XMI" xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance" xmlns:datasource=&quot;http://com.sap.sl.datasource" xmlns:datasourceinfo=&quot;http://com.sap.sl.datasourceinfo" xmlns:info=&quot;http://com.sap.sl.repository.item.info"><dataSourceInfo xsi:type=&quot;datasourceinfo:DslUniverseDataSourceInfo&quot; dataSourceType=&quot;DSL&quot; caption=&quot;[BIPW08R2:6400]&quot; universeMetaVersion=&quot;dsl&quot;&gt;&lt;resourceInfo xsi:type=&quot;info:CmsItemInfo&quot; cuid=&quot;Aauu.TeuZUxPro7hBzVFKEs&quot; path=&quot;BIPW08R2:6400&quot;/&gt;&lt;/dataSourceInfo&gt;&lt;/datasource:DataSourceIdentifier&gt;" DSKEY="DS0" NAME="eFashion" LONGNAME="eFashion" DOMAINID="0" TYPE="DSL" CATALOG="DSL" OLAP="false" MAXOPERANDSFORINLIST="-1" ISMULTICONTEXTSELECTION="false" ALLOWCOMBINEDQUERIES="false" ALLOWSUBQUERY="false" ALLOWRANK="false" ALLOWPERCENTRANK="false" ALLOWSAMPLINGMODE="nosampling" ALLOWCALCULATION="false" ALLOWBOTHEXCEPT="false" ALLOWOPTIMIZATION="true" />

         </DP>

       </OUTPUT>

     </OUTPUTS>

  </RETURNDATA>

</METHODRETURN>

 

I've edited out much of the log file content for this call, to focus on the outgoing DPCOMMANDS requesting the "adddp" command to create a new Data Provider for the query, the outgoing call itself, and then the return for the request, which specifies parameters for the newly-created Data Provider.  You'll see that every processDPCommandsEx call follows the same pattern.

 

The most interesting DPCOMMANDS is the "refreshbatch" command, which requests the WebI Processing Server to generate a SQL statement from the query and use it to retrieve data from the data source.  Here are highlights from the trace file for a single refreshbatch request:

 

|TraceLog.Context: context var [ActionID]='Cuntkt3NC0wtk2YQsWNlaTIa9'

...

|processDPCommandsEx command

|<DPCOMMANDS><DPCOMMAND CID="0" CNAME="refreshbatch"><DP DPID="DP0" DPTYPE="DSL" /><OUTPUTMODE><OUTPUTITEM OID="0" OTYPE="documentdict" /><OUTPUTITEM OID="1" OTYPE="compatibleobjects" /><OUTPUTITEM OID="2" OTYPE="properties" /><OUTPUTITEM OID="3" OTYPE="dpparams" /><OUTPUTITEM OID="4" OTYPE="alteredhierarchies" /></OUTPUTMODE></DPCOMMAND></DPCOMMANDS>

...

|START OUTGOING CALL Outgoing: FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:38.272:1]...

...

|END OUTGOING CALL Outgoing: SPENT [01.285] FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:38.272:1] TO [webiserver_BIPW08R2.WebIntelligenceProcessingServer.processDPCommandsEx#localhost:7004:5484.1499462:1]

...

|processDPCommandsEx returns

|<?xml version="1.0" encoding="UTF-8"?>

<METHODRETURN SERVER_VERSION="RELEASE[psbuild@VCVMWIN214] on Aug 13 2014 15:51:02" NAME="processDPCommandsEx">

   <ERRORS/>

   <DOCTOKEN VALUE="P0pi7004KT214{BIPW08R2}{BIPW08R2.WebIntelligenceProcessingServer}{AQnzqXNWe6pLvL8_GuAC7nE}" TYPE="persistent.new" WIDID="0"/>

   <PROTOCOLS>

     <PROTOCOL GLOBALVERSION="1.0.0"/>

   </PROTOCOLS>

   <METADATA/>

   <RETURNDATA>

     <OUTPUTS CID="0" CNAME="refreshbatch">

       <OUTPUT OID="0" OTYPE="documentdict">

         <WAREHOUSE CAPTION="" SNAME="">

           <DATASOURCE CAPTION="" SNAME="">

             <FOLDER CAPTION="Variables" SNAME=""/>

             <FOLDER CAPTION="Formulas" SNAME=""/>

             <FOLDER CAPTION="GroupingVariables" SNAME=""/>

           </DATASOURCE>

           <DATASOURCE CAPTION="Query 1 - eFashion" SNAME="[Query 1]" KEY="DP0">

             <BODIMENSION CAPTION="City" SNAME="[City]" KEY="DP0.DOa" UUID="OBJ_166" SRCKEY="DS0.DOa" TYPE="string" FORMAT="" HELP="City located."/>

             <BODIMENSION CAPTION="State" SNAME="[State]" KEY="DP0.DO9" UUID="OBJ_218" SRCKEY="DS0.DO9" TYPE="string" FORMAT="" HELP="State located."/>

             <BODIMENSION CAPTION="Store name" SNAME="[Store name]" KEY="DP0.DOb" UUID="OBJ_376" SRCKEY="DS0.DOb" TYPE="string" FORMAT="" HELP="Name of store."/>

             <BODIMENSION CAPTION="Year" SNAME="[Year]" KEY="DP0.DO1" UUID="OBJ_188" SRCKEY="DS0.DO1" TYPE="string" FORMAT="" HELP="Year 2003 - 2006."/>

             <MEASURE CAPTION="Sales revenue" SNAME="[Sales revenue]" KEY="DP0.DO26" UUID="OBJ_147" SRCKEY="DS0.DO26" TYPE="number" FORMAT="T[Err]T[FS]T[Err]T[FS]T[FS]T[Err]" FORMAT_SAMPLE="#FORMAT; #FORMAT" HELP="Sales revenue $ - $ revenue of SKU sold" IS_AGGREGATED_OBJECT="Y" AGG_FUNC="Sum"/>

           </DATASOURCE>

         </WAREHOUSE>

       </OUTPUT>

     <OUTPUT OID="1" OTYPE="compatibleobjects">

       <DP DPID="DP0" DPNAME="Query 1" DPTYPE="DSL">

         <COMPATIBLEOBJECTS>

           <FLOWLIST>

             <OBJECT KEY="DP0.DO1" />

             <OBJECT KEY="DP0.DO9" />

             <OBJECT KEY="DP0.DOa" />

             <OBJECT KEY="DP0.DOb" />

             <OBJECT KEY="DP0.DO26" />

           </FLOWLIST>

         </COMPATIBLEOBJECTS>

         <STRIPPEDOBJECTS></STRIPPEDOBJECTS>

       </DP>

     </OUTPUT>

     <OUTPUT OID="2" OTYPE="properties">

       <DOCUMENTPROPERTIES>

         <DOCUMENTDPLIST>

           <DOCUMENTDP DPID="DP0" TYPE="DSL">

             <DOCUMENTDPINFO NAME="name" VALUE="Query 1"/>

             <DOCUMENTDPINFO NAME="source" VALUE="eFashion"/>

             <DOCUMENTDPINFO NAME="iseditable" VALUE="true"/>

             <DOCUMENTDPINFO NAME="isrefreshable" VALUE="true"/>

             <DOCUMENTDPINFO NAME="lastrefreshdate" VALUE="December 10, 2014 2:02:05 PM GMT-08:00"/>

             <DOCUMENTDPINFO NAME="lastrefreshtime" VALUE="1418248925"/>

             <DOCUMENTDPINFO NAME="lastrowcount" VALUE="12"/>

             <DOCUMENTDPINFO NAME="lastrefreshduration" VALUE="1"/>

             <DOCUMENTDPINFO NAME="ispartiallyrefreshed" VALUE="false"/>

             <DOCUMENTDPINFO NAME="samplingresultmode" VALUE="nosampling"/>

           </DOCUMENTDP>

         </DOCUMENTDPLIST>

       </DOCUMENTPROPERTIES>

     </OUTPUT>

     <OUTPUT OID="3" OTYPE="dpparams">

       <DP DPID="DP0" DPNAME="Query 1" DPTYPE="DSL">

         <PARAMS></PARAMS>

       </DP>

     </OUTPUT>

     <OUTPUT OID="4" OTYPE="alteredhierarchies">

       <HIERARCHIES></HIERARCHIES>

     </OUTPUT>

   </OUTPUTS>

</RETURNDATA>

</METHODRETURN>

 

 

You can see that the XML returned from a "refreshbatch" command gives a very detailed view of the objects returned by the query, and the configuration parameters associated with running the query.

 

 

WebIntelligenceProcessingServer.getDPResultsEx


The getDPResultsEx call is made after a processDPCommandsEx "refreshbatch" request, to ask the WebI Processing Server which Blob object to request in order to retrieve the data gathered by the "refreshbatch".  Here are highlights of a single getDPResultsEx call in the logs:

 

|TraceLog.Context: context var [ActionID]='Cuntkt3NC0wtk2YQsWNlaTIa2'

...

|START OUTGOING CALL Outgoing: FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:38.257:1]

...

|END OUTGOING CALL Outgoing: SPENT [00.037] FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:38.257:1] TO [webiserver_BIPW08R2.WebIntelligenceProcessingServer.getDPResultsEx#localhost:7004:12768.1499455:1]

...

|execute returns
<?xml version="1.0" encoding="UTF-8"?>

<METHODRETURN SERVER_VERSION="RELEASE[psbuild@VCVMWIN214] on Aug 13 2014 15:51:02" NAME="getDPResultsEx">

   <ERRORS/>

   <DOCTOKEN VALUE="P0pi7004KT212{BIPW08R2}{BIPW08R2.WebIntelligenceProcessingServer}{AQnzqXNWe6pLvL8_GuAC7nE}" TYPE="persistent.current" WIDID="0"/>

   <PROTOCOLS>

     <PROTOCOL GLOBALVERSION="1.0.0"/>

   </PROTOCOLS>

   <METADATA/>

   <RETURNDATA>

     <BLOB KEY="TMP*23*15" FILE="Blob15.xml" />

     <BLOBMANAGER ROOT="C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\Data\BIPW08R2_6400\BIPW08R2.WebIntelligenceProcessingServer\sessions\_AQnzqXNWe6pLvL8_GuAC7nE\BRep\23/" />

   </RETURNDATA>

</METHODRETURN>

 

The returned XML references the file in the cache where the WebI Processing Server stored the data it retrieved from the database by running the query.  If you log onto the server machine where the WebI Processing Server is running and open the Blob15.xml file in a text editor, this is what you'll see:

 

<DP_RESULTS>

  <DATAPROVIDER ID="DP0">

    <FLOWDATA  ID="0">

      <COLUMNS>

        <COLUMN DSID="DS0.DO1" DPID="DP0.DO1" FORMAT="" FIELDNAME="F1" NAME="Year" TYPE="character" />

        <COLUMN DSID="DS0.DO9" DPID="DP0.DO9" FORMAT="" FIELDNAME="F2" NAME="State" TYPE="character" />

        <COLUMN DSID="DS0.DOa" DPID="DP0.DOa" FORMAT="" FIELDNAME="F3" NAME="City" TYPE="character" />

        <COLUMN DSID="DS0.DOb" DPID="DP0.DOb" FORMAT="" FIELDNAME="F4" NAME="Store name" TYPE="character" />

        <COLUMN DSID="DS0.DO26" DPID="DP0.DO26" FORMAT="T[Err]T[FS]T[Err]T[FS]T[FS]T[Err]" FIELDNAME="F5" NAME="Sales revenue" TYPE="number" />

      </COLUMNS>

      <ROWS STARTINDEX="0" ORDER="" >

        <ROW F1="2004" F2="California" F3="Los Angeles" F4="e-Fashion Los Angeles" F5="982637.1" />

        <ROW F1="2004" F2="California" F3="San Francisco" F4="e-Fashion San Francisco" F5="721573.7" />

        <ROW F1="2004" F2="New York" F3="New York" F4="e-Fashion New York 5th" F5="644635.1" />

        <ROW F1="2004" F2="New York" F3="New York" F4="e-Fashion New York Magnolia" F5="1023060.7" />

        <ROW F1="2005" F2="California" F3="Los Angeles" F4="e-Fashion Los Angeles" F5="1581616" />

        <ROW F1="2005" F2="California" F3="San Francisco" F4="e-Fashion San Francisco" F5="1201063.5" />

        <ROW F1="2005" F2="New York" F3="New York" F4="e-Fashion New York 5th" F5="1076144" />

        <ROW F1="2005" F2="New York" F3="New York" F4="e-Fashion New York Magnolia" F5="1687359.1" />

        <ROW F1="2006" F2="California" F3="Los Angeles" F4="e-Fashion Los Angeles" F5="1656675.7" />

        <ROW F1="2006" F2="California" F3="San Francisco" F4="e-Fashion San Francisco" F5="1336003.3" />

        <ROW F1="2006" F2="New York" F3="New York" F4="e-Fashion New York 5th" F5="1239587.4" />

        <ROW F1="2006" F2="New York" F3="New York" F4="e-Fashion New York Magnolia" F5="1911434.3" />

      </ROWS>

    </FLOWDATA>

  </DATAPROVIDER>
</DP_RESULTS>

 

exactly the data you saw in the Preview window.
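
Lumira parses this Blob itself, but just to illustrate the format, here is a small sketch - not Lumira's internal code - that converts a local copy of such a DP_RESULTS file into CSV using the standard Java DOM API.  The file name here assumes you've copied the Blob15.xml shown above next to the program:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class BlobToCsv {
    public static void main(String[] args) throws Exception {
        // Hypothetical local copy of the Blob file shown above
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("Blob15.xml"));

        // The COLUMN elements map field names (F1..F5) to object names
        NodeList columns = doc.getElementsByTagName("COLUMN");
        String[] fields = new String[columns.getLength()];
        StringBuilder header = new StringBuilder();
        for (int i = 0; i < columns.getLength(); i++) {
            Element col = (Element) columns.item(i);
            fields[i] = col.getAttribute("FIELDNAME");
            header.append(i > 0 ? "," : "").append(col.getAttribute("NAME"));
        }
        System.out.println(header);

        // Each ROW element carries its values as attributes keyed by field name
        NodeList rows = doc.getElementsByTagName("ROW");
        for (int r = 0; r < rows.getLength(); r++) {
            Element row = (Element) rows.item(r);
            StringBuilder line = new StringBuilder();
            for (int i = 0; i < fields.length; i++) {
                line.append(i > 0 ? "," : "").append(row.getAttribute(fields[i]));
            }
            System.out.println(line);
        }
    }
}

Running it against the file above prints the same twelve rows you saw in the Preview.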

 

WebIntelligenceProcessingServer.getBlob

 

The getDPResultsEx call gets the reference to the Blob XML file that contains the data, and it's up to the getBlobInfos and getBlob calls to actually retrieve this file from the WebI Processing Server, so that Lumira can parse the contents to create a flat file.  Here are highlights from a getBlob call:

 

|TraceLog.Context: context var [ActionID]='Cuntkt3NC0wtk2YQsWNlaTIb8'

...

|START OUTGOING CALL Outgoing: FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:63.303:1]

...

|END OUTGOING CALL Outgoing: SPENT [00.035] FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:63.303:1] TO [webiserver_BIPW08R2.WebIntelligenceProcessingServer.getBlob#localhost:7004:10372.1499477:1]

...

|[com.businessobjects.sdk.core.server.internal.blob.BlobInterpreter]Chunk request fullfilled from initial (getBlob) buffer

|[com.businessobjects.sdk.core.server.internal.blob.BlobStream]Read 1794 bytes from current read-ahead buffer

...

|[com.sap.hilo.model.toolkit.DataFlowToolkit]End CSV file generation :1757 ms

 

The requested Blob XML file is streamed back to Lumira, which reads and parses the contents and generates the CSV passed to the Prepare engine.

 

 

Analyzing BI Platform Traces

 

By reading the Lumira traces and identifying the lines where it communicates with Platform, you can see how Lumira interacts with Platform to consume Universe data sources.  But that's a pretty one-sided view.  We can see the request going out from the Lumira side, but we don't see how the Platform side handles the request.

 

We should be able to see how Platform handles the request by reading the WebI Processing Server and APS traces, but usually there's a problem.  If you look at the logs you've gathered from Platform - all the *.glf files - you'll see that the total size is quite large.  That's expected, since on typical development or even production Platform deployments you can't guarantee that only you are using the system.  There may be other Users, in fact many other Users, also using the system, and their actions are all captured in the trace files you've collected.

 

The main difficulty in analyzing Platform trace files is eliminating all the irrelevant information and focusing only on the trace entries relevant to your workflow.  That's actually not very difficult to do.

 

If you look at all the highlights from the Lumira traces I've given above, you'll see that I've always included lines of the form:

 

|TraceLog.Context: context var [ActionID]='Cuntkt3NC0wtk2YQsWNlaTIb8'


The "ActionID" line always appears before a "START OUTGOING CALL" is made, and you'll see that each call has an unique ActionID value associated with it.  In the BI Platform TraceLog framework, each and every client request that requests a service from Platform services always generates an unique GUID value, and sends that to the services with the request.  Every time a Platform service services such a request, it records this ActionID value in the trace file and Audition DB entries.

 

In this way, you can associate all Platform trace file entries that were triggered by a client request, and follow the workflow from when the request first came in until the final response is sent back to the client.  Here, the client is Lumira Desktop.
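
By the way, if you want a quick, scriptable way to pull out the entries for a single request before reaching for the GLFViewer described below, a rough line-based filter over a glf file will do.  This is only a sketch - glf entries can span multiple lines, and the file path and ActionID below are placeholders you'd substitute with your own:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class GlfActionIdFilter {
    public static void main(String[] args) throws IOException {
        // Placeholder values - substitute your own glf file and ActionID
        Path glf = Paths.get("C:/PlatformLogs/webiserver_trace.glf");
        String actionId = "Cuntkt3NC0wtk2YQsWNlaTIa2";

        // Print only the lines that carry the ActionID of interest
        try (Stream<String> lines = Files.lines(glf)) {
            lines.filter(line -> line.contains(actionId))
                 .forEach(System.out::println);
        }
    }
}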

 

The easiest way to read BI Platform trace files is with the GLFViewer, available via SAP KBase 1909467.  If you don't have the GLFViewer already, open that KBase and download the viewer to your client machine.  Run the GLFViewer, go to File -> Open, and read in all the glf files you've gathered from Platform. Ensure all the information in the glf files is presented by going to View -> Choose Columns... and enabling all columns.  Find the "ActionID" column entry near the bottom, select it and click the "Move Up" button until it's near the top of the order:

 

GLFViewer_Choose_Columns.JPG

 

Let's filter the trace entries to show only the ones associated with the processDPCommandsEx "refreshbatch" request call highlighted above.  That call has:

 

|TraceLog.Context: context var [ActionID]='Cuntkt3NC0wtk2YQsWNlaTIa2'

 

and in the GLFViewer go to View -> Filter..., select "ActionID" for the Column, and enter Cuntkt3NC0wtk2YQsWNlaTIa2 into the text entry:

 

GLFViewer_Filter_ActionID.JPG

and click "OK" to filter.

 

Select View -> Indent Text According to Scope and you get quite a pretty display:

 

GLFViewer.JPG

 

The very first line for the servicing of this ActionID is the WebI Processing Server entry:

 

START INCOMING CALL Incoming:processDPCommandsEx FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:38.272:1] TO [webiserver_BIPW08R2.WebIntelligenceProcessingServer.processDPCommandsEx#localhost:7004:5484.1499462:1]

 

and the very last line is the WebI Processing Server entry:

 

END INCOMING CALL Incoming:processDPCommandsEx SPENT [1.282] FROM [Webi SDK.CorbaServerImpl.doProcess()#BIPW08R2:11432:38.272:1] TO [webiserver_BIPW08R2.WebIntelligenceProcessingServer.processDPCommandsEx#localhost:7004:5484.1499462:1]

 

You can see that the "START OUTGOING CALL" in the Lumira traces led to a "START INCOMING CALL" in the WebI traces, and when WebI finishes servicing the request with "END INCOMING CALL", that leads to the "END OUTGOING CALL" in the Lumira traces.

 

If you search for "START OUTGOING CALL" in these filtered Platform traces, you'll see the entry made by the WebI Processing Server:

 

START OUTGOING CALL Outgoing:doIt FROM [webiserver_BIPW08R2.WebIntelligenceProcessingServer.processDPCommandsEx#localhost:7004:5484.1499462:1]

 

which is immediately followed by an entry from the APS:

 

START INCOMING CALL Incoming: FROM [webiserver_BIPW08R2.WebIntelligenceProcessingServer.processDPCommandsEx#localhost:7004:5484.1499462:1] TO [.doIt#BIPW08R2:4244:10794286.19727081:1]

 

This represents the WebI Processing Server calling the DSL Bridge Service in the APS to request generation of a SQL statement, as described in this trace entry by the APS:

 

NativeQueryNode::NativeQueryString: SELECT Calendar_year_lookup.Yr, Outlet_Lookup.State, Outlet_Lookup.City, Outlet_Lookup.Shop_name, sum(Shop_facts.Amount_sold)

FROM

  Calendar_year_lookup, Outlet_Lookup, Shop_facts

WHERE

  ( Outlet_Lookup.Shop_id=Shop_facts.Shop_id  )

  AND  ( Shop_facts.Week_id=Calendar_year_lookup.Week_id  )

  AND  Outlet_Lookup.State  IN  ( 'California','New York'  )

GROUP BY

  Calendar_year_lookup.Yr,

  Outlet_Lookup.State,

  Outlet_Lookup.City,

  Outlet_Lookup.Shop_name

 

Keep following the traces to see the APS trace entry "END INCOMING CALL" after it passes the SQL back to the WebI Processing Server, which acknowledges it with an "END OUTGOING CALL".

 

You can see how to trace every request sent by Lumira to Platform and follow it through to completion.  If you've been generating your own traces, try following the other calls made by Lumira by identifying the associated ActionID, then filtering the Platform traces using the GLFViewer.

 

 

Root Cause Error Identification using Traces

 

As you saw above, the trace framework that both Lumira and Platform use can give tremendous insight into the internal workflow Lumira follows when consuming data from a Universe.  But how does this help with root cause error identification, for example for the "csEX" error shown above?

 

This is the workflow that's going to be used:

 

  1. Enable tracing on Lumira, WebI Processing Server and the APS.
  2. In Lumira, start consumption of Universes till the error appears.
  3. Collect all the traces.
  4. In the Lumira TraceLog traces, find the error message you saw. Search upwards for the "START OUTGOING CALL" that returned the error from the WebI Processing Server.
  5. Find the ActionID associated with the "START OUTGOING CALL".
  6. Open the WebI Processing Server and APS traces in the GLFViewer.
  7. Filter with the ActionID value.
  8. Search the filtered traces for the first error encountered.

 

I'll follow this with the traces I gathered for the "csEX" error.  I open the Lumira TraceLog file in Notepad++ and, starting from the bottom of the file, search upwards for the "csEX" error message to find the following entries:

 

|TraceLog.Context: context var [ActionID]='Cv3UgdIoskY5vV3HXoDoa9E46'

...

|processDPCommandsEx command

|<DPCOMMANDS><DPCOMMAND CID="0" CNAME="adddp"><DP><DS DSTYPE="DSL" /></DP><OUTPUTMODE><OUTPUTITEM OID="0" OTYPE="dplist" /><OUTPUTITEM OID="1" OTYPE="datasourceid" DPID="last" /></OUTPUTMODE></DPCOMMAND></DPCOMMANDS>

...

|processDPCommandsEx returns

|<?xml version="1.0" encoding="UTF-8"?>

   <METHODRETURN SERVER_VERSION="RELEASE[psbuild@VCVMWIN214] on Aug 13 2014 15:51:02" NAME="processDPCommandsEx">

   <ERRORS>

     <ERROR COMPONENT="WIS" ERRORCODE="0" ERRORTYPE="USER" MESSAGE="csEX" PREFIX="ERR">

       <DEBUGINFO BORESULT="" FILENAME="kc3dsdsl_server.cpp" LINEPOSITION="1028" MODULENAME="C3_DSDSL_SERVER"/>

       <REQUESTINFO COMMANDID="0" COMMANDNAME="Add DP" DPID="" DPLONGNAME="" DPNAME=""/>

       <REASON>

         <CONTENT></CONTENT>

       </REASON>

     </ERROR>

     <ERROR COMPONENT="WIS" ERRORCODE="0" ERRORTYPE="SUPERVISOR" MESSAGE="csEX" PREFIX="ERR">

       <DEBUGINFO BORESULT="" FILENAME="kc3dsdsl_server.cpp" LINEPOSITION="2101" MODULENAME="C3_DSDSL_SERVER"/>

       <REQUESTINFO COMMANDID="0" COMMANDNAME="Add DP" DPID="" DPLONGNAME="" DPNAME=""/>

       <REASON>

         <CONTENT></CONTENT>

       </REASON>

     </ERROR>

  </ERRORS>

...

</METHODRETURN>

 

The error return is preceded by a START OUTGOING CALL with ActionID=Cv3UgdIoskY5vV3HXoDoa9E46.  I open the WebI Processing Server and APS trace files in the GLFViewer and filter on this ActionID value.

 

I then search from the top of the trace entries downwards for the Trace column having the value "Error" and find:

GLFViewer_csEX_Root_Cause.JPG

 

and the error originated in the APS DSL Bridge Service:

 

Caused by: java.lang.ClassNotFoundException: com.sap.connectivity.cs.extended.ConnectionServer

  at java.net.URLClassLoader$1.run(URLClassLoader.java:255)

  at java.security.AccessController.doPrivileged(Native Method)

  at java.net.URLClassLoader.findClass(URLClassLoader.java:243)

  at java.lang.ClassLoader.loadClass(ClassLoader.java:372)

  at java.lang.ClassLoader.loadClass(ClassLoader.java:313)

  at java.lang.Class.forName0(Native Method)

  at java.lang.Class.forName(Class.java:249)

  at com.businessobjects.connectionserver.ConnectionServer.getImplementation(ConnectionServer.java:428)

 

The DSL Bridge Service tried to load a Java class that defines the csEX - the extended ConnectionServer library - but could not find it.

 

Searching the SAP Knowledge Base for that error turns up a single hit - KBase 2030389, which suggests the issue is with loading the cs_ex.jar file.  I find and check the integrity of that file, and indeed discover that it was somehow corrupted.  Replacing the file with a good copy and restarting the APS DSL Bridge Service makes the error go away.

 

Summary

 

I hope you find the above information useful - Universes have tremendous upsides when it comes to making a complex data schema accessible to users.  The challenge comes when errors are thrown by the Web Intelligence Processing Server servicing a request from Lumira for data from the Universe.  The error messages at times may not truly identify the root issue that caused the error.  In that case, you can use the TraceLog framework that both Lumira and BI Platform use to trace the error seen in Lumira back to its originating cause.

 

 

Ted Ueda has been supporting SAP BusinessObjects and Analytics for close to ten years, and still finds the job fun every single day.  When not working, he's somewhere outdoors trying not to be eaten by bears.

Lumira Data Access Extension - Box Office Statistics


Hi all,

 

Less than a month ago, I started using SAP Lumira for data visualization. I was impressed, both with the ease with which I could make beautiful and interesting visualizations from data and with the variety of sources Lumira could draw that data from. Recently, however, I've been looking into the support Lumira offers for those data formats that it can't connect to directly.

 

For those sources, Lumira offers the ability to create and use Data Access Extensions (DAE): console applications that allow it to connect to types of data sources it normally could not. A DAE is simply a console application that takes input parameters, reads the data from the source in question, formats it as character-separated values (CSV), and prints it to the console. The language and environment used for developing the DAE don't matter - as long as the end result is an application that Lumira can execute and that returns the data in the expected format, you can use whatever is most comfortable for you.
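
To make that shape concrete, here is a minimal, hypothetical sketch of such a console application in Java. It ignores the parameters Lumira passes on the command line (the Developer Guide covers those), and the columns and rows are made up purely for illustration:

public class HelloDae {
    public static void main(String[] args) {
        // Lumira passes input parameters on the command line; a real extension
        // would inspect args here. This sketch simply ignores them.

        // Column headers on the first line (made-up example data; treating the
        // first row as headers is an assumption for this illustration).
        System.out.println("Title,Weekend Gross,Weeks In Release");

        // One row per record, in the same comma-separated format.
        System.out.println("Example Movie A,1000000,3");
        System.out.println("Example Movie B,250000,12");
    }
}

Anything that compiles down to an executable Lumira can launch, and that prints its data in this form, can act as a DAE.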

 

In order to learn about DAE, I first followed the basic example in the official SAP Developer Guide, implementing an extension as a Java application that reads data from an XML file - Lumira can already accept XML, but the example provided a good, simple introduction. I also read through the excellent blog post by Trevor Dubinsky, where he introduces DAE in an easily understandable way. I recommend both of these resources if you're just starting out with these extensions.

 

After learning about DAE and implementing the sample XML case, I wanted to target something closer to a real use case, pulling data from an online source that wouldn't be available to Lumira without the benefit of DAE. I chose to interpret box office data from the table found at http://boxofficemojo.com/weekend/chart/, since I thought it would be interesting to visualize and I liked the fact that the data would change every week, rather than staying static. Since the data was stored as an HTML table, my extension would need to get the HTML document from the site and parse the data from there.

 

01-mojo-chart.png

 

I chose to implement my DAE in Java, using Eclipse as an IDE, so I looked for a library to parse HTML in Java. I found JSoup, and by importing the library and reading the documentation it was relatively simple to connect to the site, pull the HTML document into a Document object, and write a program to parse out the data and write it to the console as CSV.
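
To give an idea of the approach, here is a simplified sketch rather than the exact extension code - the "table tr" selector and the cleanup are placeholders that would need adjusting to the page's real markup:

import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class BoxOfficeDae {
    public static void main(String[] args) throws IOException {
        // Fetch the weekend chart page and parse it into a DOM
        Document doc = Jsoup.connect("http://boxofficemojo.com/weekend/chart/").get();

        // "table tr" is a guess at the structure; adjust to the page's real markup
        for (Element row : doc.select("table tr")) {
            StringBuilder csvLine = new StringBuilder();
            for (Element cell : row.select("td")) {
                if (csvLine.length() > 0) {
                    csvLine.append(",");
                }
                // Quote handling and numeric cleanup are omitted in this sketch
                csvLine.append(cell.text().replace(",", ""));
            }
            if (csvLine.length() > 0) {
                System.out.println(csvLine);
            }
        }
    }
}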

 

02-eclipse-output.png

 

At this point, my code had the input and output required to act as a DAE, but it did not meet the requirement of being an executable application. Natively, Eclipse supports exporting a Java project as a runnable .jar, but the documentation I've read specifies that a Lumira DAE must be a .exe. As a work-around, a friend pointed me to the utility JSmooth, which creates native Windows launchers (in the form of .exes) that in turn run .jar files. By wrapping my exported code in this and placing both files in Lumira's extension folder, I was able to achieve success. (This is a bit inconvenient, both because of the extra step and the extra file dependency - if anyone knows a better way to implement a DAE in Java, please leave a comment or send me a message and I'll update the guide.)

 

03-jsmooth.png

 

Now that I had a working DAE in Lumira, pulling live data from the HTML on the website, I composed a simple data story. I focused on a pie chart breakdown of the weekend box office gross by title and week number (since it's worth showing, for example, when a low-grossing movie is doing so because it's been in theatres successfully for 20 weeks already) and a bubble chart, showing total gross in comparison to budget, with bubble size indicating current weekend gross (and thus correlating with the earning power the movie has left before its run ends). Sadly, the budget data is missing for a fairly large number of entries. I also added a filter panel so that users can compare movies that have been in theatres for similar amounts of time to one another.

 

04-lumira-preview.png

 

Thank you all for reading, and I hope this has sparked your interest in this exciting feature of SAP Lumira.

 

Regards,

Ben Wilder

READ ME: SAP Lumira 1.21


Hi Everyone

 

As you know, SAP Lumira 1.21 is now on SMP.  Unfortunately there were a couple of issues in the data connection UI of Lumira Desktop that have caused confusion.  We apologize for the inconvenience this has caused and we are looking into the earliest possible improvements.

 

 

1.  In some installations, the option to connect to a Universe does not show up in the New Dataset dialog.

 

This is a UI issue caused by a stale cache. At this time it needs to be fixed manually by deleting the configuration cache.

Please refer to SAP Knowledge Base 2109423 for the steps.

 

 

2. There is a string error in the New Dataset dialog for the SAP BW (Business Warehouse) connectivity option. It was caused by a last-minute tooling error that inserted an incorrect string. There are no functional changes to Lumira's support for BW in this release.

 

"Download from SAP Business Warehouse" should say "Connect to SAP Business Warehouse" as it has been in SAP Lumira 1.17-1.20.  This connectivity option allows you to connect to BEx Query of an InfoProvider and visualize the data.

 

The option to acquire data from SAP BW and mash it up with other data is planned for 2015, and it will include the ability to compose stories and share them to a server.

 

Once again, we apologize for the issues and we are looking into addressing them as soon as possible.

 

Thank you.

Sharon

 

