Data preparation capabilities are emerging that will give business users and analysts the ability to extend the scope of self-service. One of the biggest challenges faced by any enterprise seeking business insights from data is dependence on a central IT organization. IT traditionally uses Extraction, Transformation and Loading (ETL) routines to bring data from across the enterprise into a data warehouse, using tools such as Informatica and IBM InfoSphere DataStage. The problems with this approach have been that (i) the business has to get into an IT queue and wait; (ii) the tools are clunky and require IT skills for even basic usage; and (iii) data transformations (if applied during the ETL process) become outdated quickly as business conditions change. Typically, the Data Preparation process achieves two objectives: (i) it aggregates enterprise data from different source systems into a data warehouse for ease of end use, and (ii) it attempts to create a single version of the truth within that warehouse. Emerging self-service Data Preparation tools such as Trifacta and Paxata make both of these steps redundant: (i) there is no need to physically aggregate data as long as a logical layer can marry the semantics of data from across the enterprise, just-in-time, for business-user self-service (i.e., let the data live where it is and integrate it at the point of business need, under proper governance); and (ii) they are driven by end-user demand rather than IT supply (i.e., by business-user needs rather than IT capacity and understanding). If you want to know more about what’s happening in the Data Preparation market, please see the recent study by Dresner Advisory.
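The contrast between the two integration styles can be sketched in a few lines of code. This is a purely illustrative Python example, with made-up in-memory "source systems"; the record shapes, field names, and functions are assumptions, not any vendor's actual API.

```python
# Two illustrative "source systems": a CRM and an orders system.
crm_records = [
    {"customer_id": 1, "name": "Acme Corp", "region": "EMEA"},
    {"customer_id": 2, "name": "Globex", "region": "APAC"},
]
orders_records = [
    {"order_id": 10, "customer_id": 1, "amount": 1200.0},
    {"order_id": 11, "customer_id": 2, "amount": 450.0},
    {"order_id": 12, "customer_id": 1, "amount": 300.0},
]

def etl_load_warehouse(crm, orders):
    """Classic ETL: physically copy and pre-join data into a warehouse table.
    The transformation is fixed at load time, so it goes stale as the
    business (or either source system) changes."""
    by_id = {c["customer_id"]: c for c in crm}
    return [
        {"customer": by_id[o["customer_id"]]["name"],
         "region": by_id[o["customer_id"]]["region"],
         "amount": o["amount"]}
        for o in orders
    ]

def federated_total_by_region(crm, orders):
    """Logical-layer style: leave the data where it lives and join it
    just-in-time, at the point of the business question."""
    by_id = {c["customer_id"]: c for c in crm}
    totals = {}
    for o in orders:
        region = by_id[o["customer_id"]]["region"]
        totals[region] = totals.get(region, 0.0) + o["amount"]
    return totals

warehouse = etl_load_warehouse(crm_records, orders_records)
print(len(warehouse))  # 3 pre-joined warehouse rows
print(federated_total_by_region(crm_records, orders_records))
```

The point of the contrast: the first function materializes a copy that must be reloaded when anything changes, while the second answers the question directly from the sources at query time.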
Salesforce.com today announced updates to its Wave Analytics Cloud, which the company first introduced at Dreamforce in October. The updates will allow users to import and analyze new data sources, build entirely new dashboards, and share insights.
The Wave Analytics Cloud was designed to enable sales reps to interpret large data sets and analyze customer trends through their mobile phones without referring to another machine or expert. Using an advanced computing engine, it eliminates the need to go through a sorting process before analyzing information.
Howard Dresner, chief research officer at Dresner Advisory Services, reiterates that the trend toward on-the-go use is apparent. “We’ve been researching mobile computing and mobile business intelligence (BI) for six years now and have seen its importance grow dramatically since then,” he says. “There are some users that will never use BI on a laptop or desktop, and those numbers will continue to grow. So, mobile-first makes great sense as a forward-thinking approach to usage.”
When I asked my weekly #BIWisdom tweetchat tribe of users, vendors and consultants about their observations and experience with open source in business intelligence, their comments that Friday whirled about like dancers. Their tweeted comments from all around the world focused on whether open source is — or will be — a difference-maker in BI solutions.
In our annual Wisdom of Crowds® Business Intelligence Market Studies, we saw greater interest in open source in 2014 than in prior years. It’s not yet ranked as a high priority among the 22 technology areas that we track in our surveys, but its importance rose in 2014 after years of stagnation. Moreover, respondents’ interest is distributed across industry verticals and geographies.
A #BIWisdom tribe member tweeted an observation that there is also a definite uptick in open source extensions to existing BI solutions.
Editor’s note: Is the growth of mobile business intelligence flat, ready for takeoff or growing beyond expectations? What are the current business preferences for the various mobile apps and platforms? How is cloud computing impacting mobile BI? These and many other trends are revealed in the 2014 Mobile Computing / Mobile Business Intelligence Market Study recently published by Dresner Advisory Services. I spoke with Howard Dresner about the study’s findings and what’s really happening in mobile BI.
Nobody likes audits. But ensuring data quality is foundational to all business intelligence endeavors. How can a business make quality decisions with poor quality data? A tribe member in one of my weekly #BIWisdom tweetchats mentioned he read an article about a survey finding that 80 percent of companies claim they deal with poor data quality, yet less than 52 percent of those companies consider doing a data quality audit. Those statistics sparked a boisterous bunch of tweets with opinions and questions from BI users, consultants and vendors, including:
“How could the respondents in that study claim poor quality if they didn’t audit it? Is quality simply a matter of perception?”
“No, it’s not perception. Data quality has been proven to be a dependency for success or failure in BI initiatives – whether or not it’s called out as the underlying factor.”
“So shouldn’t the precursor to any BI initiative be a data quality audit? Otherwise, the outcomes may be flawed.”
“Data quality is important, but it shouldn’t impede the initial progress of self-service data discovery. Then, when you find something potentially interesting through self-service data discovery, you add data quality governance.”
“It’s impossible to always have good data on the front end; there are too many exceptions and shortcut coding.”
“But you can’t edit all the data. What about unstructured data and also external data?”
“The idea is to identify the Key Data Elements and focus on those during the audit.”
“The necessity for data quality should depend on the impact of the potential decision.”
“Who is responsible for pushing the audit – IT, line of business, top executives?”
“LOBs need to be able to see where strategic failures come from. If data quality is suspect, then they should demand an audit.”
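The "focus on Key Data Elements" advice from the chat above can be made concrete with a small sketch. This is a minimal, hypothetical example in Python; the field names, sample records, and metrics (completeness and uniqueness) are assumptions chosen for illustration, not a prescribed audit methodology.

```python
# Sample records with deliberate quality problems: a missing email
# and a duplicated customer_id.
records = [
    {"customer_id": "C001", "email": "a@example.com", "revenue": 100.0},
    {"customer_id": "C002", "email": None,            "revenue": 250.0},
    {"customer_id": "C002", "email": "b@example.com", "revenue": -50.0},
]

# The audit checks only the designated Key Data Elements,
# not every field in every record.
KEY_ELEMENTS = ["customer_id", "email"]

def audit(rows, key_elements):
    """Return simple completeness and uniqueness metrics per key element."""
    report = {}
    n = len(rows)
    for field in key_elements:
        values = [r.get(field) for r in rows]
        non_null = [v for v in values if v is not None]
        report[field] = {
            # Share of records where the field is populated.
            "completeness": len(non_null) / n,
            # True only if no populated value repeats.
            "unique": len(set(non_null)) == len(non_null),
        }
    return report

result = audit(records, KEY_ELEMENTS)
print(result)
```

Running this flags exactly the two problems planted in the sample: `customer_id` is fully populated but not unique, and `email` is unique but incomplete. Scoping the audit to key elements this way keeps it cheap enough to run before (or alongside) a BI initiative rather than as a one-off project.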