
Five Steps to Insights March 8, 2017

Posted by stewsutton in Big Data, business analytics, business intelligence, Collaboration, Computational Knowledge, Information Technology, Knowledge Management.

There are some very simple steps (five in this example) that a curious person with the right tools can take to understand what the data has to tell them. Generally it starts with a simple question: you want to know something. This sends you on a quest to gather some data, and the gathering is often quite time consuming. In some scientific endeavors it is the tedious process of recording the data you observe in your experiment, which may take days, weeks, or sometimes years. Then comes the next step: preparing that data for exploration. The way data is recorded and gathered is seldom the structure needed for reporting, so the data must be transformed and reshaped. This does not change the values in the data; rather, it molds the way the data is organized so that it can be explored with data visualization tools. At this point, I believe, the fun part begins. This is the moment where you may begin to explore the data, and exploration is a very good label for what happens: you are navigating and observing what is there, seeing things for the first time. The process is exciting, and it often brings insights and understanding that you can share with others.
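As a minimal sketch of what steps two through five can look like in practice, here is the gather, prepare, explore, and share sequence expressed with Python's pandas library. The file observations.csv and its column names are hypothetical stand-ins for whatever your own experiment records.

import pandas as pd

# Gather: load the recorded observations.
# (observations.csv and its columns are assumed for illustration.)
df = pd.read_csv("observations.csv")  # columns: date, site, measurement

# Prepare: reshape without changing any values. Pivot from one row
# per observation to one column per site, the layout that most
# visualization tools expect.
wide = df.pivot_table(index="date", columns="site", values="measurement")

# Explore: summary statistics and a first look at the data.
print(wide.describe())
wide.plot(title="Measurements by site over time")  # requires matplotlib

# Share: export the reshaped data for others.
wide.to_csv("measurements_by_site.csv")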

Computational Knowledge February 4, 2014

Posted by stewsutton in Architecture, Big Data, business intelligence, Collaboration, Computational Knowledge, Economics, Education, Knowledge Management.

Right now we have a serious need for more students to fall in love with the STEM subjects: science, technology, engineering, and mathematics. We know these fields fuel economic growth, so training a STEM workforce has been recognized as a key goal in education policy. And yet there is an enthusiasm gap in these subjects, and nowhere is it more evident than in math. In the United States, students don't think they're good at math, so they become quite adept at hating it. Many students, it seems, would rather eat broccoli than do math homework (and that is within a culture raised on fast food, where broccoli is viewed as utterly disgusting). Not surprisingly, these students are significantly underperforming. So how do we change this?

The way we teach math needs to be reinvented!

In a nutshell, “students need visual and interactive curriculum that ties into real life.” Nowhere is the power of good mathematical instruction better demonstrated than within the environment of Wolfram Mathematica.

Teaching math properly means breaking it down into four components:

1. Posing the right questions
2. Turning a real world problem into a math formulation
3. Computation
4. Turning the math formulation back into the real world and verifying it

We spend perhaps 80 percent of the time in math education teaching people to do step 3, computation by hand. That is the one step a computer can do better than any human, no matter how many years the human has practiced. Why are we doing this?

Instead, let us use computers to calculate. After all, that is the math chore we hate the most. Teaching hand-calculation may have been necessary 50 years ago, and there are still a few practical situations where it is useful today, but it should no longer dominate the curriculum.
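To make that division of labor concrete, here is a minimal sketch using Python's sympy library (my choice for illustration; the post's context is Mathematica, and the same steps translate directly). The human does steps 1, 2, and 4; the computer does step 3.

import sympy as sp

# Steps 1 and 2 (human work): pose the question and formulate it.
# Question: a ball is thrown straight up at 20 m/s; when does it land?
# Formulation: h(t) = v0*t - (g/2)*t**2; find t > 0 with h(t) = 0.
t = sp.symbols("t", positive=True)
v0 = 20
g = sp.Rational(981, 100)  # 9.81 m/s^2 as an exact rational

h = v0 * t - sp.Rational(1, 2) * g * t**2

# Step 3 (computer work): solve the equation.
landing = sp.solve(sp.Eq(h, 0), t)

# Step 4 (human work): bring it back to the real world and verify.
print(landing)                # [4000/981]
print(float(landing[0]))      # ~4.08 seconds, a plausible hang time
print(h.subs(t, landing[0]))  # 0, so the formulation checks out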

The goal of the Wolfram technology is to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. We see this technology achieving some pretty spectacular levels of performance in Wolfram|Alpha, and within Mathematica as well. Integrating this form of computational knowledge into classrooms will have a powerful multiplying effect on students' performance and understanding as they orient themselves to solving real-life problems.

Big Data July 13, 2013

Posted by stewsutton in Architecture, Big Data, business analytics, Cloud, Cloud Computing, Information Technology, Knowledge Management.

Perhaps you have heard the term “big data.” Does it not seem to be riding the peak of inflated expectations? It is probably a healthy perspective to be just a bit suspicious of “big data” solutions coming to the rescue where all others have been unsuccessful.

Scientists certainly compare approaches to problem solving, and those conversations now include big data. Big problems need solutions that can operate at “big” scale, and the phenomenon of big data is certainly real. The three Vs of volume, velocity, and variety, coined by analyst Doug Laney (then of META Group, which Gartner later acquired), have helped us frame the characteristics of what we understand as big data.

Ultimately it is about how these problems get solved using distributed data and distributed processing. Some organizations will do it internally while others will take to the cloud. But as many have already experienced, some of the cloud's benefits, those tied to “bursty” allocation of resources, do not materialize for big data configurations.

Said more simply: the financial benefit of lightly touching cloud resources, time-sharing them with other tenants, diminishes for big data problems that keep those resources fully utilized and thereby incur the highest expense against the cloud infrastructure. This reality shapes how we must architect solutions that cross into the cloud while keeping the “heavy lifting” within our own corporate intranet infrastructure. It certainly keeps the big data problem interesting.
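A back-of-the-envelope model makes the point. All of the prices and utilization figures below are assumptions for illustration, not quotes from any provider.

# Hypothetical cloud-cost model: a bursty workload pays only for
# the hours it uses, while a saturated big-data workload pays for
# nearly every hour, eroding the time-sharing benefit.

ON_DEMAND_RATE = 2.50    # $/node-hour, assumed
NODES = 40
HOURS_PER_MONTH = 730

def monthly_cost(utilization: float) -> float:
    """On-demand cost when we pay for only the hours we use."""
    return ON_DEMAND_RATE * NODES * HOURS_PER_MONTH * utilization

print(f"bursty (10% busy):    ${monthly_cost(0.10):>10,.2f}")  # $7,300.00
print(f"saturated (95% busy): ${monthly_cost(0.95):>10,.2f}")  # $69,350.00

# At sustained utilization, a fixed-cost on-premises cluster
# (say $40,000/month for comparable nodes, also assumed) can
# undercut on-demand cloud pricing, which is one reason the
# "heavy lifting" often stays on the corporate intranet.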

With all of that said, it is quite another thing when you start to hear how big data is going to upend everything. It is quite unlikely that big data will usher in a “revolution” that transforms how we live, work, and think. We do well to approach big data as just another tool in the toolkit and to use it for those problems where it makes sense.