For: companies that want to further grow revenue by leveraging data.
Who are dissatisfied with: the limitations of data science that does not run at the enterprise level, 24/7.
Infornautics is a: database and software engineering team that builds client-owned IP, analyzing data across the entire enterprise in real time.
That provides: identification of material and anomalous data events, focused on non-obvious relationships between different data points.
Unlike: using a workstation, importing data, and running desktop analysis software,
Infornautics: builds enterprise-wide data-hunting algorithms, sequencing event materiality and proactively notifying leadership via easy-to-understand narratives.
What We Do: Infornautics builds "computational trust" applications you can't google how to build. We are industrial builders of one-of-a-kind, bleeding-edge technologies. Our endgame is to give each client a significant and sustainable competitive advantage in its industry. As data scientists and engineers, our goal is not to organize data, but to locate the value and answers hidden in that data, using both deductive algorithms and inductive machine learning.

Who We Are: For over 15 years, Infornautics has been building custom algorithms and machine models that uncover actionable intelligence in raw data. We work on a pure-skill basis, with rights to all work product and IP owned by our clients. Our intent is not to maintain existing technology, but to move technology to the next level. Our starting point is where all other applications stop: the unknown is only a problem to be solved.
Blueprinting Technology Projects: Most IT projects are plagued by cost overruns and never deliver on time. We have learned that scope creep comes from not locking requirements upfront and not getting executive buy-in. Projects we lead deliver on time and on budget. Upfront, we interview a diverse set of stakeholders to identify features and expectations. We then build a blueprint of the final project and exactly what is expected for Release 1. This blueprint is presented to executives for final signoff before any part of the actual software is started.

Storytelling and Narratives: Time has never been more precious. Machine models are complex and their outputs are hard to read. The last step of our data engineering process is to set up reports for executives that capture material events in the data. These events are organized and emailed as event narratives each morning. The reports include links to more detailed analysis, if an executive wants to do further research.
Research Areas:
A future iteration of blockchain will be more of a game changer than artificial intelligence. All commerce will go peer-to-peer, with each of us holding our own data on our own node in our own home.
Artificial Intelligence: We believe humans will have a hard time trusting A.I. without being able to "audit" its outputs. A.I. will gain traction as a B2B and back-office technology that improves human-to-human interactions. For example, doctors will deliver treatment, while A.I. decides the treatment. Machine models are valuable in data analysis, but their outputs will need to be supported by detailed narratives, with a beginning and an end.

Anomaly Hunting: Wide deviations in numerical values wreak havoc on data analysis. One extreme value can affect all statistical outputs. Applying a power transformation to non-normal numerical distributions allows us to hunt down this extreme data and better understand whether it reflects a clerical mistake or something more important.
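A minimal sketch of that idea in Java, assuming a Box-Cox-style power transform followed by a simple z-score cutoff (the class name, sample data, lambda choice, and threshold are illustrative stand-ins, not Infornautics production code):

    import java.util.ArrayList;
    import java.util.List;

    public class AnomalyHunter {

        // Box-Cox-style power transform: log when lambda == 0,
        // otherwise (x^lambda - 1) / lambda. Requires strictly positive inputs.
        static double powerTransform(double x, double lambda) {
            return lambda == 0.0 ? Math.log(x) : (Math.pow(x, lambda) - 1.0) / lambda;
        }

        // Flags indices whose transformed value sits more than zCutoff
        // standard deviations from the transformed mean.
        static List<Integer> findAnomalies(double[] values, double lambda, double zCutoff) {
            double[] t = new double[values.length];
            double sum = 0.0;
            for (int i = 0; i < values.length; i++) {
                t[i] = powerTransform(values[i], lambda);
                sum += t[i];
            }
            double mean = sum / t.length;
            double ss = 0.0;
            for (double v : t) ss += (v - mean) * (v - mean);
            double std = Math.sqrt(ss / (t.length - 1));

            List<Integer> flagged = new ArrayList<>();
            for (int i = 0; i < t.length; i++) {
                if (Math.abs((t[i] - mean) / std) > zCutoff) flagged.add(i);
            }
            return flagged;
        }

        public static void main(String[] args) {
            // One extreme value, possibly a clerical mistake (an extra zero).
            double[] monthlyVolumes = {120, 135, 128, 140, 133, 1300, 125, 138};
            // lambda = 0 (a log transform) tames the skew before z-scoring.
            System.out.println(findAnomalies(monthlyVolumes, 0.0, 2.0)); // [5]
        }
    }

Flagged values then go to a human: the z-score says a point is extreme, but only context says whether it is a typo or a material event.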
Data Engineering vs. Science: Before the current trend toward data analysis, we were already writing data-centric applications. We never needed to learn desktop analytics, since we were already coding in Java against very large databases. In recruiting, we parsed and analyzed more than 10 million resumes. In oil & gas, we handle over 700 million rows of data each month. Science is about sitting at the workbench and making discoveries. Engineering is about taking those discoveries to scale, across the entire enterprise.

Non-Obvious Relationship Awareness: Most machine learning projects focus on the features that describe each data object. We are more interested in the relationships between those objects. We see data analysis as a journey through chronological data, looking for unexpected value changes. We also focus on finding very small changes in otherwise dormant datasets.
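One way to picture the dormant-dataset idea, sketched in Java (the class, field name, and dormancy window are hypothetical): a field that has held the same value for many periods is treated as dormant, so any change at all becomes worth reporting.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch: flags the moment a long-dormant field changes value.
    public class DormantFieldWatcher {
        private final int dormancyWindow; // periods with no change => dormant
        private final Map<String, Deque<Double>> history = new HashMap<>();

        DormantFieldWatcher(int dormancyWindow) { this.dormancyWindow = dormancyWindow; }

        // Returns true if this observation wakes a dormant field.
        boolean observe(String field, double value) {
            Deque<Double> h = history.computeIfAbsent(field, k -> new ArrayDeque<>());
            boolean wokeDormantField = h.size() >= dormancyWindow
                    && h.stream().distinct().count() == 1
                    && h.peekLast() != value;
            h.addLast(value);
            if (h.size() > dormancyWindow) h.removeFirst();
            return wokeDormantField;
        }

        public static void main(String[] args) {
            DormantFieldWatcher watcher = new DormantFieldWatcher(6);
            // Hypothetical monthly readings: flat for six months, then a tiny move.
            double[] readings = {500, 500, 500, 500, 500, 500, 501};
            for (int month = 0; month < readings.length; month++) {
                if (watcher.observe("well-42/casing_pressure", readings[month])) {
                    System.out.println("Month " + month
                            + ": dormant field changed to " + readings[month]);
                }
            }
        }
    }

A size-based filter would never flag a move from 500 to 501 on its own; it is the field's long dormancy that makes the tiny change material.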
Evolutionary Prototyping: Blueprint a game changer, develop it incrementally, and rely heavily on end-user feedback. This reduces the time and cost of getting something to market. We use sprints to deliver the most important features first. These go to family and friends for review. The client decides on a threshold that must be crossed before public release.

Arbitrage of Time: Everything competes with everything else for a minute of the user's mindshare. Users don't visit more than 4-5 websites each day, usually Google, a social media site, a news site, and maybe Reddit. Anything else needs to be easy to read and quick to digest. There is no time to analyze a graph or learn how to navigate a website. Value has to be story-based and easy to understand for very busy users.
Build vs. Buy: Building is costly to maintain and not always the client's core competency. But buying means all feedback to the vendor benefits your competitors. It also means vendor lock-in. We believe in a hybrid approach: build a "motherboard" to handle the data communication and plug vendors in on top. Each vendor product needs a data layer that builders can directly access, so the motherboard can manage data flows between vendor solutions (see the sketch after the next paragraph).

Peer-to-Peer: Central intermediaries sell trust. We believe this trust is eroding as data is sold, hacked, or used to reinforce biases. We believe Web 3.0 will introduce a "Napster-like" interface that uses blockchain to sell goods and services. Each of us will have a unit in our home, with all our social media and purchasing data controlled at the individual level.
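Returning to the Build vs. Buy motherboard idea, it might look like the following Java sketch, where every name (VendorDataLayer, Motherboard, the stub vendors) is a hypothetical stand-in rather than a shipped API:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Each vendor product is wrapped in a data layer the motherboard can address
    // directly; the motherboard owns all data flows between vendor solutions.
    interface VendorDataLayer {
        String name();
        List<Map<String, Object>> export(String dataset);            // read out
        void ingest(String dataset, List<Map<String, Object>> rows); // write in
    }

    class Motherboard {
        private final Map<String, VendorDataLayer> vendors = new HashMap<>();

        void plugIn(VendorDataLayer vendor) { vendors.put(vendor.name(), vendor); }

        // Route a dataset between plug-ins. Because the motherboard sits in the
        // middle, swapping one vendor out never breaks the others.
        void route(String dataset, String from, String to) {
            vendors.get(to).ingest(dataset, vendors.get(from).export(dataset));
        }
    }

    public class MotherboardDemo {
        // In-memory stand-in for a real vendor's data layer.
        static class StubVendor implements VendorDataLayer {
            private final String name;
            private final Map<String, List<Map<String, Object>>> store = new HashMap<>();
            StubVendor(String name) { this.name = name; }
            public String name() { return name; }
            public List<Map<String, Object>> export(String dataset) {
                return store.getOrDefault(dataset, List.of());
            }
            public void ingest(String dataset, List<Map<String, Object>> rows) {
                store.computeIfAbsent(dataset, k -> new ArrayList<>()).addAll(rows);
            }
        }

        public static void main(String[] args) {
            Motherboard board = new Motherboard();
            StubVendor crm = new StubVendor("crm");
            StubVendor analytics = new StubVendor("analytics");
            board.plugIn(crm);
            board.plugIn(analytics);
            crm.ingest("accounts", List.of(Map.of("id", 1, "name", "Acme")));
            board.route("accounts", "crm", "analytics");
            System.out.println(analytics.export("accounts")); // the routed row
        }
    }

The design point is that vendors talk only to the motherboard, never to each other, so feedback and data flows stay under the client's control.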
David
Data Engineer
David is an algorithm builder. He believes data has a story to tell, if we apply the right models. His specialty is unstructured data: he previously taught a computer to read resumes and decide which candidates should be interviewed for each position. He cut his teeth in Big Four accounting, building a sales force automation system that managed $750MM in new revenue. He has also spent time building trading-desk fundamentals and arbitrage tools.
Marc Ravensbergen
Software Engineer
Marc is an expert in Java, Scala, Python, C, Vala, and frameworks including Tomcat, JSP, Solr, Lucene, GWT, jQuery, Knockout.js, and many others. He has built custom search engines, database engines, web scrapers, data importers, and real-time 3D visualizations. With fully custom source code at his fingertips, he can extend or tune these platforms for any client project. He specializes in maximizing hardware processing performance, often running parallel data analysis across 40+ threads, with all calculations processed in RAM.
Gulhan Sumer (LinkedIn)
Data Management Office Consultant
Gulhan is an accomplished technology executive with more than twenty-five years of experience leading IT operations, programs, and projects, and a passion for serving the customer. Gulhan has had the privilege of serving as a Chief Information Officer (CIO) and IT innovation leader within rapidly evolving organizations.
Pete McClintock (LinkedIn)
Data Scientist
Pete is an oil and gas data guru. He spent 11 years at IHS, the leader in energy data, and in that time developed an expert-level understanding of oil & gas data. Of the thousands of data elements that make up a wellbore, Pete knows the definition and usage of every single field. Pete has made all of Infornautics' success in oil and gas possible. His subject matter expertise, combined with predictive algorithms and machine learning, has taken energy analytics to a whole new level.