For: companies that want to further grow revenue by leveraging data.

Who are dissatisfied: with the current limitations of data science, which does not run at the enterprise level, 24/7.

Infornautics is a: database and software engineering team that builds client-owned IP, analyzing data across the entire enterprise in real time.

That provides: identification of material and anomalous data events, focused on non-obvious relationships between different data points.

Unlike: using a workstation, importing data, and running desktop analysis software.

Infornautics: builds enterprise-wide data hunting algorithms, sequencing event materiality and proactively notifying leadership via easy-to-understand narratives.


What We Do: Infornautics builds "computational trust" applications you can't Google how to build. We are industrial builders of one-of-a-kind, bleeding-edge technologies. Our endgame is to give each client a significant and sustainable competitive advantage in its industry. As data scientists and engineers, our goal is not to organize data, but to locate the value and answers hidden in that data using both deductive algorithms and inductive machine learning.
Who We Are: For over 15 years, Infornautics has been building custom algorithms and machine models that uncover actionable intelligence in raw data. We work at the pure skill level, with rights to all work product and IP owned by our clients. Our intent is not to maintain existing technology, but to move technology to the next level. Our starting point is where all other applications stop, with the unknown simply a problem to be solved.
Blueprinting Technology Projects: Most IT projects are plagued by cost overruns and missed deadlines. We have learned that scope creep comes from failing to lock requirements upfront and secure executive buy-in. Projects we lead deliver on time and on budget. Upfront, we interview a diverse set of stakeholders to identify features and expectations. We then build a blueprint of the final project and exactly what is expected for Release 1. This blueprint is presented to executives for final signoff before any software is written.
Storytelling and Narratives: Time has never been more precious. Machine models are complex and their outputs are hard to read. The last step of our data engineering process is to set up executive reports that capture material events in the data. These events are organized and emailed as event narratives each morning, with links to more detailed analysis if an executive wants to dig deeper.
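As an illustration only (the event fields, thresholds and URLs below are hypothetical assumptions, not a client deliverable), a minimal sketch of this narrative step might rank the day's flagged events by materiality and render them as plain-English sentences that any mailer can send:
```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MaterialEvent:
    entity: str        # e.g. a well, account, or customer (hypothetical fields)
    metric: str
    change_pct: float
    detail_url: str    # link to the more detailed analysis

def morning_narrative(events: list[MaterialEvent], top_n: int = 5) -> str:
    """Rank flagged events by materiality and render a plain-English summary."""
    ranked = sorted(events, key=lambda e: abs(e.change_pct), reverse=True)[:top_n]
    lines = [f"Material events for {date.today():%B %d, %Y}:", ""]
    for e in ranked:
        direction = "rose" if e.change_pct > 0 else "fell"
        lines.append(
            f"- {e.entity}: {e.metric} {direction} {abs(e.change_pct):.1f}% "
            f"versus its normal range. Details: {e.detail_url}"
        )
    return "\n".join(lines)

# The resulting text can be handed to any mailer (smtplib, a hosted email API, etc.)
print(morning_narrative([
    MaterialEvent("Well 42-A", "daily production", -18.4, "https://example.com/42a"),
    MaterialEvent("Account 7781", "invoice volume", 6.2, "https://example.com/7781"),
]))
```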
Research Areas: A future iteration of blockchain will be more of a game changer than artificial intelligence. All commerce will go peer-to-peer with each of us holding our own data on our own node in our own home.

We believe technology will have a huge impact on K-12 education, but only in support of teachers. The visual cortex is first to mature and can be leveraged for longer-term recall. We would take a page from sales force automation and create a "touchpoint" program to support teachers with real-time current events that relate to specific parts of the curriculum.

We are strong advocates of the master and the emissary approach. Most I.T. projects are left-brain focused and lack an understanding of the overall user environment. Easy-to-use interfaces are critical, since users have so little time.

Artificial Intelligence: We believe humans will have a hard time trusting A.I. without being able to “audit” its outputs. A.I. will gain traction as a B2B and back-office technology that improves human-to-human interactions; for example, A.I. will decide a treatment while doctors deliver it. Machine models are valuable in data analysis, but their outputs will need to be supported by detailed narratives, with a beginning and an end.

Anomaly Hunting: Wide deviations in numerical values wreak havoc on data analysis. One extreme value can affect all statistical outputs. Using a power transformation for non-normal numerical distributions allows us to hunt down this extreme data and better understand whether it reflects a clerical mistake or something more important.
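
A minimal sketch of the idea, assuming scipy is available (the Yeo-Johnson variant and the 3-standard-deviation cut-off are illustrative choices, not a description of our production pipeline):
```python
import numpy as np
from scipy import stats

def flag_extremes(values: np.ndarray, z_cutoff: float = 3.0) -> np.ndarray:
    """Power-transform a non-normal series, then flag points far from the centre.

    The Yeo-Johnson transform pulls a skewed distribution toward normality,
    so a simple z-score cut-off becomes a reasonable way to isolate extreme
    values for review (clerical mistake vs. something more important).
    """
    transformed, _lmbda = stats.yeojohnson(values)
    z = (transformed - transformed.mean()) / transformed.std()
    return np.abs(z) > z_cutoff

# Usage: append one wild reading to an otherwise well-behaved skewed series
rng = np.random.default_rng(0)
series = np.append(rng.lognormal(mean=4.6, sigma=0.3, size=500), 250_000.0)
print(series[flag_extremes(series)])   # the values that survive as extremes, on the raw scale
```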

Data Engineering vs. Science: Before the current trend in data analysis, we were already writing data-centric applications. We never needed to learn desktop analytics, since we were already coding in Java against very large databases. In recruiting, we parsed and analyzed 10 million+ resumes. In Oil & Gas, we handle over 700 million rows of data each month. Science is about sitting at the workbench and making discoveries. Engineering is about taking those discoveries to scale, across the entire enterprise.

Non-Obvious Relationship Awareness: Most machine learning projects focus on features that describe each data object. We are more interested in the relationships between these objects. We see data analysis as a journey through chronological data, looking for unexpected value changes. We also focus on finding very small data changes in otherwise dormant datasets.
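
A hedged sketch of the dormant-dataset part of this idea (the column names and the 12-period dormancy window are assumptions for illustration): walk each entity's history in chronological order and flag the first change, however small, in a value that had not moved for a long stretch.
```python
import pandas as pd

def dormant_changes(df: pd.DataFrame, key: str, field: str,
                    min_dormant_periods: int = 12) -> pd.DataFrame:
    """Flag the first change in a field that had been constant for a long run.

    Assumes df is already sorted chronologically within each key. Column
    names and the 12-period dormancy window are illustrative assumptions.
    """
    hits = []
    for k, group in df.groupby(key, sort=False):
        values = group[field].to_numpy()
        run = 0
        for i in range(1, len(values)):
            if values[i] == values[i - 1]:
                run += 1
                continue
            if run >= min_dormant_periods:
                hits.append({key: k, "row": group.index[i],
                             "old_value": values[i - 1], "new_value": values[i]})
            run = 0
    return pd.DataFrame(hits)
```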

Evolutionary Prototyping: Blueprint to create a game changer, develop it incrementally, and rely heavily on end-user feedback. This reduces the time and cost to get something to market. We use sprints to deliver the most important features first. These go to family and friends for review. The client decides the threshold that must be crossed before public release.

Arbitrage of Time: Everything competes with everything for a minute of the user’s mindshare. Most users don’t go to more than 4-5 websites each day, usually Google, a social media site, a news site, and maybe Reddit. Anything else needs to be easy to read and quick to digest. There is no time to analyze a graph or learn how to navigate a website. Value has to be story-based and easy to understand for very busy users.

Build vs. Buy: Building is costly to maintain and not always the client’s core competency. But buying means all feedback to the vendor benefits your competitors, and it means vendor lock-in. We believe in a hybrid approach: build a motherboard to handle the data communication and “plug in” vendors on top. Each vendor product needs a data layer that builders can access directly, so they can manage data flows between vendor solutions.
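
One way to picture the motherboard approach, sketched with hypothetical names (this illustrates the pattern, not any specific vendor's interface):
```python
from typing import Iterable, Protocol

class VendorDataLayer(Protocol):
    """The data layer each vendor plug-in exposes to the client-owned motherboard.

    Hypothetical interface: a real vendor product has its own API, which a
    thin adapter would wrap to satisfy this protocol.
    """
    name: str

    def export_records(self) -> Iterable[dict]: ...
    def import_records(self, records: Iterable[dict]) -> None: ...

class Motherboard:
    """Client-owned hub that registers vendor plug-ins and manages data flows between them."""

    def __init__(self) -> None:
        self._plugins: dict[str, VendorDataLayer] = {}

    def register(self, plugin: VendorDataLayer) -> None:
        self._plugins[plugin.name] = plugin

    def flow(self, source: str, target: str) -> None:
        # Data moves through the client's layer, so switching vendors later
        # means writing one new adapter rather than rebuilding the whole stack.
        records = self._plugins[source].export_records()
        self._plugins[target].import_records(records)
```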

Peer-to-Peer: Central intermediaries sell trust. We believe this trust is eroding as data is sold, hacked or used to reinforce biases. We believe Web 3.0 will introduce a “Napster-like” interface, using blockchain to sell goods and services. Each of us will have a unit in our home with all our social media and purchasing data controlled at the individual level.


 
Server Infrastructure
  • 64 core, 4-CPU servers
  • 512 GB RAM
  • RAID 10 SSD
  • Level3 Datacenter Colo
  • 10 Virtual Machines
Custom Tools
  • Database Engine
  • Webscraper
  • XPATH Engine
  • Classifier

David Gossett (LinkedIn) (Twitter)
Data Engineer

David is an algorithm builder. He believes data has a story to tell, if we apply the right models. His specialty is unstructured data and previously taught a computer to read resumes and decide which candidates should be interviewed for each position. He cut his teeth in Big Four accounting, building a sales force automation system that managed $750MM in new revenue. He also spent time building trading desk fundamentals and arbitrage tools.

David has a unique ability to work with both I.T. and business leadership, creating blueprints for each project after intense scoping and expectation setting. This architectural process virtually eliminates scope creep and projects going over budget. He currently specializes in machine learning, and few can match his feature extraction skills. He has been doing data curation for well over a decade and knows how to prepare raw data for machine models.

Marc Ravensbergen
Software Engineer

Marc is an expert in Java, Scala, Python, C, Vala and frameworks including Tomcat, JSP, Solr, Lucene, GWT, jQuery, Knockout JS and many others. He has built custom search engines, database engines, webscrapers, data importers and real-time 3D visualizations. With fully custom source code at his fingertips, he can extend or tune these platforms for any client project. He specializes in maximizing hardware processing performance, often running parallel data analysis across 40+ threads with all calculations processed in RAM.

Marc has written tools that recently analyzed 100 million oil and gas records, and his web scraper has pulled in well over 100 million data records from the internet. He prefers to write clean, well-designed code that is easy to maintain and reuse in future projects.

Gulhan Sumer (LinkedIn)
Data Management Office Consultant

Gulhan is an accomplished technology executive with more than twenty-five years of experience leading IT operations, programs, and projects, and a passion for serving the customer. Gulhan has had the privilege of serving as a Chief Information Officer (CIO) and IT innovation leader within rapidly evolving organizations.

He has a track record of effectively translating business strategy into specific IT objectives and the creativity to make the conceptual concrete, relevant and impactful. He has led teams serving public, private and startup companies within the Financial Services, Consumer Lending, Oil and Gas, Retail, and Consulting Services industries.

Embracing disruption, Gulhan’s passion is exploring new ideas, products and business models as refined through an iterative process of build, test, and learn.

Pete McClintock (LinkedIn)
Data Scientist

Pete is an oil and gas data guru. He spent 11 years at IHS, the leader in energy data, and in that time developed an expert-level understanding of oil and gas data. Of the thousands of data elements that make up a wellbore, Pete knows the definition and usage of every single field. Pete has made all of Infornautics' success in oil and gas possible. His subject matter expertise, combined with predictive algorithms and machine learning, has taken energy analytics to a whole new level.

Pete is highly analytical, with the ability to assess, evaluate and recommend solutions to improve performance quickly. He can draw conclusions, themes and trends from data analysis, communicate results and influence decisions at the highest levels.

 
Client Relationships
Increased Productivity
Sales Intelligence
Service Quality
Client Wealth
Business Intelligence
Stock Fraud
War for Talent
Job Descriptions
Stock Trades
Contextual Search
Customer Absorption
Place to Space
Information Mavens