
Governance, Risk and Compliance–Semantic Computer Systems Development

What if something is unknowable?

By John R. Coyne, Semantic Systems Architect

An old adage says, "It's not what you don't know that will hurt you, it's what you know that isn't so." Essentially, the difference between dealing with complexity and dealing with complication is the difference between the unknowable and the knowable.

Typically, in old-school computer systems development, modeling comprises linear processes of formal reductionism: individual elements or components of data and process flows are modeled, with the typical decision junction switching directions or bridging old-school-style "swim lanes." As anyone familiar with the process knows, this kind of modeling can get very complicated very quickly, especially when, after months of discovery, one encounters the "yeah, but" anomaly in the equation that has been set up. Part of this has to do with the inherent non-linearity of actual operations in the real world.

In the days of predictable outcomes, when simple behavior models met simple modifications as simple changes took place, our attempts at orderly discovery of workflows were easy. These models usually operated in a single framework or context of activity: the factory floor, the accounts department, the typing pool, etc. The keyword, of course, is "simple." But advances in technology, increased transaction speeds, multi-dimensional interests and Web-scale interactions have made single-framework models and Business Process Modeling Notation (BPMN) tools not only redundant, but inappropriate for dealing with complexity. BPMN deals with reductionism and is therefore perfectly suited to defining complicated processes; in other words, the knowable. But it starts with the premise that something is knowable. The hubris with which systems are addressed today says, "If I can know the state between A and B, and then B to C and C to D, then I can trace functions A to N, map them, and the system becomes knowable, definable and hence controllable."

Usually, today's systems deal with single frameworks or contexts of operations. However, like life, business throws the occasional curve, and that curve usually comes from a framework not previously considered. These curveballs are, for most businesses, equated to the unknowable, and the unknowable equates to risk. The appetite for risk is usually a factor of "known risk," but it is the unknown risks, or "what you know that isn't so," that cause the most damage, as can be seen from the recent systemic collapse in financial institutions, which caused an avalanche of unintended consequences resulting not just in financial problems, but in social upheaval, personal catastrophe and even sovereign collapse.

The linear-approach trap raises the question of which approach helps detect the unknown risk, along with the proverbial "what you know that isn't so." After 40 or so years of continuous research and development in systems design and programming tools in the artificial intelligence arena, a level of maturity has evolved that facilitates the development of systems that deal with complexity. As a result, we can now address complex (unknowable) systems, not just complicated ones. One outcome has been the separation of the relationships between objects and concepts from the flow of activity between and across them.

No "if, then, else" statement required

An example from the financial services industry illustrates the simplicity of the concept in the context of the seller and buyer: a mortgage (an object) requires (a relationship) top credit (another object or concept). There is no "if, then, else" statement required. The question of whether the goal of obtaining a mortgage is to be met is dropped into an inference engine that determines the goal and the requirements for its achievement. It discovers the dynamic activities that go into achieving the goal should the "top credit" requirement be met, or stops the activities should the goal not be met.

Now add in the complexity of regulatory controls and minority rights, and the computer systems to support the production of the paperwork. Then add the various underwriting and risk models to be addressed and the mitigation of the risk by breaking the product (the mortgage) up into interest-rate derivatives, cross-border jurisdictions, etc. In this way, a simple transaction becomes a complex web of inter-framework activity. (And if you don't believe that, try ascertaining who actually owns your mortgage!)

To be sure, the world is more complicated. Change is happening at an exponential rate. But what can be done? Start by trying something different for a change.

Looking at Governance, Risk and Compliance (GRC) and using the idea of a simple concept (object)/relationship/concept model, we can begin by modeling topics of governance (risk, risk appetite, policies) and external regulations (compliance). Initially, we can start with topics at a high level. Duty of care (topic A) is the topic we will focus on for the time being. Topic B could be policy and risk tolerance.

The regulatory and policy models are designed at a gross level. A first pass at interfacing to the sub-systems and data in the legacy environment is achieved through a service-oriented architecture (SOA) approach. This is a non-invasive and non-destructive method of creating new systems without disturbing day-to-day business. Once again using the financial services industry as an example, these legacy systems may include point solutions for anti-money laundering, suspicious activity reporting or liquidity coverage ratio requirements. The point of the model is not to replace them, but to assure that they are doing the correct systemic job.

Exposure to risks will be uncovered very quickly. In this case, topic A has two factors that do not satisfy the goal of the regulation. These become knowable, definable and fixable (at whatever layer of detail). Topic B has one missing variable, but the chain reaction moves the non-compliant nature of the problem up to the topic. Now you know that you cannot fully satisfy the "duty of care" topic (A) and cannot fully satisfy your internal policy. Not satisfying a regulatory requirement, with all its ramifications (fines, imprisonment, loss of public trust), may well be more important than not meeting only one trace line in your governance policy. Alternatively, they may be related (more on this later). But now you know what you have to do. As the model increases in complexity, it will expose more gaps, but as these gaps emerge they will, of course, become knowable and therefore fixable.
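
To make this concrete, here is a minimal, hedged sketch in Python; every name and fact is invented for illustration, and no particular semantic toolset is assumed. The model holds only concept/relationship/concept statements, and a small resolver, standing in for the inference engine, walks the "requires" relationships and reports whatever blocks the goal.

    # A hedged sketch: the model is nothing but concept / relationship / concept
    # statements; there is no "if, then, else" in the model itself.
    model = {
        ("mortgage", "requires", "top credit"),
        ("mortgage", "requires", "proof of income"),
    }

    facts = {"top credit"}   # what is known about this applicant (illustrative)

    def unmet(goal, model, facts):
        """Return the requirements of `goal` that cannot be satisfied."""
        if goal in facts:
            return []
        reqs = {o for (s, r, o) in model if s == goal and r == "requires"}
        if not reqs:
            return [goal]            # a leaf fact we simply do not have
        return [g for r in reqs for g in unmet(r, model, facts)]

    gaps = unmet("mortgage", model, facts)
    print("goal met" if not gaps else f"goal blocked by: {gaps}")
    # prints: goal blocked by: ['proof of income']

The same pattern is what surfaces the unsatisfied factors in topics A and B above: a gap, once reported, is knowable, definable and fixable.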
While this is a powerful start, it deals with only a single framework. The question is whether the same approach is viable for dealing with multiple frameworks.

Discovering relatedness and interdependency

Each framework has been modeled, and the behavior of each is well known. The name of each topic is standardized in a business, data and/or process ontology; in the case of the above example, topic A refers to duty of care. Since we are not running a process, but simply evaluating the relationships among things, we can run our models against our inference engine and discover that there is a linkage among all three frameworks.

In framework one, the duty of care may have been to apprise the buyer of all the risks related to the product being sold, mapped to a regulation dealing with consumer protection (which is fully discoverable in the model's knowledge base).

The second framework may concern stakeholder protection. In this case, the policy decision may be a risk tolerance or risk exposure relationship, such as "This is a $30 million mortgage, and it has put us over the risk coverage limit we set for the month." This is mapped to an internal policy, and also to regulations regarding the permissible acceptance or denial criteria.

The third framework is the operations and technology framework, and the duty of care here may be the protection and privacy of the data used in the decisions, and its transmittal and traversal across and between networks.

We can now determine something we did not know in the past, and might never have known until it was too late: an interrelatedness and interdependency between frameworks that is essential to both external and internal compliance.
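
Continuing the sketch, again with invented names, relatedness across frameworks can be discovered simply because the models share standardized terms from a common ontology; nothing here depends on a particular product.

    # Three framework models, each a set of (concept, relationship, concept)
    # statements.  All terms are illustrative.
    consumer_protection = {
        ("duty of care", "maps to", "consumer protection regulation"),
        ("duty of care", "requires", "disclosure of product risk"),
    }
    stakeholder_protection = {
        ("risk exposure", "maps to", "internal risk policy"),
        ("risk exposure", "constrains", "duty of care"),
    }
    operations_technology = {
        ("duty of care", "requires", "data privacy in transit"),
        ("data privacy in transit", "maps to", "security regulation"),
    }

    frameworks = {
        "consumer protection": consumer_protection,
        "stakeholder protection": stakeholder_protection,
        "operations and technology": operations_technology,
    }

    def shared_terms(frameworks):
        """Which concepts appear in more than one framework model?"""
        seen = {}
        for name, statements in frameworks.items():
            for subject, _, obj in statements:
                for term in (subject, obj):
                    seen.setdefault(term, set()).add(name)
        return {term: names for term, names in seen.items() if len(names) > 1}

    for term, names in shared_terms(frameworks).items():
        print(f"'{term}' links: {sorted(names)}")
    # 'duty of care' links all three frameworks in this example

Because the linkage falls out of the shared vocabulary rather than a traced process flow, the interdependency can be discovered even where no one thought to model it explicitly.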


Semantic Computer System Development Programming–A Primer

A Primer on Programming–The Basics, History, Design & Components for Non-Technical Business Executives

By John R. Coyne, Semantic Computing Consultant

In traditional programming and the Systems Development Lifecycle, information gathered from users to describe their needs is translated into a systems analysis, confirmed and then codified, thus producing a System Design.

Then, an architecture or framework to support the system is created.

This will include:

  • Infrastructure
  • Software
  • Choice of programming language
  • Operating system
  • Data elements
    • These are called from time to time and potentially modified

Thus, this architecture becomes the support system for the system design and all its components.
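
As a purely illustrative sketch, with every value invented for this example, such an architecture description can be thought of as a declarative inventory that the system design rests on:

    # Invented example only: the architecture as a declarative description
    # that the system design and its components sit on top of.
    architecture = {
        "infrastructure": ["application servers", "network", "backup site"],
        "software": ["relational database", "reporting package"],
        "programming_language": "COBOL",        # or Java, Python, ...
        "operating_system": "Linux",
        "data_elements": {
            "customer_id": "integer, assigned at account opening",
            "balance": "decimal, updated by posting programs",
        },
    }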

Programmers perform two fundamental functions:

  1. They express the users' needs in terms of statements of computer functions.
  2. Embedded in those computer functions are the methods that the computer will need to perform in order to execute them. These are:
    • descriptions of data to be used
    • networks to traverse
    • security protocols to use
    • infrastructure for processing

(Summarized at the most abstract level, these could be described as: Transport, Processing and Memory)

This intricate association of descriptions of 1) what the system should do, and 2) how it will do it relies on the programmer and system designer to perform their tasks with precision.
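
A small, invented Python example (it describes no real system or library) shows how the "what" and the "how" become interleaved in traditional code: the business rule is buried among access control and data retrieval.

    # Invented example: the business requirement (the "what") is entangled
    # with security and data-handling details (the "how").
    APPLICANTS = {"P-1001": {"citizenship": "confirmed"}}  # stands in for the data layer
    VALID_TOKENS = {"demo-token"}                          # stands in for a security protocol

    def approve_passport(applicant_id, token):
        if token not in VALID_TOKENS:                      # the "how": access control
            raise PermissionError("not authorised")
        record = APPLICANTS.get(applicant_id)              # the "how": data retrieval
        if record is None:
            return False
        return record["citizenship"] == "confirmed"        # the "what": the business rule

    print(approve_passport("P-1001", "demo-token"))        # True

Separating the "what" from the "how" is precisely what the semantic approach described later aims to do.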

In many cases both will rely on third-party software, the most common of which is a proprietary database.

These proprietary databases come with tools that make their use more convenient.   (That is because these databases are complex and, without the tools, the systems designers would have to have intimate knowledge of how the internals of the database systems work.)

Thus, the abstraction allows the systems builder to concentrate on what the user wants, versus what the database system needs to perform its functions.

In the early days of computing, programmers would have to make specifications of the data they needed, test the data and merge or link other data types. Now, databases come with simple tools like “SQL” that allow programmers to simply ask for the data they want. The database system does the rest.
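
For instance, here is a small, self-contained Python sketch using the standard sqlite3 module; the tables and values are invented. The programmer states what data is wanted, and the database works out how to find and link it.

    import sqlite3

    # A tiny in-memory database, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE accounts  (customer_id INTEGER, balance REAL);
        INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
        INSERT INTO accounts  VALUES (1, 25000.0), (2, 4000.0);
    """)

    # The programmer asks for the data wanted; the database decides how to
    # locate it, link the tables and return the result.
    rows = conn.execute("""
        SELECT customers.name, accounts.balance
        FROM customers JOIN accounts ON accounts.customer_id = customers.id
        WHERE accounts.balance > 10000
    """).fetchall()
    print(rows)   # [('Ada', 25000.0)]
    conn.close()

The join and the lookup strategy are the database's problem, not the programmer's.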

Programs written in programming languages are also abstractions.

 

How Computer Programming Developed

In the early days of programming, programs were written in machine language, an arcane art blending both engineering and systems knowledge. Later, assembler languages were developed as a first level of abstraction; these were known as second-generation languages. Even these languages required specialized skills. The next leap came with third-generation languages, the most common of which was COBOL (Common Business Oriented Language), developed so that people without engineering skills could program a computer.

With machine languages, no translation function was needed for the computer system to understand what the programmer wanted it to do. (With assembler languages there is a modest translation step, but they are so similar to machine language that little translation is needed.)

In third-generation languages, the concept of a “compiler” was created. The compiler takes a computer language that is easy to program in and translates it to a language the computer can use for processing the requirements. During this generation of programming, many third-party tools were developed to aid systems designers in the delivery of their systems and thus a whole industry was born.

Not surprisingly, computers became more complex and, over time, so did the systems that people wanted designed. This complexity drove systems to become almost impossible to understand in their entirety. Eventually, instead of changing them, systems designers simply appended new programs to the older systems and created what is sometimes termed “spaghetti code.”

Eventually, something had to change. Now, after years of research based on artificial intelligence techniques, new tools have emerged that enable a new generation of programming that allows the computer to determine the best resources it needs to do what is requested of it. The science in this is not important. What IS important is that now, the original process of determining what the user wants can be separated from how it gets done.

In semantic modeling, no programming takes place. Rather, a modeler interviews subject matter experts to determine what they want to happen, the best way for it to happen and the best expected results.

Semantic modeling is constructed much like an English sentence (which is one reason for the term "semantic"): there is a subject, a predicate (or relationship) and an object. Like the building of a story or report, these "sentences" are connected to one another to create a system. Also, as when creating a report, "sentences" may be used over and over again to reduce the amount of repetitive work. In semantic modeling, these sentence structures are called concepts. Concepts are the highest level of abstraction in the program's "story."

Like a sentence, the requirements of the system can be structured in near English grammar-level terms.

For instance:

“A Passport (subject) requires (predicate) citizenship (object).”

(The concept that we are dealing with could be “international travel.” This demonstrates the linkages between coding “sentences.”)

“International travel – requires – a passport.” and thus, as has been seen, “A passport – requires – citizenship.”

To expound on our grammatical analogy for programming the system, the same terms delineating a “passport” could be used for “checking into a hotel”:

“Hotel – requires – proof of identity.”

(Identity as a concept can re-use the “passport” sentence.)

“Passport – is a form of – identity.”

(Thus, the speed of development is greatly improved because of the re-usability.)
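
A hedged sketch of the same idea in Python (the triples below simply restate the sentences above): each "sentence" is a (subject, predicate, object) triple, and any concept can reuse an existing triple.

    # Each "sentence" is a (subject, predicate, object) triple.
    triples = {
        ("international travel", "requires", "passport"),
        ("passport", "requires", "citizenship"),
        ("hotel check-in", "requires", "proof of identity"),
        ("passport", "is a form of", "proof of identity"),
    }

    def requirements(subject):
        """Everything the subject directly requires."""
        return {o for (s, p, o) in triples if s == subject and p == "requires"}

    def satisfies(thing, need):
        """Does `thing` meet `need`, directly or via 'is a form of'?"""
        return thing == need or (thing, "is a form of", need) in triples

    print(requirements("international travel"))          # {'passport'}
    print(satisfies("passport", "proof of identity"))    # True: the triple is reused

Nothing in the triples says how they are to be evaluated; that is the inference engine's job, described below.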

Also like a sentence, the terms can be graphically represented as a hierarchy—much like sentence deconstruction (diagramming) we learned in high school.

Notice that the terms do not describe how such information is to be found, what order of precedence they have, or how the system is to process such statements. In this "separation of concerns," the new semantic systems use another mechanism, known as an inference engine, to process the data.

The inference engine is a logic tool that determines what is needed to accomplish the semantic concepts. The goal of the inference engine is to solve the computing requirements.
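
As a rough illustration of the idea (not of any particular inference engine), a resolver can work backwards from a goal and produce the order in which requirements must be satisfied, using nothing more than the "sentences" above.

    # Reusing the passport "sentences" from the earlier sketch.
    triples = {
        ("international travel", "requires", "passport"),
        ("passport", "requires", "citizenship"),
    }

    def plan(goal):
        """Order in which requirements must be satisfied to reach the goal."""
        steps = []
        def visit(g):
            if g in steps:
                return
            for (s, p, o) in triples:
                if s == g and p == "requires":
                    visit(o)
            steps.append(g)
        visit(goal)
        return steps

    print(plan("international travel"))
    # ['citizenship', 'passport', 'international travel']

The model states only the relationships; the ordering and resolution are worked out by the engine, which is the separation of concerns described above.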

Of course, like the database systems described above, semantic systems come with tools that allow the business user and modeler to describe what the system should be doing without needing intimate knowledge of expert systems or artificial intelligence techniques. They simply model. Like the aforementioned SQL statement, the computer takes care of what is needed to satisfy the system requirements.

Underneath all this is the usual figurative plumbing found in computer programming. There are networks to be traversed, data to be called and transformed, reports to write, and computers to process the requests. Today, all of these are well-understood services, supported by a whole industry of third-party suppliers with proprietary products, an even greater universe of engineers supporting open standards, and even free software available to do these tasks.

 
