Friday, November 18, 2011

Practitioners vs Professors

When one talks about SE as a discipline, it's good to know whether we're in an academic or an industrial setting.

Academic engineering is a pretty new thing - engineers used to be educated by apprenticeship, not by universities.

I think that in industry the SE is assigned rather specific tasks in the new product development process - in the defence setting the SE tasks are even mandated.

In academia the situation is different. What is Electronics Engineering research if not the search for new electronic technologies? Sometimes advances in physics are needed to advance the electronics, but not always - so it's application of science, not discovery.

The "hard engineering" disciplines are rooted in the exact sciences. That's not so for SE - the scientific component of the SE borders (some say lays in) social sciences domain - sore spot for academicians that thrive to belong to the Engineering part of the campus near the Physics, Chemistry etc buildings. The same challenge is faced by the Industrial Engineering practitioners and researchers.

So until we admit that SE science is inherently social, we'll be stuck. Let's be true to ourselves - SE science is the science of the social behavior of large groups of technical practitioners and managers (project teams) engaged in solving complex problems for other groups of people (stakeholders) by defining, developing, integrating and putting to use physical artefacts (systems).

Saturday, August 20, 2011

On the discussion what is Systems Engineering



In the still-active discussion of what SE is, I think we've forgotten the primary tenet of SE - the functional/structural duality.

"Systems Engineering" is a set of functions in the systems development process - encompasses different subfunctions such as requirements engineering, integration, V&V, technical coordination - see Sarah Sheards 12-role of SE for the reference. The core of functions contained inside SE is rather clear but there are "border skirmishes" in the areas of overlap with Industrial Engineering, PM and - yes - Systems Admin activities in the IT industry.

"Systems Engineer" is a structural element in the company/project organization (organizational position) that may (or may not) be assigned to perform SE functions. The company may call the position SE and may call it something else - for the person that fills the position the functionality and responsibility is what counts.

"Systems Engineer" as a person is something else - it's a matter of abilities and knowledge. David Snowman in his ASHEN model of knowledge distinguishes between various components of knowledge - Natural talents, certified Skills, gained Experience, mastery of the field Heuristics and access to the Artefacts of the professional field - reference books, special equipment (DOORS?) etc.

So the problem is one of the degree of overlap between the specific person's abilities (defined chiefly by herself), the responsibilities and authorities of the specific job position (as defined by the appropriate functions in the organization) and the field of SE, defined rather well by INCOSE or by SE degree curricula.

We as SEs must not confuse functions, structural elements and people!

Thursday, July 28, 2011

Systems Integrator's Three Worlds

Austrian philosopher Karl Popper distinguished between Three Worlds:
  • World 1 of physical objects and events, accessible to all human beings given the right perceptions or equipment
  • World 2 of mental objects and events, the inner world of a human being, accessible only to that human, and
  • World 3 of objective knowledge that comprises the products of human thought - ideas, theories, stories, myths etc.
World 3 objects are frequently encoded as World 1 objects - books, articles, drawings, paintings - commonly known as information artefacts.

I think that the difference between Systems Integrators/Engineers and Specialty Engineers is fundamental and stems from the difference in their World 3 products (as encoded in information artefacts):
  • Specialty Engineers (SpecIs) encode their mental models in executable information artefacts that allow production of physical objects by machines or skilled humans who don't have to share the engineer's mental models. Examples: computer code is used by a compiler/builder to produce a physical computer file that may be run by a computer, a mechanical drawing allows production of parts by workers/robots, an electrical diagram allows a circuit to be produced.
  • Systems Engineers'/Integrators' mental models are encoded in information artefacts (or even communicated without physical encoding, building the common understanding - the pure World 3 object) that are meant to create compatible mental models in the minds of other engineers. In that, SEs/SIs are just like architects, managers and leaders.
The line of distinction between SEs/SIs and SpecIs may move with the evolution of technology - non-executable information artefacts may become executable. One example is executable specifications/models, which once needed a skilled programmer (SpecI) to convert them to executable code and whose conversion can now, in some cases, be done by a machine.
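To make the "executable specification" idea concrete, here is a minimal sketch (entirely my own illustration - the lamp example and all names are hypothetical, not taken from any particular tool): the specification is written as data, and a few lines of generic code execute it directly, doing the job that once required a SpecI to hand-translate the spec into code.

```python
# Minimal sketch of an "executable specification" (hypothetical example).
# The spec itself is a World 3 object encoded as data; a generic interpreter,
# not a specialty programmer, turns it into behavior.

# The "specification": allowed state transitions of a simple lamp controller.
SPEC = {
    ("off", "press"): "on",
    ("on", "press"): "off",
}

def run(spec, initial_state, events):
    """Execute the specification directly against a sequence of events."""
    state = initial_state
    for event in events:
        if (state, event) not in spec:
            raise ValueError(f"Spec violation: no transition for {(state, event)}")
        state = spec[(state, event)]
    return state

if __name__ == "__main__":
    print(run(SPEC, "off", ["press", "press", "press"]))  # -> "on"
```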

It's common for engineers to perform both the SE/SI and the SpecI roles, so they have to be educated in these fairly distinct skill sets.

Summarizing: SpecIs produce executable information artefacts meant to be used in production; SEs/SIs produce World 3 objects meant to communicate mental models between engineers and other professionals, without directly leading to production.

Saturday, June 18, 2011

Test to Learn, Test to Debug, Test to Evaluate

A lot of discussion is going on about the differences between Systems Integration, Systems Testing, Development Testing, Operational Testing etc. A lot of confusion... totally unnecessary in my opinion.

The problem with the above definitions is that they tie the types of testing to methods, responsibilities etc., but not to purpose.

I'd like to propose a more natural (for me) distinction - Test to Learn (or to Discover), Test to Debug (or to Solve Problems) and Test to Evaluate (or to Judge).

Learning is the process of acquiring knowledge that was not previously in the possession of the learner. Testing that is focused on learning is always explorative, with a path that may wind, twist and loop. More than that, Test to Learn will never end - there will always be missing knowledge.

Debugging is the process of finding anomalies, investigating the causes and fixing the system. Integration is essentially System Debugging.

Evaluation is judging the system against some criteria in order to accept or reject some proposition. Testing is an activity that may be a part (but not the whole) of either Debugging or Evaluation. (System Testing is essentially System Evaluation.)

When managers say "Start Testing. When you find severe problems, investigate them and change the system. When the problems are not severe - proceed", they are asking to start in Evaluation Mode and change to Debugging Mode midstream. Start as ST and change to SI if needed. No wonder it's difficult for me to explain the difference between SI and ST. Managers frequently perceive the pair-wise encounters of some sub-systems - done as part of sub-system testing combined with interface checking - as sufficient SI activity and try to drive the project into ST mode without concluding all SI activities such as E2E thread testing. They afterwards wonder why the ST phase takes as long as the SI+ST phase that was proposed in the initial project plan and rejected as "too long ST".

I'd be delighted to stop talking about "System Testing" or "System Integration" and switch to "System Debugging" (Subsystem Debugging, Interface Debugging, Whole System Debugging) and "System Evaluation" (or "System V&V").

Learning, Debugging and Evaluation are not just Testing - there is a lot more going on: discussions, arguments, reading, writing, thinking. But where Testing is concerned, distinguishing the types of Testing by purpose has its merits and for me is more useful.

Thursday, June 16, 2011

Fourth Secret of Integration - Intelligent Evolution

Some circles still dispute the relative merits of the Theory of Evolution and the belief in Intelligent Design. Without entering the fray myself, I want to ask: Intelligent? Design? The proponents of ID claim that the complex systems we observe in nature are so beautiful that somebody must have designed them. The Book doesn't support this claim - it specifically states that God created the world, not that He designed it. So I suppose the ID people just grant us engineers too much credit - we tend to give up design and turn to evolution when faced with complexity, not the other way around.

Paul Graham in his essays reveals the secret of creative programming (and writing, and building start-ups) - build something that works and then iterate until it's good. The refactoring patterns of Agile Programming suggest the same. Fast Prototyping - once again. We know that evolution is better than design. That's exactly what Integration is all about - evolving the system until it's fit for the Judgement Day - sorry - the System Testing.

So the Fourth Secret of Integration may be formulated as "Integration is an Intelligent Evolution of (hopefully) Intelligently Designed Systems until they're fit to be Judged".

Tuesday, June 7, 2011

Third Secret of Systems Integration - There is Always Lack of Knowledge

One more illusion of Systems Engineering is the assumption that the system will behave as required and designed. This illusion is not mere hubris - in the software world it is possible (at least theoretically) to design the system to do exactly what is required, as long as logic is the only concern. But when the real world starts to assert its rules of physics, chemistry and, more than that, biology, psychology etc., the lack of knowledge of how the system will really behave becomes profound.

Sure, it is possible to know enough for any purpose, as long as we're aware that the lack of complete knowledge harbors the eternal promise of surprises. When the surprise hits, it becomes a really wicked problem. The only solution to this eternal lack of knowledge is eternal learning - and Systems Integrators have to excel in learning just for their professional survival.

Systems Integrators always start with a lack of knowledge - all things being equal, they are usually not the ones who designed the vile creature to be integrated. More than that - there are always parts at the bottom of the hierarchy (the Second Secret notwithstanding) that are not created by us but bought from somebody who may not reveal the whole set of their properties. Or even natural things, for which the lack of knowledge is inherent.

The lack of knowledge is not an excuse for Systems Integrators. They have to tell the system's story (the First Secret) and to deal with the networked nature of the system (the Second Secret). So the Third Secret of Systems Integration may be formulated as "There is always missing knowledge that the SI has to discover in order to succeed".


Second Secret of Systems Integration - It's All about Networks

The (infamous) Vee-model of the Systems Development Process tells us that Systems Integration is all about hierarchies - you divide your system into smaller and smaller pieces until you're able to make or buy the components, and then you integrate them into larger and larger pieces until you've got the entire system. It's an illusion.

What you really see in Systems Integration is a network of things interacting with each other in various ways depending on the external (or internal) excitations. You see matter, energy and data flow back and forth. And frequently you may discern parts of the network that have many more interactions inside than outside those parts - so you can (for the time being) abstract such a part as being an entity on a higher level of hierarchy than the nodes of the network.
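As a toy illustration of "many more interactions inside than outside" (my own sketch, not an established SE algorithm - the element names are invented), one can count, for a candidate grouping, the interactions that stay inside it versus those that cross its boundary, and treat the grouping as a higher-level element only while that ratio stays high:

```python
# Toy sketch: when does a part of the network deserve to be abstracted as a
# higher-level element? Compare interactions inside the candidate group with
# interactions crossing its boundary. (Illustrative example only.)

interactions = [  # pairs of elements exchanging matter, energy or data
    ("sensor", "filter"), ("filter", "tracker"), ("tracker", "sensor"),
    ("tracker", "display"), ("display", "operator"),
]

def cohesion(group, interactions):
    """Return (internal, boundary-crossing) interaction counts for a group."""
    internal = sum(1 for a, b in interactions if a in group and b in group)
    external = sum(1 for a, b in interactions if (a in group) != (b in group))
    return internal, external

if __name__ == "__main__":
    candidate = {"sensor", "filter", "tracker"}
    inside, outside = cohesion(candidate, interactions)
    print(inside, outside)  # 3 internal vs 1 crossing: a defensible "subsystem"
```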

When you design the system you're free to abstract parts of it, define requirements for these parts, interfaces and input-output relations. That's convenient. In Integration you are frequently deprived of this luxury - everything looks like a network that doesn't organize itself naturally into the neat hierarchy of the design. Most surprises come in those situations where the neat hierarchy shows its (unexpected) network-ness.

So the Second Secret of Integration may be formulated as "Everything is a Network - Hierarchy is an Illusion".

Monday, June 6, 2011

Integrate to Use vs Integrate to Sell

During conversations with the Systems Integration and Test Discussion Group on LinkedIn, on the INCOSE discussion forums and during the meetings of the SI Workgroup of INCOSE_IL, there was a fair share of hair-splitting about what is Integration Test and what is System Test.

The usual gist is that IT is done to "groom the system" until it works, while ST is designed to prove to the customer that the system works as required.

Something bothered me until I realized that I felt there was a little bit of hypocrisy in the distinction between IT for internal use and ST for external show - me having been on the customer's side for the whole length of my career. No more. There is no hypocrisy - just plain business. I do my IT and then do the ST to sell you my product. You do your IT to integrate my product into your product and then do ST to further sell your product to the next guy, etc. etc.

Just like VAT - the last one pays the bill, and that is the ultimate customer - the user, who has only IT to do and then uses the system for the length of its lifecycle. There is no ST here because there are no requirements to falsify.

So it's crucial to understand that there is a kind of "Final Integrator" - the User or someone working on his/her behalf - and that the type of Systems Integration done by the Final Integrator is very distinct from the one done by any kind of Systems Developer. I propose to name these two types of integration System Integration to Sell (or Ship) and System Integration to Use, and to discuss them separately.

Monday, May 30, 2011

Fable of the days of yore long forgotten...

Once upon a time giants walked the Earth. They built spacecraft, aircraft, big ships, submarines, air defense systems. Gold was not a problem - any time the Russian Bear growled in the woods, the Wise Councils opened their purses.

And then the Russian Bear threw in the towel and the gold spring ran dry (almost). Meanwhile the mean gnomes overran the world with their fancy software gadgets, and the Eastern Dragons took over the commodities market. And then the Internet came - the wild beast that nobody could tame but everybody could ride.

And the giants retreated into the cave called NCOSE and then INCOSE. Meanwhile their legacy was forgotten and replaced by endless scrolls of SE standards and procedures.

Many young knights, peasants and serfs tried to seek their wisdom, but they could not understand the language the giants spoke, so they invented many languages of their own and the story of the Tower of Babel repeated itself once again. But the giants lay silently in their cave, speaking only once in a while, and their voice grew feebler and feebler until nothing was heard anymore.

I hope that I overreacted in the sadness of this fable - but this is the impression young SEs have of the SEs of old: there is a lot of wisdom, but spoken in an obscure language.

Am I right, am I wrong, or am I just dreaming?

First secret of Systems Integration - Storytelling

It's possible that one of the major integration techniques with a psychological twist will be the ability to tell stories. Humans are wired for storytelling, are easily immersed in telling stories and enjoy listening to them. Storytelling may convey a point that is impossible to convey in any other way with the same effectiveness.

How does this connect with integration? The ability to explain how the system reacts to some event is the hallmark of the Integration Engineer - one has to choose the important information from the wealth of technical documentation, integrate a mental model and communicate it to the listeners in a simple way that hides the underlying complexity - in other words, to be able to tell a compelling story.

Now - in order to be a genuine integration technique, the story has to use only the available lower-level design information to describe the higher-level behavior, without the right to use higher-level information and risk tautologies.

The ability to tell the story of the system's reaction to a single event does not guarantee that the whole system story will be told, but the trace of that specific event will be validated. Any missing information that prevents the story from flowing may be tagged as a possible design anomaly to be resolved, until the integrators are able to tell the story, the whole story and nothing but the story, so help them God. In this way storytelling can be used as a design validation technique.

When the results of the tests on the hierarchy level of interest are combined with the design information of the lower level as the building blocks for the story, effective integration testing techniques may emerge.

And it's much more fun to tell and listen to stories than to read, write and execute endless test scripts, so no harm will be done if storytelling techniques are added to the toolkit of the Integration Engineer.

So the First Secret of Integration may be formulated as "Integration is the ability to tell a compelling, complete and accurate story about how the system works when used in a certain way".

Friday, May 27, 2011

This is Major Tom to Ground Control

The statistics of this blog tell me that there are other people looking at its postings. Don't be shy - talk back! I need feedback on my ramblings, and I don't want to feel like Major Tom from David Bowie's Space Oddity - floating up there in a tin can, talking to myself over dead circuits.

The Cool Art of System Debugging

Once upon a time I heard an engineer recall sitting late in the evening trying to widen a hole in one part of some prototype to allow a shaft to pass through - with a file (so low-tech!). When asked by a manager what he was doing, he proudly replied: "I'm debugging!"

Given that the word "bug" was first used in the context of aircraft engine problems, long before the famous moth was found in the innards of a computer, it struck me that it is legitimate to say "I'm debugging the mechanical design" or "I'm debugging the electronic circuit".

And then I realized that this is at the core of Systems Integration - "Debugging the System". To seek problems as early as possible and solve them as early as possible - that's the rub. Now all the wealth of software debugging practices may be applied to system debugging - with some adjustments - going beyond the processes into the realm of software debugging psychology, frustrations and other human manifestations of the wonderful world of debugging.
What does it mean? I don't know, but I'll try to find out...


Monday, May 23, 2011

What are requirements - anyway? Virtual Reality

Systems Engineering is infatuated with requirements - requirements elicitation, requirements analysis, requirements management, requirements flowdown, requirements verification etc. etc. etc... But what are requirements anyway?

According to Wikipedia, "In engineering, a requirement is a singular documented need of what a particular product or service should be or perform. It is most commonly used in a formal sense in systems engineering, software engineering, or enterprise engineering. It is a statement that identifies a necessary attribute, capability, characteristic, or quality of a system in order for it to have value and utility to a user". So that's it - a statement. Simply a statement. The whole complexity of systems, their use, structure and behavior is supposed to be based on a list of statements. Sounds rather counterintuitive, doesn't it?

The fact is that requirements are virtual reality. The only real things are how the system is built and how the system interacts with its environment. One may say that a requirement may be worded as "The system shall interact with the user in such and such a way" or "The system shall be built of such and such subsystems interacting in such and such a way", but it will not describe the system itself - only its relations with other entities, relations that depend both on the system (under control) and on the other entities (out of control).

Requirements are just like "Now" - the infinitesimal point wedged between the infinite "Past" of the system's internal structure and the infinite "Future" of all possible uses and interactions during the system's lifecycle. But just like "Now", they have meaning for us humans - just as we're bound to live in the eternal "Now", in SE we're bound to live in the world of requirements, because they are the only thing that may be "verified" - falsified or demonstrated (never proved) - and so give us an illusion of control, because the number of requirements is always finite.

The system may be "validated" against possible uses - but there is an infinite number of those interactions. The system may be "integrated" from its parts but there is (abstractions aside) an infinite number of internal interactions.

Conclusion - we're doomed to live in the virtual reality of requirements just as we're bound to live in the eternal "Now" - remembering the past, dreaming about the future, but still stuck in the "Now"!




Saturday, May 21, 2011

Complexization

Integration begets complexity - this may be one of the rules of the (still wanting) Science of Systems Integration. More than that - one can link the "complicatedness" of the system's structure to the complexity of its behavior.

When the number of (stateful) system elements grows linearly, the number of compound system states goes up exponentially. The only thing that may constrain this "state explosion" is the application of constraints on the system elements and their interactions - in other words, the design of a system architecture that is "not too dense": possible interactions between the elements are limited by "interface control documents", and the complexity of the behavior of the elements is restricted by careful specification of allowed behavior. And so it's done - in the very limited field of safety-critical devices such as medical equipment.
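A back-of-the-envelope sketch of the "state explosion" (the numbers below are arbitrary, chosen only to show the trend): with k states per element, n elements give k^n compound states, so adding elements linearly multiplies the state space, and it is only the architectural constraints mentioned above that keep the reachable portion manageable.

```python
# Back-of-the-envelope illustration of the "state explosion".
# Numbers are arbitrary; the point is the trend, not the values.

STATES_PER_ELEMENT = 4  # assume each stateful element has 4 states

for n_elements in (2, 4, 8, 16):
    compound_states = STATES_PER_ELEMENT ** n_elements
    print(f"{n_elements:2d} elements -> {compound_states:,} compound states")

# Output:
#  2 elements -> 16 compound states
#  4 elements -> 256 compound states
#  8 elements -> 65,536 compound states
# 16 elements -> 4,294,967,296 compound states
#
# Interface control documents and restrictions on allowed behavior act as
# constraints that prune this space; without them the integrator faces it all.
```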

Other systems don't benefit from such rigorous design discipline, leaving the complexity issues unsolved until integration. One of the causes (in my opinion) of such "lack of discipline" is over-reliance on requirements. There is simply no way to specify restrictions on the allowed behavior on the one hand and constrain the system structure on the other hand using a finite list of (hopefully) verifiable (or better still, falsifiable) statements. The main symptom of this inherent limitation of the requirements specification technique is the phenomenon of "requirements explosion", which is very similar to the "state explosion": the number of requirements grows exponentially while the predicted (rather than actual, as in the state explosion) system complicatedness (measured by number of elements and interfaces) grows linearly. Employing standards and regulations to constrain the system design only delays the realization that requirements are exploding - citing a single standard may bring in hundreds of requirements while leaving the feeling that only a handful of requirements were added.

The problem is worse in the "systems-of-systems" world. The linkage of two systems may be specified with a limited set of requirements, leaving a feeling of simplicity while the (already present) complexity of the constituent systems is overlooked. The same holds when humans are allowed to interact with the system in complicated and often unspecified ways - there is no way to impose restrictions on human behavior using the requirements technique.

The (possibly) better way is to analyse the behavior patterns and system structures directly using modelling and simulation (M&S) instead of exploding the list of requirements. It's still unclear how to use M&S in contracts - requirements are well adapted to negotiation and litigation, while M&S calls for collaboration and risk sharing - but the future is out there.

So - simple requirements may lead to a complicated system and then to complex behavior. Complicated requirements will certainly do so, but in that case the "requirements explosion" may be detected and dealt with. Complicated "hidden requirements", masked by simple references, will do so without ample warning, risking problems in integration.

Conclusion - System Integrators have to look for complicated sets of requirements in order to detect future integration problems early - but most of all they have to look for the complicated requirements hidden behind simple references. Requirements are the best thing - for the lawyers!

Tuesday, May 10, 2011

Complex Adaptive Integrators

System integration is one of the important stages of the life cycle of any man-made system. This is the phase where the realized ingredients combine to create a complete system. You cannot avoid integration unless you give up on getting to the finished product. During the project, integration almost always starts late and finishes very late, causing much delay in the project schedule. Always full of surprises, the integration process is forever threatening to slip into chaos. The situation often requires the involvement of a "hero" to get the project out of the quagmire.
The integration process is not just a series of tests that progress from the parts, through intermediate system configurations, up to the final product that is offered to the client. Those who see the product as the ultimate result of integration will be surprised to discover that after they've seemingly finished the job, adding the users or operators opens up a whole new world of problems. And after the phase of human-machine integration is completed, unexpected surprises lurk during the system's introduction into the mix of the organization's business.
There is no one right way to do integration: processes differ from one organization to another, as do the organizational structures and engineering approaches used to perform them. Integration is more than the core work of building integration environments, performing testing, reporting results and solving problems - it mixes in complex logistics, management of people and work, management of knowledge and information, and even a never-ending struggle with project management over resources.
Integration is also a complex social human process, full of conflicts, including arguments about "who must fix" and "who must pay for the fixes" - these "human issues" are the source of many problems and challenges.
The integration process is a hotbed of adaptation where the participants continuously adapt to each other and to the ebb and flow of the project, to changing system configurations and even to shifts in the basic assumptions of the system design. The variety of engineering and management cultures, people's skills and characters, on the backdrop of mutual interdependence, makes the integration process exhibit the characteristics of a complex adaptive system (CAS), leading to unexpected behavior and large-scale crises but also to the promise of greater flexibility and robustness.
The research and practice of systems engineering offer a variety of solutions to the problems of integration - model-based systems engineering, incremental development, etc. But the published papers consistently ignore the human side of the process. It is possible that to deal with a complex adaptive process we need a complex adaptive Integrator: one who uses diverse information sources and different types of testing, adapts modeling and reporting styles to the needs of the current situation, creates "social networks" of developers and users, controls the flow of information between the participants in the project and, most importantly, adapts to changing situations, changes attitudes, goes up and down the system hierarchy, and never, but never, ceases to learn.


Users as top-level integrators

In Systems Integration it is always important where you stand and where you look.
Systems are built from Sub-Systems arranged in networks of sibling systems. So when you're interested in some system you may look sideways to the sibling systems, up the hierarchy to the Super-Systems and down the hierarchy to the Sub-Systems. When you're responsible for the development of some system, you're interested in integrating the sub-systems to create your system, in looking good toward the super-system, and in collaborating with the sibling systems without being hurt and without hurting others.
But there is one place where there is nothing to look up to, and that's the place of the ultimate user of the system. The user is interested in getting value from her technologies and in looking good toward her peers, bosses and other people. The integration here is social, not technological as at the lower levels of integration, and it's always networked, not hierarchical.
Three levels of integration thus may be discerned:
1. Product integration of the technological artefacts
2. System integration of the products and their users
3. Business/Social/Operational integration of the technology-wielding users in their social networks
Every level has its own peculiarities and its own challenges - but almost all SE literature is concerned with product-level integration, leaving the two other levels to the users - not so fair!