
1 Introduction and Background

There are several large-scale training system acquisition efforts currently active in the Department of Defense community. The first of these is the Army's Priority for the Synthetic Training Environment (STE) [3], which is putting into practice the requirements from the Army Learning Model 2015 [1]. The second is the Learning Continuum And Performance Assessment (LCAPA) [4], part of the Navy's Sailor 2025 initiative [8, 10]. Each of these efforts represents a large-scale acquisition of modern training technologies with the intention of boosting readiness, based on competencies. The Army demands of its Soldiers a broad set of competencies (e.g., many different ground operations maneuvers, tasks, and procedures), while the Navy requires a deep set of competencies (e.g., disassembly and reassembly of a reactor), but both rely upon competency assessment to determine unit and task readiness.

There is a need for the representation of competencies to be transferable at all levels: an Army Soldier moving between Units, an Army Squad moving between Sections, an Army Battalion moving between Regiments, a Navy Destroyer being assigned to a new Carrier Group, a 1st Class sonarman being assigned to a new ship. The receiving unit should know about the abilities of the incoming individual or group to the greatest extent possible to ensure continuity of operations. Transfer between services should be viewed similarly. Further, those who exit service can benefit from a model of their abilities being transferred to potential employers. For those in reserve units, it is helpful to have a model of their abilities available for military duty (e.g., a Reserve Soldier who owns a welding business should be considered for relevant positions if recalled, alongside their official expertise and training).

This type of model is useful not only for Warfighters switching between groups, but also for the recommendation of training content at the various levels. As examples, consider an individual who is lacking a single skill required for the next promotion, a unit lacking an individual with specific training (e.g., a heavy gunner or cryptographer), or the emergence of a newly required skill across an operational area (e.g., IED detection/disposal during the war in Iraq). These deficiencies can be targeted if relevant recommender systems can highlight the weaknesses to decision makers.

Each area may assess competency in a different manner. As an example, the 75th Ranger Regiment and Delta Force should be able to differ in their assessment of what it takes to be "jump qualified"; these standards are likely higher than the standards for a typical Airborne Soldier. At the same time, the assessment logic used to determine jump qualification can and should be shareable between the two organizations. Additionally, what it means to be "jump qualified" should differ based on whether it is an individual assessment ("did they pass jump school?") or an assessment of the unit ("did each member pass jump school?" and "does the unit have a jumpmaster?"), and may vary based on the organization ("have they performed at least one practice jump together?").

These areas of competency serve as the basis for the recommendation of learning resources. As such, they require a method and standard for interchange, such that multiple recommendation engines are available to service the needs of a learner, or that one recommendation engine can service multiple communities of learners. This paper presents suggestions for initial technologies that identify opportunities for standards and interchange, in association with the IEEE Adaptive Instructional Systems (AIS) group.

2 Traditional Educational Model

Generally speaking, the traditional educational model is relatively free of the problems of competency modeling and interchange that the rest of this paper discusses. These problems are relatively new, brought on by new technologies. It is worth reflecting on the traditional educational model, why competency modeling and interchange have not historically been particularly relevant to it, and what makes them relevant today.

First, there is primary education (K-12 in the United States). Students are educated according to a relatively internally consistent model which is somewhat resistant to change by the nature of its throughput. As a specific example, consider a school which must teach single-digit addition (SDA), multi-digit addition (MDA), single-digit subtraction (SDS), and multi-digit subtraction (MDS). Two alternative curricula, SDA->MDA->SDS->MDS (Curr1) and SDA->SDS->MDA->MDS (Curr2), may be equally viable; administrators choose one or the other.

Provided that students learn in the order prescribed, there is little problem. Transfer students from an alternative curriculum present an issue, but they represent less than 5% of the total volume and can be individually attended to – especially if they transfer at the beginning or end of an instructional block. Absenteeism presents a similar problem with a similar solution. However, a change from Curr1 to Curr2 midstream is disastrous – forcing the entire cohort of students to learn MDS without SDS.
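
To make the prerequisite problem concrete, the following minimal sketch (in Python) encodes the two hypothetical curricula and checks whether a cohort switched mid-stream is missing prerequisites; the prerequisite map and function names are illustrative assumptions, not part of any published curriculum.

  # Illustrative sketch: two orderings of the same four skills, and a check for
  # whether a transfer or mid-stream curriculum change violates prerequisites.
  # The prerequisite map below is a hypothetical assumption for illustration.
  PREREQS = {
      "MDA": {"SDA"},          # multi-digit addition builds on single-digit addition
      "SDS": {"SDA"},          # single-digit subtraction builds on single-digit addition
      "MDS": {"SDS", "MDA"},   # multi-digit subtraction builds on both
  }

  CURR1 = ["SDA", "MDA", "SDS", "MDS"]
  CURR2 = ["SDA", "SDS", "MDA", "MDS"]

  def missing_prereqs(completed, next_topic):
      """Return the prerequisites of next_topic that the student has not yet completed."""
      return PREREQS.get(next_topic, set()) - set(completed)

  # A cohort two blocks into Curr1 that is switched onto Curr2's remaining schedule:
  completed = CURR1[:2]                     # ["SDA", "MDA"]
  print(missing_prereqs(completed, "MDS"))  # {'SDS'} -- the gap described above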

At the Curr1/Curr2 level, the absent and transfer students represent a relatively small portion of students and a relatively minimal problem. The same "spot fix" solution is applied at a larger level when students change school zones (region, state, country, etc.) – students must be migrated among grades or among remedial/basic/advanced levels of the same content. The basic solution to that problem is the implementation of nationwide (or continent-wide, as is the case in the EU) standards of knowledge per year.

When the primary education system is the State education system, instructed primarily with textbooks (or digital equivalents), the solutions of spot-fixing, advancing, or holding back are possible; this is appropriate for "Know What" knowledge [5]. The new world, however, demands knowledge workers – workers who are primarily valued for their ability to interact with concepts and formulate solutions; "Know How" knowledge [5]. This forces individuals to take charge of their own education, leaves educational systems inheriting piecework-educated students, and leaves employers with little proof that the employee can perform the job.

2.1 Requirements for Competencies and Recommenders

The knowledge workers of today have a mishmash of educational experiences which is poorly represented in a resume. The educational system and employer both look at a resume which says things like "Computer Science degree, 5 years' experience on networking projects, 3 years as a hobbyist website developer" and have difficulty discerning whether this person can perform the "make our website have a database backend" task or needs the "Databases 101" class. Naturally, both of them can ask – but this requires an accurate self-assessment of the knowledge (or lack of it) on the part of the student/worker, and an accurate interpretation of the answer by the employer. This knowledge requirement on the part of the employer prevents moving the task to a cheaper job category. Information on what the individual knows and what the individual can perform is required in order for the individual, the employer, and the educational system to benefit.

3 Military Educational Model

Similar to the industrial educational model described above, the military educational model has the ability to operate as a "top down" structure. Organizations such as the Training and Doctrine Command (TRADOC) and Naval Education and Training Command (NETC) can dictate curriculum to the subordinate schoolhouses. The resultant educational programs, however, can be very different from each other, reflecting the different services and missions. As an example, the Navy traditionally trains for deep knowledge using an "A school"/"C school" model thoroughly complemented by time spent at sea training under more senior personnel; the 3rd Class Petty Officer nearly always has frequent interactions with a 1st Class Petty Officer. The Army has a model for training broad knowledge through training on individual skills (Land Navigation, Marksmanship, etc.), tactical drills (Break Contact, Clear Room, etc.), and leadership training. While both organizations took a top-down approach to dictating what information and skills the individual needed to have, the resultant models are remarkably dissimilar.

Somewhat unlike the K-12 industrial schoolhouse model, however, Warfighters (Soldiers, Sailors, Airmen, Reservists, etc.) encounter the real world and gain an incredibly diverse set of experiences with mentors before being cycled back into formal education. Further complicating matters, an individual switches units, deployments, and groups with relative frequency throughout a typical military career for a variety of reasons, not the least of which is the needs of the nation. This problem is magnified for Reservist Warfighters, who make up approximately 20% of the total Warfighter population and do things like "own a welding business for 8 years" between official tours of duty, leaving that expertise and ability on the table for the majority of deployment groups.

3.1 Requirements for Competencies and Recommenders

On the surface, this problem seems simple to solve – the military has significant ability to prescribe training to individual Warfighters. The military can assign Warfighters to schools, ensure individual school compliance, mandate daily training activities, set service standards for individual Military Occupational Specialties (MOS), and attempt to enforce a doctrine that Warfighters get their training credit from the military system. A significant amount of military training is currently performed in this manner – using certificates of completion and badges to authenticate training and qualification.

Naturally, the military runs afoul of the same problem as the secondary educational market. The Warfighters of today have a mishmash of educational experiences which is poorly represented in a resume. The military promotional system and educational system both look at a deployment history which says things like "Sonarman MOS, A/C school, 4 deployments with Carrier Group East Coast" and have difficulty discerning whether this person can perform the "find Russian submarines" task or needs geo-specific threat refresher training. Naturally, both of them can ask – but this requires an accurate self-assessment of the knowledge (or lack of it) on the part of the Warfighter, and an accurate interpretation of the answer by the receiving command. Needless to say, lives depend on the correct answer, which rightfully biases the military to over-train on any tasks critical to job performance.

4 Need for an Updated Industrial Educational Model

The current educational experience system works on the back of the "education+yearsExperience" or "schoolhouse+deploymentsServed" metrics for knowledge workers and Warfighters, respectively. This model includes no documentation of informal learning activities such as the "website hobbyist" or "welding business" experience in the above examples. It has limited documentation of On-the-Job Training (OJT), with no record of the individual tasks assigned or completed. Generally, this model is opaque to the end user of the employees' labor.

At the individual level, this results in lost opportunities where skills could be used or fields could be switched; the above welder-Warfighter should perhaps be reclassified as a Combat Engineer regardless of prior training, and the above programmer perhaps as a Full Stack Developer. The individual would be better served if his competencies and abilities were organizationally represented to the other institutions, rather than personally represented in interview forms. Further, if the individual were able to see representations of his own knowledge, or lack thereof, his training could be optimized towards the goals of the other institutions.

At the educational level, the lack of transparency of actual knowledge results in educational waste, as individuals are given instruction that they do not need or are unprepared for. Students who are already trained in one subject end up repeating training because that knowledge is invisible at the educational level. As a concrete example, a retired Navy Electronics Technician (ET) has significant expertise in circuit diagnostics, but a class on circuit diagnostics is still required to meet university requirements for a degree in Electrical Engineering. Conversely, a Sailor with an Electrical Engineering degree may be assigned to "A"/"C" school for Electronics.

At the level of the employer, the existing educational model does not answer the basic questions of how the individual can be useful to the organization or what training they would be most suited for in order to become more useful to the organization. The organization ends up receiving individuals without knowledge of their other credentials, such as recruiting someone with a degree in Electrical Engineering without the knowledge that this person has prior service as an ET. Alternatively, the employer receives an individual without any mapping of the individual's expertise.

While this model can be, and is, manually corrected by Human Resources offices, it is wholly insufficient, as it does not serve the individual, the educational institution, or the employer without intervention. A better model would be to track the relevant learning interests of the individual and to provide these to the interested parties.

5 Features of an Updated Model for Competency and Recommender Systems

Having established that the existing system requires replacement or update in order to enhance the productivity and effectiveness of interacting organizations, the question of "in what manner?" remains. The top-down enforcement of grade-by-grade, year-by-year regional standards is reasonably effective for the problems of K-12 education, but somewhat insufficient when extended to the other organizations with which it must interact.

5.1 Ontology of All Knowledge

One of the things that must happen in order to have a representation of a person's knowledge is that there must be a representation of what it is that people can know. From this, the knowledge of individuals can be mapped onto the representation. There have been organizations which have tried to create an ontological mapping of all of the knowledge a person might possess [2]. This is possible, and even relevant, in certain domains; consider the mapping of all K-12 knowledge which was conducted in order to establish the Common Core standards.

In one manner, these efforts are laudable – they represent a direct path to the goal. In small settings it is possible to create such mappings, and it is possible in larger settings with concerted manpower. Inevitably, however, such an ontology frays at the edges – where does such an ontology begin to place things like the soft skills possessed by management? The development of new fields of knowledge? The combination of existing fields? At what grain size? What does one do with a mapping that states an individual possesses knowledge of "Math"?

Whatever standard is created must, by its nature, be flexible enough to encapsulate both all current knowledge and all possible knowledge. Statements of the knowledge of individuals must be agreed upon by the parties interested in such statements, rather than dictated from above. Standards should allow for interchange between groups dealing with similar ontologies or ontological frameworks, while also allowing individual organizations to expand, contract, or redefine a shared vocabulary as needed by business or human resources processes.
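
As a minimal sketch of what such a flexible, organization-owned vocabulary might look like, consider the following Python data model; the field names, identifier scheme, and example entries are assumptions for illustration rather than a proposed standard.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class CompetencyDefinition:
      """One node in an organization's competency vocabulary (hypothetical schema)."""
      id: str                                           # e.g. "k12.math.sda", namespaced by owner
      label: str                                        # human-readable name
      defined_by: str                                   # organization that owns and may redefine it
      broader: List[str] = field(default_factory=list)  # coarser-grained competencies
      related: List[str] = field(default_factory=list)  # cross-links to other vocabularies

  # Two organizations describing overlapping knowledge at different grain sizes.
  math = CompetencyDefinition("k12.math", "Mathematics", defined_by="state_edu")
  sda = CompetencyDefinition("k12.math.sda", "Single-digit addition",
                             defined_by="state_edu", broader=["k12.math"])
  welding = CompetencyDefinition("army.engineering.structural_welding", "Structural welding",
                                 defined_by="army",
                                 related=["industry.welding_certification"])  # shared-vocabulary link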

5.2 Trusted Sources

Existing models of competency and accreditation take the form of "trusted sources". A high school diploma from a US state carries the weight of that state – agreements or disagreements about whether this is trusted can be undertaken at that level, but the worth of the diploma is determined above the level of the high school. Similarly, a degree from a post-secondary institution carries the weight of that institution – the University of <State> or Trump University or University of <Nation> – and individual organizations must decide which of these is trusted. In addition, certain organizations, such as the Accreditation Board for Engineering and Technology, exist as centralized authorities of quality [9]. Organizations may choose to trust the accrediting authority rather than evaluate and establish their own basis of trust in the credentials.

The new model of educational credentials must follow in these footsteps. As an example, consider the YouTube and Khan Academy platforms. In one context, both may be trusted – simple knowledge that an individual has watched 30 videos on the subject of Dishwasher Repair may be sufficient for the task envisioned (repair a dishwasher). However, a higher Khan Academy standard with a coupled assessment (have they repaired a dishwasher successfully?) may be needed for a more advanced task (train someone to repair a dishwasher). Further, it is possible to blend both YouTube and Khan Academy experiences into a unified credential issued by a trust authority. To use a military example, knowledge of the Littoral Combat Ship (LCS)-class webcam-based shipwide remote monitoring system may be sufficient for a cook or Action Officer, while engineers who use the system to troubleshoot complex problems may be held to a higher standard which includes assessment. The maintainers of such a system may be held to a higher standard yet.

Different sources may be trusted at different levels for different tasks, standards, and systems of competency and recommendation. These sources of trust should be flexible enough to accommodate variations in the standards of the task. The vending of trust, independent of the granting of credentials, must also be supported in emerging standards, as this practice is already widely established among educational institutions. Trusted sources can then serve as the basis for the recommendations of automated systems.
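
A minimal sketch of how different sources might be trusted at different levels for different tasks follows; the task names, evidence levels, and policy structure are purely illustrative assumptions.

  # Hypothetical trust policy: each consuming organization decides, per task,
  # which credential sources it accepts and what evidence it requires from them.
  TRUST_POLICY = {
      "repair_dishwasher": {
          "youtube": "watched_30_videos",        # low-stakes task, low bar
          "khan_academy": "module_complete",
      },
      "train_dishwasher_repair": {
          "khan_academy": "assessed_hands_on",   # higher-stakes task requires assessment
      },
  }

  def source_sufficient(task, source, evidence):
      """Does this source's evidence meet the local policy for this task?"""
      return TRUST_POLICY.get(task, {}).get(source) == evidence

  print(source_sufficient("repair_dishwasher", "youtube", "watched_30_videos"))        # True
  print(source_sufficient("train_dishwasher_repair", "youtube", "watched_30_videos"))  # False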

5.3 Custom Assessment Queries – Individual Level

The basic requirement at the individual level is an answer to the questions "does this person know X?" and "what evidence do I have?". Queries such as these naturally feed placement and recommendations based on existing knowledge. Further, systems should be flexible enough to allow different standards depending on the query source – a 70% performance may be good enough for some organizations and tasks, but insufficient for others. As a concrete example, an 18-year-old Army recruit must run 2 miles in 16 min 36 s, while an Army Ranger must complete the same task in 13 min. At the human resources level, developed standards must have the ability to discover the potential of a 12 min 2-mile run in a recruit; this individual may be a good candidate for Ranger School.
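
A minimal sketch of an individual-level query with organization-specific standards is given below, using the 2-mile run example; the data layout and any values beyond those in the example above are assumptions.

  # Hypothetical individual-level competency query with per-organization standards.
  RUN_STANDARDS_SEC = {          # maximum allowed 2-mile run time, in seconds
      "army_recruit_18": 16 * 60 + 36,
      "army_ranger": 13 * 60,
  }

  def meets_standard(two_mile_seconds, organization):
      """Answer "does this person meet the standard?" for the querying organization."""
      return two_mile_seconds <= RUN_STANDARDS_SEC[organization]

  recruit_time = 12 * 60         # a recruit with a 12-minute 2-mile run
  print(meets_standard(recruit_time, "army_recruit_18"))  # True
  print(meets_standard(recruit_time, "army_ranger"))      # True -- flag as a Ranger School candidate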

5.4 Custom Assessment Queries – Group Level

A group-level query is likely a collection of individual-level queries, and there are multiple ways to phrase such a query. Consider the query "is the unit jump ready?". This query has multiple component queries, which may vary among divisions and Warfighter services:

  • Is everyone in the unit jump qualified?

    • Have they trained in parachute drills and bag packing, completed a number of jumps, etc.?

  • Is everyone jump ready?

    • Has each individual completed a jump within the required number of months?

  • Is there a jump master?

    • Is at least one person in the unit a certified jump master, a qualification which has its own set of standards/competencies?

Consider an answer to this query of 'no, this unit is not jump ready'. The natural follow-up queries are 'in what way is this unit lacking?', 'how can the deficiency be fixed?', and 'what is the fastest way to fix the deficiency?'. Following the example, the answer may be as simple as "Individual2 needs to do a live jump" or as complex as "this unit has only one individual with 20% of Jump Master training; it is best to assign another Jump Master". This knowledge provides the input for recommender systems.
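
A minimal sketch of how the "is the unit jump ready?" query might compose the individual-level checks and report deficiencies to a recommender is shown below; the member attributes, the three-month recency rule, and the unit data are illustrative assumptions.

  # Hypothetical roll-up of individual checks into a unit-level "jump ready" query.
  def unit_jump_ready(members, max_months_since_jump=3):
      """Return (ready, deficiencies) for a unit; each member is a dict of attributes."""
      deficiencies = []
      for m in members:
          if not m["jump_qualified"]:
              deficiencies.append(m["name"] + " is not jump qualified")
          elif m["months_since_last_jump"] > max_months_since_jump:
              deficiencies.append(m["name"] + " needs a live jump")
      if not any(m["jump_master"] for m in members):
          deficiencies.append("unit has no certified Jump Master")
      return (len(deficiencies) == 0, deficiencies)

  unit = [
      {"name": "Individual1", "jump_qualified": True, "months_since_last_jump": 1, "jump_master": True},
      {"name": "Individual2", "jump_qualified": True, "months_since_last_jump": 7, "jump_master": False},
  ]
  ready, gaps = unit_jump_ready(unit)
  print(ready, gaps)  # False ['Individual2 needs a live jump'] -- input for a recommender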

5.5 Custom Assessment Queries – Groups-of-Groups Level

In the military, the gold-standard assessment of knowledge is the "readiness level", which is provided to very senior leaders. In its highest form, the "readiness level" represents an answer to Congress and the President to the question "if we had to go to war tomorrow, how ready for the task are we?". It is intended to be an honest assessment of military capability. In the current manner of business, the readiness of units rolls up into brigades, divisions, etc., into a total assessment of capability. The civilian world has equivalent "readiness levels" in areas such as doctor/nurse teams, oil rig workers, or software development teams. Whatever standards and recommenders exist need to support the roll-up of teams-of-teams assessments, with recommender systems existing to provide solutions to gaps in the assessment.
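
A minimal sketch of the teams-of-teams roll-up is shown below; the organizational tree, the readiness values, and the simple averaging policy are illustrative assumptions, not an actual readiness-reporting schema.

  # Hypothetical roll-up of readiness across an organizational tree. Each node either
  # reports its own readiness (a leaf unit) or aggregates the readiness of its children.
  def rollup_readiness(node):
      """Return an aggregate readiness score in [0, 1] for a unit or group of units."""
      if "readiness" in node:                          # leaf unit reporting directly
          return node["readiness"]
      child_scores = [rollup_readiness(c) for c in node["children"]]
      return sum(child_scores) / len(child_scores)     # simple average; real policy may differ

  division = {
      "name": "1st Division",
      "children": [
          {"name": "Brigade A", "children": [
              {"name": "Battalion A1", "readiness": 0.9},
              {"name": "Battalion A2", "readiness": 0.6},
          ]},
          {"name": "Brigade B", "readiness": 0.8},
      ],
  }
  print(rollup_readiness(division))  # 0.775 -- gaps below a threshold feed recommenders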

6 The Need for Models of Competency and Recommender Systems

Much ink has been spilled about 21st century competencies [6], the new normal of knowledge work [11], and bridging the gaps between high school, college, military, and workforce, as well as within schools, colleges, military, and workforce [7]. However, these are not the problems of tomorrow; they are the problems of today. The existing system is not serving the individual, employer, or educator; it needs to change. At the core of this change is the representation of both individual and group "know what" and "know how" [5]. Making this change has significant benefit – it makes the individual more transportable across the workforce, it limits educational waste, and it helps employers place individuals and groups in areas where they can prosper. Technology has created a problem in which individuals are forced into a path of lifelong learning and educational experiences, but fortunately this is a problem which technology can also solve and optimize through the representation of interchangeable competencies and personalized educational recommendations.