Validated Learning Quarter

From CNM Wiki
 

Revision as of 22:39, 23 April 2018

Validated Learning Quarter (hereinafter, the Quarter) is the first of four lectures of Product Quadrivium (hereinafter, the Quadrivium):

The Quadrivium is the first of seven modules of Septem Artes Administrativi, which is a course designed to introduce its learners to general concepts in business administration, management, and organizational behavior.


Outline

The predecessor lecture is Iterative Development Quarter.

Product discovery is the enterprise discovery of data needed to design or modify products that enterprises offer on the market. Organizationally, the data needed to design or modify these products is collected through idea generation, validated learning, monitoring, and market engagements. This particular lecture concentrates on validated learning because these enterprise efforts are the primary method for collecting data that emerges as a result of product developments.

Concepts

  1. Validated learning. The acquisition of knowledge through experience generated by trying out an idea and then measuring it against potential consumers to validate the effect. Each test of an idea is a single iteration in a larger process of many iterations whereby something is learned and then applied to succeeding tests.
  2. Learning. Any relatively permanent change in behavior that occurs as a result of experience.
    • Lessons learned. The learning gained from the process of performing the project. Lessons learned may be identified at any point.
    • Lessons learned process. A process improvement technique used to learn about and improve on a process or project. A lessons learned session involves a special meeting in which the team explores what worked, what didn't work, what could be learned from the just-completed iteration, and how to adapt processes and techniques before continuing or starting anew.
  3. Experiment. A procedure carried out to support, refute, or validate a hypothesis. Experiments are commonly undertaken in order to make a discovery, test a hypothesis, or demonstrate a known fact.
    • Pilot experiment (also known as a pilot study or pilot project). A small-scale preliminary experiment conducted in order to estimate feasibility, time, cost, and adverse events, and to improve upon the feasibility study design prior to performance of a full-scale research project.
  4. Data research. The systematic investigation into and study of data and data sources in order to further establish facts and reach new conclusions.
    • Qualitative research. Data research that looks for qualitative data.
    • Quantitative research. Data research that looks for quantitative data.
    • Ad hoc query. The ability to create a one-off, "on demand" report from BI or data analytics software that answers a specific business question.
  5. Testing. Taking measures to check the performance or reliability of something, especially before putting it into widespread use or practice, or of somebody, especially before hiring. Someone who or something that conducts testing is called a tester; someone who or something that is tested is called a testee. If a testee is a human being, the testing is called human testing. Testing that utilizes one or more artifacts, such as a prototype or product, is called artifact testing; testing that utilizes no artifact is called natural testing.
    • Natural testing by a human tester: observation, oral examination, open voting, interview, etc.
    • Natural testing by a machine tester: measurement and signature intelligence, automatic data validation, etc.
    • Artifact testing by a human tester: questionnaire survey, ballot voting, user test, etc.
    • Artifact testing by a machine tester: computer-based exam, online survey, etc.
  6. Elicitation. Evoking or drawing out data from someone in reaction to one's own actions or questions. As an enterprise effort, elicitation most commonly consists of two phases: (1) identifying data sources and (2) using elicitation techniques (e.g., facilitated workshops, interviews, observations, artifact testing, etc.) to gather data from those sources.
  7. Concept artifact.
    • Wireframe. A sketchy representation of a prototype. Wireframes are commonly developed in order to arrange elements of a future system. For instance, a wireframe can serve as a rough guide for the layout of a website or app, done either with pen and paper or with wireframing software.
    • Mockup. A model of a design for a product developed or to be developed. Mockups are commonly used to test graphic designs. If a mockup possesses any degree of functionality, it is considered to be a prototype.
  8. Prototype. A partial or preliminary conceptual model of a deliverable developed or to be developed; this model is used as a reference, publicity artifact, or data-gathering tool. A prototype allows measuring whether a product idea attracts interest.
    • Low-fidelity prototype. A quick and easy translation of high-level design concepts into tangible and testable artifacts, giving an indication of the direction in which the product is heading.
    • Paper prototype. A rough, often hand-sketched, drawing of a user interface, used in a usability test to gather feedback. Participants point to locations on the page that they would click, and screens are manually presented to the user based on the interactions they indicate.
    • Paper prototyping. A type of usability testing in which a user performs realistic tasks by interacting with a manual, early-stage version of the interface that is often manipulated by an individual who upholds the illusion of computer interactivity. During this process, the details of how the interface is supposed to be used are withheld from the user.
    • Throw-away prototype. A prototype used to quickly uncover and clarify interface requirements using simple tools, sometimes just paper and pencil. It is usually discarded when the final system has been developed.
    • Exploratory prototype. A prototype developed to explore or verify requirements.
    • Evolutionary prototype. A prototype that is continuously modified and updated in response to feedback from users.
    • Horizontal prototype. A prototype that shows a shallow, and possibly wide, view of the system's functionality, but which does not generally support any actual use or interaction.
    • Vertical prototype. A prototype that dives into the details of the interface, the functionality, or both.
    • High-fidelity prototype. A prototype that is quite close to the final product, with lots of detail and a good indication of the final proposed aesthetics and functionality.
  9. Minimum viable product (MVP). A version of a new product that includes sufficient features to satisfy early adopters and allows a team to collect the maximum amount of validated learning about customers with the least effort.
    • Wizard of Oz minimum viable product (WoOMVP). A version of a product that looks functional but is actually operated by a human behind the scenes, granting the appearance of automation.
    • Concierge minimum viable product (CMVP). A manual service simulating the exact same steps people would go through with the final product.
    • Piecemeal minimum viable product (PMVP). A functioning model of a product that takes advantage of existing tools and services in order to emulate the user experience process.
  10. Change control. A formal procedure used to ensure that changes to baselines are introduced in a controlled and coordinated manner in order to ensure that no unnecessary changes are made, that all changes are documented, that services are not unnecessarily disrupted, and that resources are used efficiently.
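The iterate-and-measure cycle described under validated learning can be sketched as a short build-measure-learn loop. Everything below is a hypothetical illustration: the idea, the stand-in consumers, and the 50% success threshold are assumptions for the sketch, not part of the lecture material.

```python
# A minimal sketch of one build-measure-learn iteration (all names hypothetical).

def build(idea):
    """Turn an idea into a testable artifact (here, just a labeled dict)."""
    return {"idea": idea, "artifact": f"prototype of {idea}"}

def measure(artifact, audience):
    """Test the artifact against potential consumers; return the share who liked it."""
    reactions = [consumer(artifact) for consumer in audience]
    return sum(reactions) / len(reactions)

def learn(idea, score, threshold=0.5):
    """Decide whether the hypothesis is validated and what to try in the next iteration."""
    if score >= threshold:
        return "persevere", idea
    return "pivot", idea + " (revised)"

# One iteration; what is learned here would feed the next test.
audience = [lambda a: True, lambda a: False, lambda a: True]  # stand-in consumers
artifact = build("one-click checkout")
score = measure(artifact, audience)
decision, next_idea = learn("one-click checkout", score)
print(decision)  # 2 of 3 stand-in consumers reacted positively, so: persevere
```

Each run of this loop is a single iteration in the larger process: the decision ("persevere" or "pivot") is the validated learning carried into the succeeding test.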

Roles

  1. Change control board (CCB). A formally constituted group of stakeholders who are able to change requirement baselines or, in other words, to make decisions regarding the disposition and treatment of changing requirements. The most common decisions are to approve or to reject a change.
  2. Tester. A stakeholder responsible for assessing the quality of, and identifying defects in, a software application.
  3. User. A stakeholder, person, device, or system that directly or indirectly accesses a system.
    • End user. A person or system that directly interacts with the solution. End users can be humans who interface with the system, or systems that send or receive data files to or from the system.
  4. Actor(s). The human and nonhuman roles that interact with the system.

Methods

  1. Elicitation technique. Any technique utilized in order to gather enterprise data from human beings. Elicitation techniques are used in anthropology, cognitive science, counseling, education, knowledge engineering, linguistics, management, philosophy, psychology, and other fields. A person who interacts with human subjects in order to elicit information from them is called an elicitor. The most common techniques include interviews, brainstorming, focus groups, artifact testing, observation, and questionnaire survey.
    • Requirements workshop. A requirements workshop is a structured meeting in which a carefully selected group of stakeholders collaborate to define and or refine requirements under the guidance of a skilled neutral facilitator.
    • Observation. The data-gathering technique that is based on watching something or someone; an observation can also be a statement based on something one has seen, heard, or noticed. In business analysis, observation is a means to elicit requirements by conducting an assessment of the stakeholder's work environment.
    • Needfinding. Needfinding is the art of talking to people and discovering their needs; both those they might explicitly state, and those hidden beneath the surface. It is only in truly understanding people that we can gain meaningful insights to inspire and inform a final, impactful design.
  2. Interview. A data-gathering technique that represents an arranged meeting of people face-to-face, especially for consultation or other informational exchange.
    • Interview. A systematic approach to elicit information from a person or group of people in an informal or formal setting by asking relevant questions and documenting the responses.
    • User interview. Used for understanding the tasks and motivations of the user group for whom you are designing, user interviews may be formally scheduled, or just informal chats.
    • Structured interview. A planned interview designed to gather job-related information.
    • Unstructured interview. A short, casual interview made up of random questions.
    • Open-ended interview. Covers a variety of data-gathering activities, including a number of social science research methods.
  3. Stakeholder interview. A conversation with the key contacts in the client organization funding, selling, or driving the product.
    • Focus group. A small group (usually 5-15 individuals) composed of representative members of a group whose ideas, attitudes, or opinions are sought. By asking initial questions and structuring the subsequent discussion, the facilitator or moderator can elicit enterprise data. A focus group may discuss a specific product, process, market, solution, project, and/or enterprise practices, related risks and estimates in an interactive group environment. Guided by the facilitator or moderator, the participants are asked to share their impressions, preferences, needs, use practices, responses to management regulations, etc.
    • Panel survey. Involves the random selection of a small number of representative individuals from a group, who agree to be available over an extended period, often one to three years. During that period, they serve as a stratified random sample of people from whom data can be elicited on a variety of topics.
    • Remote survey. A survey that administers a set of written questions to stakeholders in order to collect responses from a large group in a relatively short period of time.
  4. Artifact testing. The data-gathering technique that is based on taking measures to check the performance and/or reliability of somebody, especially before making agreements, or something, especially before putting it into widespread use or practice.
    • Black box test. A test written without regard to how the software is implemented. These tests show only what the expected input and outputs will be.
    • User acceptance test. Test cases that users employ to judge whether the delivered system is acceptable. Each acceptance test describes a set of system inputs and expected results.
    • Acceptance test. The derivative from the acceptance criteria that verifies whether a feature is functional. The test has only two results: pass or fail. Acceptance criteria usually include one or more acceptance tests.
    • Usability test. A user sits in front of the website or app and is asked to perform tasks while thinking out loud.
    • Contextual inquiry. Interviewing users in the location that they use the website or app, in order to understand their tasks and challenges.
    • Diary study. Asking users to record their experiences and thoughts about a product or task in a journal over a set period of time.
    • Unit testing. A short program fragment written for testing and verifying a piece of code once it is completed. A piece of code either passes or fails the unit test. The unit test (or a group of tests, known as a test suite) is the first level of testing a software development product.
    • User research. Observation techniques, task analysis, and other feedback methodologies which are used to focus on understanding user behaviors, needs, and motivations.
    • Alpha test. Controlled internal testing of a pre-production model, intended to detect design flaws or functionality deficiencies.
    • Beta test. External pilot-test after Alpha testing is complete and prior to commercial production. In beta testing, the product is released to a limited number of customers for testing under normal, everyday conditions in order to detect any flaws. (see 10 Experiments To Test Your Startup Hypothesis)
  5. Inspection. A formal type of peer review that utilizes a predefined and documented process, specific participant roles, and the capture of defect and process metrics. See also structured walkthrough.
    • Inspection. The data-gathering technique that is based on careful examination of something in order to either learn about its features or check whether its features confirm its specifications.
    • Inspection. Examination or measurement of work to verify whether an item or activity conforms to a specific requirement.
  6. Audit. A planned and documented activity performed by qualified personnel to determine by investigation, examination, or evaluation of objective evidence the adequacy and compliance with established procedures or the applicable documents and the effectiveness of implementation.
  7. Fail-fast. The process of starting work on a task or project, obtaining immediate feedback, and then determining whether to continue working on that task or take a different approach, that is, adapt. If a project is not working, it is best to determine that early in the process rather than waiting until too much money and time have been invested.
  8. Trial and error. A problem-solving technique that consists of repeated, varied attempts to solve a problem, continued until either success is achieved or the attempts are abandoned.
  9. Dogfooding. A company showing confidence in their own product by using it themselves. Derived from the expression “eating your own dog food.”
  10. Event-powered survey. The data-gathering technique that is based on a systematic study of the behavior of people at arranged events such as polling, sampling, and/or querying, either virtual or physical, undertaken in order to gather data, primarily on the results of their behavior.
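The pass-or-fail character of unit testing, mentioned under artifact testing above, can be shown in a few lines. The `apply_discount` function and its test cases below are hypothetical stand-ins; the point is only that each check either passes silently or fails outright.

```python
# A hypothetical piece of completed code under test.
def apply_discount(price, percent):
    """Return the price after a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Unit tests: each short check either passes or fails, nothing in between.
def test_ten_percent_off():
    assert apply_discount(200.0, 10) == 180.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(99.99, 0) == 99.99

# Running the suite is the first level of testing the product; a failure
# here surfaces the defect before the code reaches any wider testing.
test_ten_percent_off()
test_zero_discount_leaves_price_unchanged()
print("all unit tests passed")
```

A group of such tests forms a test suite, which is typically run automatically on every change so that a regression fails fast rather than surfacing later.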

Instruments

  1. Prototyping tool.
    • Axure. A wireframing and interactive prototyping tool, available for both Windows and Mac.
    • Balsamiq Mockups. A wireframing and interactive prototyping tool, available for both Windows and Mac.
  2. Sandbox. An environment or location where experimentation is acceptable, without consequences for failure.
  3. Questionnaire. The data-gathering tool that represents a set of questions and other prompts composed for elicitation.
  4. Survey form. An online form designed to solicit feedback from current or potential users.
  5. Human testing tools. Exams and quizzes.
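An ad hoc query, the one-off "on demand" report mentioned above, can be sketched against survey-form responses. The table, the data, and the business question below are hypothetical, and Python's built-in sqlite3 module stands in for BI or data analytics software.

```python
import sqlite3

# Stand-in data: responses collected through a survey form (hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (user TEXT, feature TEXT, rating INTEGER)")
conn.executemany(
    "INSERT INTO responses VALUES (?, ?, ?)",
    [("a", "search", 5), ("b", "search", 3), ("c", "checkout", 4)],
)

# The ad hoc query itself: a one-off question ("how is search rated?"),
# not a canned report.
row = conn.execute(
    "SELECT feature, AVG(rating) FROM responses "
    "WHERE feature = 'search' GROUP BY feature"
).fetchone()
print(row)  # ('search', 4.0)
```

The value of the technique is that the question can be posed and answered immediately, without waiting for a standing report to be built.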

Results

  1. Findings register.

Practices

The successor lecture is Business Analysis Quarter.

Materials

Recorded audio

Recorded video

Live sessions

Texts and graphics

See also