
metatests

metarhia · 3.3k · MIT · 0.9.1 · TypeScript support: included

Simple to use test engine for Metarhia technology stack

Keywords: test, testing, unittesting, unit-testing, tdd, tap, metarhia

readme

metatests


metatests is an extremely simple-to-use test framework and runner for the Metarhia technology stack, built on the following principles:

  • Test cases are files, tests are either imperative (functions) or declarative (arrays and structures).

  • Assertions are done using the built-in Node.js assert module. The framework also provides additional testing facilities (like spies).

  • Tests can be run in parallel.

  • All tests are executed in isolated sandboxes. The framework makes it easy to mock modules required by tests and provides ready-to-use mocks for timers and other core functionality.

  • Testing asynchronous operations must be supported.

  • Testing pure functions without asynchronous operations and state can be done without extra boilerplate code using a DSL based on arrays.

    mt.case(
      'Test common.duration',
      { common },
      {
        // ...
        'common.duration': [
          ['1d', 86400000],
          ['10h', 36000000],
          ['7m', 420000],
          ['13s', 13000],
          ['2d 43s', 172843000],
          // ...
        ],
        // ...
      },
    );

    (Prior art)

  • The framework must work in Node.js and browsers (using Webpack or any other module bundler that supports CommonJS modules and emulates Node.js globals).

Contributors

API

Interface: metatests

case(caption, namespace, list, runner)

  • caption: <string> case caption
  • namespace: <Object> namespace to use in this case test
  • list: <Object> hash of <Array>, hash keys are function and method names. <Array> contains call parameters; the last <Array> item is an expected result (to compare) or a <Function> (the result is passed to it to compare)
  • runner: <Runner> runner for this case test, optional, default: metatests.runner.instance

Create declarative test

class DeclarativeTest extends Test

DeclarativeTest.prototype.constructor(caption, namespace, list, options)
DeclarativeTest.prototype.run()
DeclarativeTest.prototype.runNow()

equal(val1, val2)

strictEqual(val1, val2)

class reporters.Reporter

reporters.Reporter.prototype.constructor(options)
  • options: <Object>
    • stream: <stream.Writable> optional
reporters.Reporter.prototype.error(test, error)

Fail test with error

reporters.Reporter.prototype.finish()
reporters.Reporter.prototype.log(...args)
reporters.Reporter.prototype.logComment(...args)
reporters.Reporter.prototype.record(test)
  • test: <Test>

Record test

class reporters.ConciseReporter extends Reporter

reporters.ConciseReporter.prototype.constructor(options)
reporters.ConciseReporter.prototype.error(test, error)
reporters.ConciseReporter.prototype.finish()
reporters.ConciseReporter.prototype.listFailure(test, res, message)
reporters.ConciseReporter.prototype.parseTestResults(test, subtest)
reporters.ConciseReporter.prototype.printAssertErrorSeparator()
reporters.ConciseReporter.prototype.printSubtestSeparator()
reporters.ConciseReporter.prototype.printTestSeparator()
reporters.ConciseReporter.prototype.record(test)

class reporters.TapReporter extends Reporter

reporters.TapReporter.prototype.constructor(options)
reporters.TapReporter.prototype.error(test, error)
reporters.TapReporter.prototype.finish()
reporters.TapReporter.prototype.listFailure(test, res, offset)
reporters.TapReporter.prototype.logComment(...args)
reporters.TapReporter.prototype.parseTestResults(test, offset = 0)
reporters.TapReporter.prototype.record(test)

class runner.Runner extends EventEmitter

runner.Runner.prototype.constructor(options)
runner.Runner.prototype.addTest(test)
runner.Runner.prototype.finish()
runner.Runner.prototype.removeReporter()
runner.Runner.prototype.resume()
runner.Runner.prototype.runTodo(active = true)
runner.Runner.prototype.setReporter(reporter)
runner.Runner.prototype.wait()

runner.instance

speed(caption, count, cases)

  • caption: <string> name of the benchmark
  • count: <number> number of times to run each function
  • cases: <Array> functions to check

Microbenchmark each passed function and compare results.

measure(cases[, options])

  • cases: <Array> cases to test, each case contains
    • fn: <Function> function to check, will be called with each args provided
    • name: <string> case name, function.name by default
    • argCases: <Array> array of arguments to create runs with. When omitted, fn will be run once without arguments. Total number of runs will be runs * argCases.length.
    • n: <number> number of times to run the test, defaultCount from options by default
  • options: <Object>
    • defaultCount: <number> number of times to run the function by default, default: 1e6
    • runs: <number> number of times to run the case, default: 20
    • preflight: <number> number of times to pre-run the case for each set of arguments, default: 10
    • preflightCount: <number> number of times to run the function in the preflight stage, default: 1e4
    • listener: <Object> appropriate function will be called to report events, optional
      • preflight: <Function> called when preflight is starting, optional
      • run: <Function> called when run is starting, optional
      • cycle: <Function> called when run is done, optional
      • done: <Function> called when all runs for given configurations are done, optional
        • name: <string> case name
        • args: <Array> current configuration
        • results: <Array> results of all runs with this configuration
      • finish: <Function> called when measuring is finished, optional
        • results: <Array> all case results

Returns: <Array> results of all cases as objects of structure

  • name: <string> case name
  • args: <Array> arguments for this run
  • count: <number> number of times case was run
  • time: <number> time in nanoseconds it took to make count runs
  • result: <any> result of one of the runs

Microbenchmark each passed configuration multiple times

convertToCsv(results)

  • results: <Array> all results from measure run

Returns: <string> valid CSV representation of the results

Convert metatests.measure result to csv.

class ImperativeTest extends Test

ImperativeTest.prototype.constructor(caption, func, options)
ImperativeTest.prototype.afterEach(func)

Set a function to run after each subtest.

The function must either return a promise or call a callback.

ImperativeTest.prototype.assert(value[, message])
  • value: <any> value to check
  • message: <string> description of the check, optional

Check if value is truthy.

ImperativeTest.prototype.assertNot(value[, message])
  • value: <any> value to check
  • message: <string> description of the check, optional

Check if value is falsy.

ImperativeTest.prototype.bailout([err][, message])

Fail this test and throw an error.

If both err and message are provided err.toString() will be appended to message.

ImperativeTest.prototype.beforeEach(func)
  • func: <Function>
    • subtest: <ImperativeTest> test instance
    • callback: <Function>
      • context: <any> context of the test. It will be passed as the second argument to the test function and is available at test.context
    • Returns: <Promise>|<void> nothing or Promise resolved with context

Set a function to run before each subtest.

The function must either return a promise or call a callback.

ImperativeTest.prototype.case(message, namespace, list, options = {})

Create a declarative case() subtest of this test.

ImperativeTest.prototype.cb([msg][, cb])

Returns: <Function> function to pass to callback

Create error-first callback wrapper to perform automatic checks.

This will wrap the callback in test.mustCall() and check the first callback argument with test.error().

ImperativeTest.prototype.cbFail([fail][, cb[, afterAllCb]])
  • fail: <string> test.fail message
  • cb: <Function> callback function to call if there was no error
  • afterAllCb: <Function> function called after callback handling

Returns: <Function> function to pass to callback

Create error-first callback wrapper to fail test if call fails.

This will wrap the callback in test.mustCall() and, if the call errored, will use test.fail() and test.end().

ImperativeTest.prototype.contains(actual, subObj[, message[, sort[, test]]])
  • actual: <any> actual data
  • subObj: <any> expected properties
  • message: <string> description of the check, optional
  • sort: <boolean | Function> if true or a sort function sort data properties, default: false
  • test: <Function> comparison function, default: compare.strictEqual
    • actual: <any>
    • expected: <any>
    • Returns: <boolean> true if actual is equal to expected, false otherwise

Check that actual contains all properties of subObj.

Properties will be compared with test function.

ImperativeTest.prototype.containsGreedy(actual, subObj[, message[, sort[, test]]])
  • actual: <any> actual data
  • subObj: <any> expected properties
  • message: <string> description of the check, optional
  • test: <Function> comparison function, default: compare.strictEqual
    • actual: <any>
    • expected: <any>
    • Returns: <boolean> true if actual is equal to expected, false otherwise

Check greedily that actual contains all properties of subObj.

Similar to test.contains() but will succeed if at least one of the properties in actual match the one in subObj.

ImperativeTest.prototype.defer(fn, options)
  • fn: <Function> function to call before the end of test. Can return a promise that will defer the end of test.
  • options: <Object>
    • ignoreErrors: <boolean> ignore errors from fn function, default: false

Defer a function call until just before the end of the test.

ImperativeTest.prototype.doesNotThrow(fn[, message])

Check that fn doesn't throw.

ImperativeTest.prototype.end()

Finish the test.

This will fail if the test has unfinished subtests or plan is not complete.

ImperativeTest.prototype.endAfterSubtests()

Mark this test to call end after its subtests are done.

ImperativeTest.prototype.equal(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for non-strict equality.

ImperativeTest.prototype.error(err[, message])
  • err: <any> error to check
  • message: <string> description of the check, optional

Fail if err is an instance of Error.

ImperativeTest.prototype.fail([message][, err])
  • message: <string | Error> failure message or error, optional
  • err: <Error> error, optional

Fail this test recording failure message.

This doesn't call test.end().

ImperativeTest.prototype.is(checkFn, val[, message])
  • checkFn: <Function> condition function
    • val: <any> provided value
  • Returns: <boolean> true if condition is satisfied and false otherwise
  • val: <any> value to check the condition against
  • message: <string> check message, optional

Check whether val satisfies custom checkFn condition.

ImperativeTest.prototype.isArray(val[, message])
  • val: <any> value to check
  • message: <string> check message, optional

Check if val satisfies Array.isArray.

ImperativeTest.prototype.isBuffer(val[, message])
  • val: <any> value to check
  • message: <string> check message, optional

Check if val satisfies Buffer.isBuffer.

ImperativeTest.prototype.isError(actual[, expected[, message]])
  • actual: <any> actual error to compare
  • expected: <any> expected error, default: new Error()
  • message: <string> description of the check, optional

Check if actual is equal to expected error.

ImperativeTest.prototype.isRejected(input, err)
  • input: <Promise | Function> promise or a function returning a thenable
  • err: <any> value to be checked with test.isError() against rejected value

Check that input rejects.

ImperativeTest.prototype.isResolved(input[, expected])
  • input: <Promise | Function> promise or a function returning a thenable
  • expected: <any> if passed it will be checked with test.strictSame() against resolved value

Verify that input resolves.

ImperativeTest.prototype.mustCall([fn[, count[, name]]])
  • fn: <Function> function to be checked, default: () => {}
  • count: <number> amount of times fn must be called, default: 1
  • name: <string> name of the function, default: 'anonymous'

Returns: <Function> function to check with; it will forward all arguments to fn and return fn's result

Check that fn is called specified amount of times.

ImperativeTest.prototype.mustNotCall([fn[, name]])
  • fn: <Function> function that must not be called, default: () => {}
  • name: <string> name of the function, default: 'anonymous'

Returns: <Function> function to check with; it will forward all arguments to fn and return fn's result

Check that fn is not called.

ImperativeTest.prototype.notEqual(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for non-strict inequality.

ImperativeTest.prototype.notOk(value[, message])
  • value: <any> value to check
  • message: <string> description of the check, optional

Check if value is falsy.

ImperativeTest.prototype.notSameTopology(obj1, obj2[, message])
  • obj1: <any> actual data
  • obj2: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected to not have the same topology.

ImperativeTest.prototype.ok(value[, message])
  • value: <any> value to check
  • message: <string> description of the check, optional

Check if value is truthy.

ImperativeTest.prototype.on(name, listener)
ImperativeTest.prototype.pass([message])

Record a passing assertion.

ImperativeTest.prototype.plan(n)

Plan this test to have exactly n assertions and end the test after this number of assertions is reached.

ImperativeTest.prototype.regex(regex, input[, message])

Test whether input matches the provided RegExp.

ImperativeTest.prototype.rejects(input, err)
  • input: <Promise | Function> promise or a function returning a thenable
  • err: <any> value to be checked with test.isError() against rejected value

Check that input rejects.

ImperativeTest.prototype.resolves(input[, expected])
  • input: <Promise | Function> promise or a function returning a thenable
  • expected: <any> if passed it will be checked with test.strictSame() against resolved value

Verify that input resolves.

ImperativeTest.prototype.run()

Start running the test.

ImperativeTest.prototype.same(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for non-strict equality.

ImperativeTest.prototype.sameTopology(obj1, obj2[, message])
  • obj1: <any> actual data
  • obj2: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected to have same topology.

Useful for comparing objects with circular references for equality.

ImperativeTest.prototype.strictEqual(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for strict equality.

ImperativeTest.prototype.strictNotSame(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for strict non-equality.

ImperativeTest.prototype.strictSame(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for strict equality.

ImperativeTest.prototype.test(caption, func, options)
  • caption: <string> name of the test
  • func: <Function> test function
    • test: <ImperativeTest> test instance
  • options: <TestOptions>
    • run: <boolean> auto start test, default: true
    • async: <boolean> if true, do nothing; if false, auto-end the test on nextTick after func is run, default: true
    • timeout: <number> time in milliseconds after which the test is considered timed out.
    • parallelSubtests: <boolean> if true subtests will be run in parallel, otherwise subtests are run sequentially, default: false
    • dependentSubtests: <boolean> if true each subtest will be executed sequentially in order of addition to the parent test short-circuiting if any subtest fails, default: false

Returns: <ImperativeTest> subtest instance

Create a subtest of this test.

If the subtest fails this test will fail as well.

ImperativeTest.prototype.testAsync(message, func, options = {})

Create an asynchronous subtest of this test.

Simple wrapper for test.test() setting async option to true.

ImperativeTest.prototype.testSync(message, func, options = {})

Create a synchronous subtest of this test.

Simple wrapper for test.test() setting async option to false.

ImperativeTest.prototype.throws(fn[, expected[, message]])
  • fn: <Function> function to run
  • expected: <any> expected error, default: new Error()
  • message: <string> description of the check, optional

Check that fn throws expected error.

ImperativeTest.prototype.type(obj, type[, message])
  • obj: <any> value to check
  • type: <string | Function> class or class name to check
  • message: <string> description of the check, optional

Check if obj is of specified type.

test(caption, func[, options[, runner]])

  • caption: <string> name of the test
  • func: <Function> test function
    • test: <ImperativeTest> test instance
  • options: <TestOptions>
    • run: <boolean> auto start test, default: true
    • async: <boolean> if true, do nothing; if false, auto-end the test on nextTick after func is run, default: true
    • timeout: <number> time in milliseconds after which the test is considered timed out.
    • parallelSubtests: <boolean> if true subtests will be run in parallel, otherwise subtests are run sequentially, default: false
    • dependentSubtests: <boolean> if true each subtest will be executed sequentially in order of addition to the parent test short-circuiting if any subtest fails, default: false
  • runner: <Runner> runner instance to use to run this test

Returns: <ImperativeTest> test instance

Create a test case.

testSync(caption, func, options = {}, runner = runnerInstance)

Create a synchronous test.

Simple wrapper for test() setting async option to false.

testAsync(caption, func, options = {}, runner = runnerInstance)

Create an asynchronous test.

Simple wrapper for test() setting async option to true.

changelog

Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

Unreleased

0.9.1 - 2025-05-23

  • Add Node.js 23 and 24 to CI
  • Update dependencies

0.9.0 - 2024-09-01

  • Update dependencies

0.8.2 - 2022-03-17

Fixed

  • Bug where test.endAfterSubtests() would cause an error if the test was finished before endAfterSubtests got a chance to process its checks.
  • Remove non-actionable warnings from the yaml library (see issue).

0.8.1 - 2022-03-06

Added

  • extract contains/containsGreedy matchers to compare fns.
  • test#defer(fn, { ignoreErrors: false }) to execute a function right before the end of the test. It is useful for test cleanup callbacks. If fn returns a Promise, it will be awaited, delaying the test finish.

Fixed

  • comparison of errors in compare.strictEqual/compare.equal.

0.8.0 - 2021-01-14

Added

  • metatests.measure() function to perform microbenchmarks with multiple configuration options.
  • compare.R script to perform statistically significant benchmark result analysis.
  • metatests.convertToCsv() function to convert metatests.measure() results to valid CSV format.
  • metatests speed command to run simple speed tests via the CLI on functions exported from a file.
  • metatests measure command to run comprehensive speed tests via the CLI on functions exported from a file, comparing different implementations.
  • TypeScript typings for all APIs of metatests.
  • Support for .cjs and .mjs test file extensions.

Fixed

  • Comparison of Map and Set objects.
  • Printing of Error/Date/RegExp object in TapReporter for JS vm sandboxes

Removed

  • BREAKING: Dropped support for Node.js 10

0.7.2 - 2020-07-27

Added

  • test#is(checkFn, val, message) that allows passing a custom comparator function, avoiding test#assert(), which only displays a true/false result. test#isArray() and test#isBuffer() utilities for test#is() that call it with Array.isArray and Buffer.isBuffer respectively.
  • test.regex(regex, input, message) to simplify checking for regex pattern match. This avoids using test.assert() and shows actual pattern/input in the test output.

Changed

  • cli to replace forward slash (/) in the --exclude option with the OS-specific path separator.

Fixed

  • CHANGELOG.md Changed/Fixed title level in 0.7.1 version.
  • null/undefined as uncaughtException are properly handled.

Security

  • Update project dependencies and fix security issues.

0.7.1 - 2019-07-05

Added

  • This CHANGELOG.md file.
  • test.containsGreedy() that works similar to test.contains() but will greedily search for at least one successful comparison for each element.

Changed

  • test.contains() to support more types. It can now be used with Array, Set, Map, and plain objects as before, as well as combinations of those types (i.e. comparing a Set with an Array, a Map with an Object).
  • Use original call stack in test.mustCall()/test.mustNotCall() (call stack of caller).
  • Errors stringified by TapReporter won't have !error tag anymore which will result in them being displayed as simple objects in diff and avoid hiding necessary details.

Fixed

  • Duplicate test numbers in TapReporter 'Failed' output.

0.7.0 - 2019-05-15

Added

  • Handle test.resolves(promise) differently from test.resolves(promise, undefined). The former version will not check for strictSame of the promise result with undefined but the latter will.

Changed

  • Increase stack trace size when possible.
  • Explicitly record 'rejects'/'resolves' in test.resolves()/test.rejects().
  • Change CLI waiting timeout behavior. If there are failures the process will now exit immediately, otherwise wait for normal process finish or exit with code 1 after timeout.
  • Use promises in beforeEach/afterEach callback handling. This delays test execution and finish by 1 event loop tick respectively.
  • Enforce test failure and notify of multiple _end calls.
  • Preserve stack of original error from 'error' when emitting.

Fixed

  • Call subtest._end() on subtests in a queue upon parent's _end().

0.6.6 - 2019-05-11

Added

  • equalTopology to compare with circular references.
  • test.sameTopology() to use compare.sameTopology().
  • test.resolves(), test.rejects() utilities.
  • Support promises in test.beforeEach()/test.afterEach().

Changed

  • Add error to the test.fail() signature. It can now be used as test.fail(msg), test.fail(err), test.fail(msg, err).
  • Call test.end()/test.endAfterSubtests() on test promise resolve.
  • Harden argument alias check in test.fail().
  • Use new test.fail(msg, err) interface in code.
  • Rename equalWithCircular to sameTopology.
  • Use tap-yaml in cli and to properly stringify in TapReporter.
  • Make subtest result in parent more robust by using test.success.
  • Move test.context initialization to Test.

Fixed

  • test.type() on objects with no constructor.

0.6.5 - 2019-04-10

Added

  • Support for running test code in worker_threads.

Changed

  • Move setting default TapReporter type to cli.
  • Output filename in case of error in TapReporter.

Fixed

  • Reporting of 'error' events on Test in TapReporter.

0.6.4 - 2019-04-08

Changed

  • Use yargs instead of commander for cli.
  • Check for incorrect values of plan.

Fixed

  • test.plan() + test.mustCall() interoperability.
  • Test time report of sync test.
  • Properly report plan after timeout.

0.6.3 - 2019-04-01

Fixed

  • Sync ImperativeTest without function finish.
  • TapReporter with timeout result.

0.6.2 - 2019-03-29

Added

  • logComment to Reporter interface.
  • Test execution time reporting.

Changed

  • Make imperative.end-before-timeout test more robust.
  • Use TapReporter type 'tap' on non-TTY and 'classic' on TTY streams.

Fixed

  • metadata.filepath test not calling 'end'.

0.6.1 - 2019-03-26

Changed

  • Remove usage of console from Reporters.

0.6.0 - 2019-03-25

Added

  • Unit tests.
  • TapReporter complying to TAP 13.
  • Custom exit timeout.
  • 'log' inside test via 'test.log'.
  • Support for TAP formatters.

Changed

  • Allow running DeclarativeTest from ImperativeTest.
  • Align ImperativeTest constructor with the usage.
  • Use node util.inspect() if available for better test results output.
  • Improve TapReporter output.
  • Update project dependencies.
  • Update stryker to the 1.0+.
  • Make tap-classic default reporter.

Removed

  • Dropped CLI support for browsers.
  • Dropped Node.js 6 support.

Fixed

  • Handle promise rejections from test function.
  • DeclarativeTest.constructor() interface to comply with 'case'.

0.5.0 - 2019-02-14

Changed

  • Remove 'err' argument from test.cbFail() callback.
  • Allow passing an afterAll callback to test.cbFail().

Fixed

  • Don't call subtest function if test ended in test.beforeEach().

0.4.1 - 2019-02-12

Added

  • test.contains() to allow partial obj checks.
  • test.cb()/test.cbFail() to avoid error-handling boilerplate code. The former will perform the test.error() check on the first argument passed to the callback and forward the rest; the latter will, in case of error, perform test.fail() and test.end() and will NOT call the supplied callback, otherwise it calls the callback with the remaining arguments (all but the first).

Changed

  • Add 'filename' to the Test metadata.
  • Report 'filename' in ConciseReporter if available.
  • Use prettier for code formatting.

Fixed

  • Don't run subtest TODO tests by default, respect runTodo in Runner.
  • Filename resolution for renamed package directory.

0.4.0 - 2018-12-19

Added

  • todo option for cli.

Changed

  • Allow passing an Error to test.bailout.

Fixed

  • Crash on unhandledException in dependentSubtests.
  • Flakiness of unhandledExceptions handling test.
  • Typo unhandledExeption -> unhandledException.
  • Properly check Error instance in test.error().
  • Properly extract bailout and dependentSubtests tests.
  • Race condition in unhandledException test.

0.3.0 - 2018-11-21

Added

  • test.bailout() that will cease execution.
  • Runner.wait()/Runner.resume() to postpone 'finish' report.

Changed

  • Harden testAsync async enforcement test.
  • Make fail only report 'failed' not end test.

Fixed

  • Comparison in declarative tests.

0.2.4 - 2018-11-12

Added

  • test.plan() tests.
  • test.mustCall()/test.mustNotCall() tests.
  • Setter for running 'todo' in Runner.
  • Support for the dependentSubtests option of ImperativeTest that will inform the test to stop running its subtests as soon as at least one of them fails.

Changed

  • Add default empty lambda as 'fn' in test.mustCall()/test.mustNotCall().

Fixed

  • Omit type of undefined values in reporter.
  • todo test reporting.

0.2.3 - 2018-11-01

Changed

  • Improve reporter output.
  • Move eslint related dependencies to devDependencies.

0.2.2 - 2018-10-31

Added

  • Simple function comparison via toString().

Changed

  • Report only existing result properties.
  • Replace obsolete PhantomJS with ChromeHeadless.
  • Use eslint-config-metarhia and fix linting issues.
  • Show config depending on its log level.

Removed

  • Dropped support for Node.js 9.

Fixed

  • Remove actual/expected args from unrelated checks.
  • Omit 'type' while reporting undefined and null.
  • Allow Runner.reporter to be null.
  • Flaky 'nested test that ends after subtests'.
  • Properly output Error instance.
  • Cli incorrect exit code reporting.
  • Error stack missing important information.

0.2.1 - 2018-09-27

Added

  • Mark runner result as failed upon 'error' event.
  • Return (sub)test instance on subtest creation.
  • Tests to improve coverage.
  • Enhance cli to support running tests in Node.js and browser.

Changed

  • Move failure exit from reporter to runner.
  • Improve assert failure reporting.
  • Use 'domain's to catch unhandledExceptions in tests.
  • Slightly simplify and refactor compare.js.
  • Update example in README.md.
  • Move benchmarks from ./test to separate folder.
  • Remove external variable declarations from loops.

Removed

  • Remove unnecessary exit-code test.

Fixed

  • Add missed return in testSync, testAsync aliases.
  • Remove flakiness of setTimeout from test.
  • Don't mark finished tests as failures in Runner.
  • Flaky afterEach() test.
  • Make timed-out tests emit 'done', not 'error'.
  • Incorrect description in example test case.

0.2.0 - 2018-08-29

Added

  • New functionality.
    • Report:
      • ConciseReporter that just prints minimal needed info and is used by default.
    • Runner:
      • New runner class that currently only listens to the tests and propagates their results to Reporter. Uses ConciseReporter by default.
      • Method 'addTest' adds test to current runner to observe.
      • Runner emits 'finish' event when all of the tests it observes have finished.
    • Case:
      • case() function that just creates DeclarativeTest and uses default runner if not provided.
    • Test:
      • isError() to check if you have received an error (previously you'd have to use test.ok(err instanceof Error) or something similar).
      • fail() to fail the test immediately with specified error message.
      • test() that adds a subtest to this test. This test will not end until all subtests have finished (it has 2 aliases: testSync() and testAsync()).
      • beforeEach() method (test, callback) - it will be run before each subtest, and the result (must be an object) it passes to callback will be passed to the subtest (test.test('caption', (subtest, context) => {})).
      • afterEach() method (test, callback) - it will be run after each subtest and next (sequential) subtest will not be run until this method calls callback.
      • endAfterSubtests() method to automatically end this test after all of the subtests have finished.
      • strictNotSame() (like strictSame() but checks for strict non-equality).
      • Allow listening to the test finish event: .on('done', (test) => {}).
      • 'error' event for errors that happened after the test has ended (signature is (test, error)). Example errors: check after end, end after end, end on test-with-plan etc.
      • Test timeout (30s default) that will fail the test if it hasn't finished within the timeout time.

Changed

  • Refactor typeof usage for metatests.
  • Refactor whole tests to remove cyclic deps.
    • Get rid of cyclic dependencies between modules.
    • Namespaces:
      • Removed and replaced with per-case namespace.
    • Report:
      • Refactor Reporter class with general reporting functionality.
      • report() method no longer triggers any tests - it only reports overall result.
      • record() is called on each Test to parse/save the results of individual tests.
      • finish() is called when this reporter has to finish reporting and possibly print some general info.
      • error() will be called when an error has occurred after the test has finished.
    • Case:
      • Refactor without globals and implicit dependencies on other modules.
      • Clean up code, avoid unnecessary checks.
      • Rename to DeclarativeTest that implements Test for declarative tests and uses runNow() method to run what previously 'case' method did.
    • Test:
      • File renamed to imperative-test.js.
      • Refactor with es6 classes, clean up code and aliases.
      • Now each test is separate and therefore any test file can be run as a separate program via plain node (by default the test will run on nextTick unless disabled via the 'run = false' option).
      • All results are recorded, including successful ones (this will allow providing rich error reporting later on and avoid keeping track of unnecessary variables).
      • notOk() is now an alias of assertNot() (previously it was what fail() does now).
      • test() method subtests can be run in parallel, it is controlled by the parallelSubtests (false default) option on the parent test.
      • Test plan now properly checks and finishes the test.
      • Tests can now have metadata.
      • Tests now store results of all checks in 'results'.

0.1.12 - 2018-08-07

Fixed

  • Queuing tests execution.

0.1.11 - 2018-06-27

Changed

  • Use eslint for tests.
  • Finish test automatically if test.plan(n) used.

Removed

  • Remove .bithoundrc and bithound badge in README.md.

0.1.10 - 2018-06-26

Changed

  • Display reports sequentially on done.

Fixed

  • Error thrown by comparing object with null.

0.1.9 - 2018-05-24

Changed

  • Run tests sequentially by default.
  • Change [object Object] representation to json.

0.1.8 - 2018-05-15

Added

  • test.type() and test.error() methods.

Changed

  • Improve reporting.

0.1.7 - 2018-05-14

Changed

  • Move metarhia-common tests to same-named repo.
  • Restructure modules.
  • Unify reporting and shorten stack.
  • Increment statistics and exit 1.

0.1.6 - 2018-05-12

Added

  • Tests for equal().
  • Tests for strictEqual().

Fixed

  • equal() function.
  • strictEqual() function.

0.1.5 - 2018-05-08

Added

  • throws() and doesNotThrow() methods.

0.1.4 - 2018-05-07

Fixed

  • isArray spelling.

0.1.3 - 2018-05-07

Added

  • Deep compare.

0.1.2 - 2018-05-05

Added

  • API for imperative tests.

Changed

  • Render results output.

0.1.1 - 2018-05-02

Added

  • Tests for metarhia-common.

Changed

  • Change badges.
  • Update examples.

0.1.0 - 2018-04-29

Added

  • The first version of the metatests package.