Import Cobalt 16.154703
diff --git a/src/v8/tools/testrunner/README b/src/v8/tools/testrunner/README
deleted file mode 100644
index 0771ef9..0000000
--- a/src/v8/tools/testrunner/README
+++ /dev/null
@@ -1,168 +0,0 @@
-Test suite runner for V8, including support for distributed running.
-====================================================================
-
-
-Local usage instructions:
-=========================
-
-Run the main script with --help to get detailed usage instructions:
-
-$ tools/run-tests.py --help
-
-The interface is mostly the same as it was for the old test runner.
-You'll likely want something like this:
-
-$ tools/run-tests.py --nonetwork --arch ia32 --mode release
-
---nonetwork is the default on Mac and Windows. If you don't specify --arch
-and/or --mode, all available values will be used and run in turn (e.g.,
-omitting --mode from the above example will run ia32 in both Release and Debug
-modes).
-
-
-Networked usage instructions:
-=============================
-
-Networked running is only supported on Linux currently. Make sure that all
-machines participating in the cluster are binary-compatible (e.g. mixing
-Ubuntu Lucid and Precise doesn't work).
-
-Setup:
-------
-
-1.) Copy tools/test-server.py to a new empty directory anywhere on your hard
- drive (preferably not inside your V8 checkout just to keep things clean).
- Please do create a copy, not just a symlink.
-
-2.) Navigate to the new directory and let the server setup itself:
-
-$ ./test-server.py setup
-
- This will install PIP and UltraJSON, create a V8 working directory, and
- generate a keypair.
-
-3.) Swap public keys with someone who's already part of the networked cluster.
-
-$ cp trusted/`cat data/mypubkey`.pem /where/peers/can/see/it/myname.pem
-$ ./test-server.py approve /wherever/they/put/it/yourname.pem
-
-
-Usage:
-------
-
-1.) Start your server:
-
-$ ./test-server.py start
-
-2.) (Optionally) inspect the server's status:
-
-$ ./test-server.py status
-
-3.) From your regular V8 working directory, run tests:
-
-$ tool/run-tests.py --arch ia32 --mode debug
-
-4.) (Optionally) enjoy the speeeeeeeeeeeeeeeed
-
-
-Architecture overview:
-======================
-
-Code organization:
-------------------
-
-This section is written from the point of view of the tools/ directory.
-
-./run-tests.py:
- Main script. Parses command-line options and drives the test execution
- procedure from a high level. Imports the actual implementation of all
- steps from the testrunner/ directory.
-
-./test-server.py:
- Interface to interact with the server. Contains code to setup the server's
- working environment and can start and stop server daemon processes.
- Imports some stuff from the testrunner/server/ directory.
-
-./testrunner/local/*:
- Implementation needed to run tests locally. Used by run-tests.py. Inspired by
- (and partly copied verbatim from) the original test.py script.
-
-./testrunner/objects/*:
- A bunch of data container classes, used by the scripts in the various other
- directories; serializable for transmission over the network.
-
-./testrunner/network/*:
- Equivalents and extensions of some of the functionality in ./testrunner/local/
- as required when dispatching tests to peers on the network.
-
-./testrunner/network/network_execution.py:
- Drop-in replacement for ./testrunner/local/execution that distributes
- test jobs to network peers instead of running them locally.
-
-./testrunner/network/endpoint.py:
- Receiving end of a network distributed job, uses the implementation
- in ./testrunner/local/execution.py for actually running the tests.
-
-./testrunner/server/*:
- Implementation of the daemon that accepts and runs test execution jobs from
- peers on the network. Should ideally have no dependencies on any of the other
- directories, but that turned out to be impractical, so there are a few
- exceptions.
-
-./testrunner/server/compression.py:
- Defines a wrapper around Python TCP sockets that provides JSON based
- serialization, gzip based compression, and ensures message completeness.
-
-
-Networking architecture:
-------------------------
-
-The distribution stuff is designed to be a layer between deciding which tests
-to run on the one side, and actually running them on the other. The frontend
-that the user interacts with is the same for local and networked execution,
-and the actual test execution and result gathering code is the same too.
-
-The server daemon starts four separate servers, each listening on another port:
-- "Local": Communication with a run-tests.py script running on the same host.
- The test driving script e.g. needs to ask for available peers. It then talks
- to those peers directly (one of them will be the locally running server).
-- "Work": Listens for test job requests from run-tests.py scripts on the network
- (including localhost). Accepts an arbitrary number of connections at the
- same time, but only works on them in a serialized fashion.
-- "Status": Used for communication with other servers on the network, e.g. for
- exchanging trusted public keys to create the transitive trust closure.
-- "Discovery": Used to detect presence of other peers on the network.
- In contrast to the other three, this uses UDP (as opposed to TCP).
-
-
-Give us a diagram! We love diagrams!
-------------------------------------
- .
- Machine A . Machine B
- .
-+------------------------------+ .
-| run-tests.py | .
-| with flag: | .
-|--nonetwork --network | .
-| | / | | .
-| | / | | .
-| v / v | .
-|BACKEND / distribution | .
-+--------- / --------| \ ------+ .
- / | \_____________________
- / | . \
- / | . \
-+----- v ----------- v --------+ . +---- v -----------------------+
-| LocalHandler | WorkHandler | . | WorkHandler | LocalHandler |
-| | | | . | | | |
-| | v | . | v | |
-| | BACKEND | . | BACKEND | |
-|------------- +---------------| . |---------------+--------------|
-| Discovery | StatusHandler <----------> StatusHandler | Discovery |
-+---- ^ -----------------------+ . +-------------------- ^ -------+
- | . |
- +---------------------------------------------------------+
-
-Note that the three occurrences of "BACKEND" are the same code
-(testrunner/local/execution.py and its imports), but running from three
-distinct directories (and on two different machines).
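
The compression.py wrapper described above amounts to framed, compressed JSON over a TCP socket. A minimal sketch of that idea follows; the function names and the 4-byte length-prefix framing are illustrative assumptions, not the actual implementation:

```python
import json
import struct
import zlib

def send_message(sock, obj):
    # Serialize to JSON, compress, and prepend a 4-byte big-endian length so
    # the receiver can tell when the message is complete.
    payload = zlib.compress(json.dumps(obj).encode('utf-8'))
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def _recv_exactly(sock, n):
    # recv() may return fewer bytes than requested, so loop until done.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise EOFError('peer closed the connection mid-message')
        chunks.append(chunk)
        n -= len(chunk)
    return b''.join(chunks)

def recv_message(sock):
    (length,) = struct.unpack('!I', _recv_exactly(sock, 4))
    return json.loads(zlib.decompress(_recv_exactly(sock, length)).decode('utf-8'))
```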
diff --git a/src/v8/tools/testrunner/base_runner.py b/src/v8/tools/testrunner/base_runner.py
new file mode 100644
index 0000000..8fc09ee
--- /dev/null
+++ b/src/v8/tools/testrunner/base_runner.py
@@ -0,0 +1,543 @@
+# Copyright 2017 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+
+from collections import OrderedDict
+import json
+import optparse
+import os
+import sys
+
+
+# Add testrunner to the path.
+sys.path.insert(
+ 0,
+ os.path.dirname(
+ os.path.dirname(os.path.abspath(__file__))))
+
+
+from local import testsuite
+from local import utils
+
+from testproc.shard import ShardProc
+
+
+BASE_DIR = (
+ os.path.dirname(
+ os.path.dirname(
+ os.path.dirname(
+ os.path.abspath(__file__)))))
+
+DEFAULT_OUT_GN = 'out.gn'
+
+ARCH_GUESS = utils.DefaultArch()
+
+# Map of test name synonyms to lists of test suites. Should be ordered by
+# expected runtimes (suites with slow test cases first). These groups are
+# invoked in separate steps on the bots.
+TEST_MAP = {
+ # This needs to stay in sync with test/bot_default.isolate.
+ "bot_default": [
+ "debugger",
+ "mjsunit",
+ "cctest",
+ "wasm-spec-tests",
+ "inspector",
+ "webkit",
+ "mkgrokdump",
+ "fuzzer",
+ "message",
+ "preparser",
+ "intl",
+ "unittests",
+ ],
+ # This needs to stay in sync with test/default.isolate.
+ "default": [
+ "debugger",
+ "mjsunit",
+ "cctest",
+ "wasm-spec-tests",
+ "inspector",
+ "mkgrokdump",
+ "fuzzer",
+ "message",
+ "preparser",
+ "intl",
+ "unittests",
+ ],
+ # This needs to stay in sync with test/d8_default.isolate.
+ "d8_default": [
+ # TODO(machenbach): uncomment after infra side lands.
+ #"debugger",
+ "mjsunit",
+ "webkit",
+ #"message",
+ #"preparser",
+ #"intl",
+ ],
+ # This needs to stay in sync with test/optimize_for_size.isolate.
+ "optimize_for_size": [
+ "debugger",
+ "mjsunit",
+ "cctest",
+ "inspector",
+ "webkit",
+ "intl",
+ ],
+ "unittests": [
+ "unittests",
+ ],
+}
+
+
+class ModeConfig(object):
+ def __init__(self, flags, timeout_scalefactor, status_mode, execution_mode):
+ self.flags = flags
+ self.timeout_scalefactor = timeout_scalefactor
+ self.status_mode = status_mode
+ self.execution_mode = execution_mode
+
+
+DEBUG_FLAGS = ["--nohard-abort", "--enable-slow-asserts", "--verify-heap"]
+RELEASE_FLAGS = ["--nohard-abort"]
+MODES = {
+ "debug": ModeConfig(
+ flags=DEBUG_FLAGS,
+ timeout_scalefactor=4,
+ status_mode="debug",
+ execution_mode="debug",
+ ),
+ "optdebug": ModeConfig(
+ flags=DEBUG_FLAGS,
+ timeout_scalefactor=4,
+ status_mode="debug",
+ execution_mode="debug",
+ ),
+ "release": ModeConfig(
+ flags=RELEASE_FLAGS,
+ timeout_scalefactor=1,
+ status_mode="release",
+ execution_mode="release",
+ ),
+  # Normal trybot release configuration. There, dchecks are always on, which
+  # implies debug is set. Hence, the status file needs to assume debug-like
+  # behavior/timeouts.
+ "tryrelease": ModeConfig(
+ flags=RELEASE_FLAGS,
+ timeout_scalefactor=1,
+ status_mode="debug",
+ execution_mode="release",
+ ),
+ # This mode requires v8 to be compiled with dchecks and slow dchecks.
+ "slowrelease": ModeConfig(
+ flags=RELEASE_FLAGS + ["--enable-slow-asserts"],
+ timeout_scalefactor=2,
+ status_mode="debug",
+ execution_mode="release",
+ ),
+}
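
The status/execution split is what lets a trybot run release binaries while judging results against debug expectations. A small illustration of how these fields are consumed, assuming the MODES dict above is in scope:

```python
mode = MODES['tryrelease']
assert mode.execution_mode == 'release'   # run release binaries...
assert mode.status_mode == 'debug'        # ...but apply debug status expectations
assert MODES['debug'].timeout_scalefactor == 4  # debug-like modes get 4x timeouts
```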
+
+
+class TestRunnerError(Exception):
+ pass
+
+
+class BuildConfig(object):
+ def __init__(self, build_config):
+ # In V8 land, GN's x86 is called ia32.
+ if build_config['v8_target_cpu'] == 'x86':
+ self.arch = 'ia32'
+ else:
+ self.arch = build_config['v8_target_cpu']
+
+ self.is_debug = build_config['is_debug']
+ self.asan = build_config['is_asan']
+ self.cfi_vptr = build_config['is_cfi']
+ self.dcheck_always_on = build_config['dcheck_always_on']
+ self.gcov_coverage = build_config['is_gcov_coverage']
+ self.msan = build_config['is_msan']
+ self.no_i18n = not build_config['v8_enable_i18n_support']
+ self.no_snap = not build_config['v8_use_snapshot']
+ self.predictable = build_config['v8_enable_verify_predictable']
+ self.tsan = build_config['is_tsan']
+ self.ubsan_vptr = build_config['is_ubsan_vptr']
+
+ def __str__(self):
+ detected_options = []
+
+ if self.asan:
+ detected_options.append('asan')
+ if self.cfi_vptr:
+ detected_options.append('cfi_vptr')
+ if self.dcheck_always_on:
+ detected_options.append('dcheck_always_on')
+ if self.gcov_coverage:
+ detected_options.append('gcov_coverage')
+ if self.msan:
+ detected_options.append('msan')
+ if self.no_i18n:
+ detected_options.append('no_i18n')
+ if self.no_snap:
+ detected_options.append('no_snap')
+ if self.predictable:
+ detected_options.append('predictable')
+ if self.tsan:
+ detected_options.append('tsan')
+ if self.ubsan_vptr:
+ detected_options.append('ubsan_vptr')
+
+ return '\n'.join(detected_options)
+
+
+class BaseTestRunner(object):
+ def __init__(self, basedir=None):
+ self.basedir = basedir or BASE_DIR
+ self.outdir = None
+ self.build_config = None
+ self.mode_name = None
+ self.mode_options = None
+
+ def execute(self, sys_args=None):
+ if sys_args is None: # pragma: no cover
+ sys_args = sys.argv[1:]
+ try:
+ parser = self._create_parser()
+ options, args = self._parse_args(parser, sys_args)
+
+ self._load_build_config(options)
+
+ try:
+ self._process_default_options(options)
+ self._process_options(options)
+ except TestRunnerError:
+ parser.print_help()
+ raise
+
+ args = self._parse_test_args(args)
+ suites = self._get_suites(args, options.verbose)
+
+ self._setup_env()
+ return self._do_execute(suites, args, options)
+ except TestRunnerError:
+ return 1
+ except KeyboardInterrupt:
+ return 2
+
+ def _create_parser(self):
+ parser = optparse.OptionParser()
+ parser.usage = '%prog [options] [tests]'
+ parser.description = """TESTS: %s""" % (TEST_MAP["default"])
+ self._add_parser_default_options(parser)
+ self._add_parser_options(parser)
+ return parser
+
+ def _add_parser_default_options(self, parser):
+ parser.add_option("--gn", help="Scan out.gn for the last built"
+ " configuration",
+ default=False, action="store_true")
+ parser.add_option("--outdir", help="Base directory with compile output",
+ default="out")
+ parser.add_option("--buildbot", help="DEPRECATED!",
+ default=False, action="store_true")
+ parser.add_option("--arch",
+ help="The architecture to run tests for")
+ parser.add_option("-m", "--mode",
+ help="The test mode in which to run (uppercase for ninja"
+ " and buildbot builds): %s" % MODES.keys())
+ parser.add_option("--shell-dir", help="DEPRECATED! Executables from build "
+ "directory will be used")
+ parser.add_option("-v", "--verbose", help="Verbose output",
+ default=False, action="store_true")
+ parser.add_option("--shard-count",
+ help="Split tests into this number of shards",
+ default=1, type="int")
+ parser.add_option("--shard-run",
+ help="Run this shard from the split up tests.",
+ default=1, type="int")
+
+ def _add_parser_options(self, parser):
+ pass
+
+ def _parse_args(self, parser, sys_args):
+ options, args = parser.parse_args(sys_args)
+
+ if any(map(lambda v: v and ',' in v,
+ [options.arch, options.mode])): # pragma: no cover
+      print 'Multiple arch/mode values are deprecated'
+ raise TestRunnerError()
+
+ return options, args
+
+ def _load_build_config(self, options):
+ for outdir in self._possible_outdirs(options):
+ try:
+ self.build_config = self._do_load_build_config(outdir, options.verbose)
+ except TestRunnerError:
+ pass
+
+ if not self.build_config: # pragma: no cover
+ print 'Failed to load build config'
+ raise TestRunnerError
+
+ print 'Build found: %s' % self.outdir
+ if str(self.build_config):
+ print '>>> Autodetected:'
+ print self.build_config
+
+ # Returns possible build paths in order:
+ # gn
+ # outdir
+ # outdir/arch.mode
+ # Each path is provided in two versions: <path> and <path>/mode for buildbot.
+ def _possible_outdirs(self, options):
+ def outdirs():
+ if options.gn:
+ yield self._get_gn_outdir()
+ return
+
+ yield options.outdir
+ if options.arch and options.mode:
+ yield os.path.join(options.outdir,
+ '%s.%s' % (options.arch, options.mode))
+
+ for outdir in outdirs():
+ yield os.path.join(self.basedir, outdir)
+
+ # buildbot option
+ if options.mode:
+ yield os.path.join(self.basedir, outdir, options.mode)
+
+ def _get_gn_outdir(self):
+ gn_out_dir = os.path.join(self.basedir, DEFAULT_OUT_GN)
+ latest_timestamp = -1
+ latest_config = None
+ for gn_config in os.listdir(gn_out_dir):
+ gn_config_dir = os.path.join(gn_out_dir, gn_config)
+ if not os.path.isdir(gn_config_dir):
+ continue
+ if os.path.getmtime(gn_config_dir) > latest_timestamp:
+ latest_timestamp = os.path.getmtime(gn_config_dir)
+ latest_config = gn_config
+ if latest_config:
+ print(">>> Latest GN build found: %s" % latest_config)
+ return os.path.join(DEFAULT_OUT_GN, latest_config)
+
+ def _do_load_build_config(self, outdir, verbose=False):
+ build_config_path = os.path.join(outdir, "v8_build_config.json")
+ if not os.path.exists(build_config_path):
+ if verbose:
+ print("Didn't find build config: %s" % build_config_path)
+ raise TestRunnerError()
+
+ with open(build_config_path) as f:
+ try:
+ build_config_json = json.load(f)
+ except Exception: # pragma: no cover
+ print("%s exists but contains invalid json. Is your build up-to-date?"
+ % build_config_path)
+ raise TestRunnerError()
+
+ # In auto-detect mode the outdir is always where we found the build config.
+ # This ensures that we'll also take the build products from there.
+ self.outdir = os.path.dirname(build_config_path)
+
+ return BuildConfig(build_config_json)
+
+ def _process_default_options(self, options):
+    # We don't use the mode for more path-magic.
+    # Therefore, transform the buildbot mode here to normalize it for the
+    # build-config consistency check below.
+ if options.mode:
+ options.mode = self._buildbot_to_v8_mode(options.mode)
+
+ build_config_mode = 'debug' if self.build_config.is_debug else 'release'
+ if options.mode:
+ if options.mode not in MODES: # pragma: no cover
+ print '%s mode is invalid' % options.mode
+ raise TestRunnerError()
+ if MODES[options.mode].execution_mode != build_config_mode:
+ print ('execution mode (%s) for %s is inconsistent with build config '
+ '(%s)' % (
+ MODES[options.mode].execution_mode,
+ options.mode,
+ build_config_mode))
+ raise TestRunnerError()
+
+ self.mode_name = options.mode
+ else:
+ self.mode_name = build_config_mode
+
+ self.mode_options = MODES[self.mode_name]
+
+ if options.arch and options.arch != self.build_config.arch:
+ print('--arch value (%s) inconsistent with build config (%s).' % (
+ options.arch, self.build_config.arch))
+ raise TestRunnerError()
+
+ if options.shell_dir: # pragma: no cover
+ print('Warning: --shell-dir is deprecated. Searching for executables in '
+ 'build directory (%s) instead.' % self.outdir)
+
+ def _buildbot_to_v8_mode(self, config):
+ """Convert buildbot build configs to configs understood by the v8 runner.
+
+ V8 configs are always lower case and without the additional _x64 suffix
+ for 64 bit builds on windows with ninja.
+ """
+ mode = config[:-4] if config.endswith('_x64') else config
+ return mode.lower()
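
For reference, the conversion is just a suffix strip plus lower-casing; a standalone restatement with illustrative config names:

```python
def buildbot_to_v8_mode(config):
    # Same logic as _buildbot_to_v8_mode above, repeated standalone.
    mode = config[:-4] if config.endswith('_x64') else config
    return mode.lower()

assert buildbot_to_v8_mode('Release_x64') == 'release'
assert buildbot_to_v8_mode('Debug') == 'debug'
assert buildbot_to_v8_mode('optdebug') == 'optdebug'
```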
+
+ def _process_options(self, options):
+ pass
+
+ def _setup_env(self):
+ # Use the v8 root as cwd as some test cases use "load" with relative paths.
+ os.chdir(self.basedir)
+
+ # Many tests assume an English interface.
+ os.environ['LANG'] = 'en_US.UTF-8'
+
+ symbolizer_option = self._get_external_symbolizer_option()
+
+ if self.build_config.asan:
+ asan_options = [
+ symbolizer_option,
+ 'allow_user_segv_handler=1',
+ 'allocator_may_return_null=1',
+ ]
+ if not utils.GuessOS() in ['macos', 'windows']:
+ # LSAN is not available on mac and windows.
+ asan_options.append('detect_leaks=1')
+ else:
+ asan_options.append('detect_leaks=0')
+ os.environ['ASAN_OPTIONS'] = ":".join(asan_options)
+
+ if self.build_config.cfi_vptr:
+ os.environ['UBSAN_OPTIONS'] = ":".join([
+ 'print_stacktrace=1',
+ 'print_summary=1',
+ 'symbolize=1',
+ symbolizer_option,
+ ])
+
+ if self.build_config.ubsan_vptr:
+ os.environ['UBSAN_OPTIONS'] = ":".join([
+ 'print_stacktrace=1',
+ symbolizer_option,
+ ])
+
+ if self.build_config.msan:
+ os.environ['MSAN_OPTIONS'] = symbolizer_option
+
+ if self.build_config.tsan:
+ suppressions_file = os.path.join(
+ self.basedir,
+ 'tools',
+ 'sanitizers',
+ 'tsan_suppressions.txt')
+ os.environ['TSAN_OPTIONS'] = " ".join([
+ symbolizer_option,
+ 'suppressions=%s' % suppressions_file,
+ 'exit_code=0',
+ 'report_thread_leaks=0',
+ 'history_size=7',
+ 'report_destroy_locked=0',
+ ])
+
+ def _get_external_symbolizer_option(self):
+ external_symbolizer_path = os.path.join(
+ self.basedir,
+ 'third_party',
+ 'llvm-build',
+ 'Release+Asserts',
+ 'bin',
+ 'llvm-symbolizer',
+ )
+
+ if utils.IsWindows():
+ # Quote, because sanitizers might confuse colon as option separator.
+ external_symbolizer_path = '"%s.exe"' % external_symbolizer_path
+
+ return 'external_symbolizer_path=%s' % external_symbolizer_path
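
Putting _setup_env and _get_external_symbolizer_option together, the resulting ASAN_OPTIONS value on Linux would look like the string printed below; the checkout path /src/v8 is a made-up example:

```python
symbolizer = ('external_symbolizer_path='
              '/src/v8/third_party/llvm-build/Release+Asserts/bin/llvm-symbolizer')
print(':'.join([symbolizer,
                'allow_user_segv_handler=1',
                'allocator_may_return_null=1',
                'detect_leaks=1']))  # detect_leaks=0 on mac/windows instead
```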
+
+ def _parse_test_args(self, args):
+ if not args:
+ args = self._get_default_suite_names()
+
+ # Expand arguments with grouped tests. The args should reflect the list
+ # of suites as otherwise filters would break.
+ def expand_test_group(name):
+ return TEST_MAP.get(name, [name])
+
+ return reduce(list.__add__, map(expand_test_group, args), [])
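
The reduce/map pipeline flattens group names into suite names while passing plain test paths through untouched; a standalone restatement under an assumed minimal TEST_MAP:

```python
from functools import reduce  # builtin on Python 2, functools on Python 3

def expand(args, test_map):
    # Each group name is replaced by its suite list; anything else passes
    # through as a one-element list.
    return reduce(list.__add__, [test_map.get(a, [a]) for a in args], [])

assert expand(['unittests', 'mjsunit/array-sort'],
              {'unittests': ['unittests']}) == \
    ['unittests', 'mjsunit/array-sort']
```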
+
+ def _get_suites(self, args, verbose=False):
+ names = self._args_to_suite_names(args)
+ return self._load_suites(names, verbose)
+
+ def _args_to_suite_names(self, args):
+ # Use default tests if no test configuration was provided at the cmd line.
+ all_names = set(utils.GetSuitePaths(os.path.join(self.basedir, 'test')))
+ args_names = OrderedDict([(arg.split('/')[0], None) for arg in args]) # set
+ return [name for name in args_names if name in all_names]
+
+ def _get_default_suite_names(self):
+ return []
+
+ def _expand_test_group(self, name):
+ return TEST_MAP.get(name, [name])
+
+ def _load_suites(self, names, verbose=False):
+ def load_suite(name):
+ if verbose:
+ print '>>> Loading test suite: %s' % name
+ return testsuite.TestSuite.LoadTestSuite(
+ os.path.join(self.basedir, 'test', name))
+ return map(load_suite, names)
+
+ # TODO(majeski): remove options & args parameters
+ def _do_execute(self, suites, args, options):
+ raise NotImplementedError()
+
+ def _create_shard_proc(self, options):
+ myid, count = self._get_shard_info(options)
+ if count == 1:
+ return None
+ return ShardProc(myid - 1, count)
+
+ def _get_shard_info(self, options):
+ """
+ Returns pair:
+ (id of the current shard [1; number of shards], number of shards)
+ """
+ # Read gtest shard configuration from environment (e.g. set by swarming).
+ # If none is present, use values passed on the command line.
+ shard_count = int(
+ os.environ.get('GTEST_TOTAL_SHARDS', options.shard_count))
+ shard_run = os.environ.get('GTEST_SHARD_INDEX')
+ if shard_run is not None:
+ # The v8 shard_run starts at 1, while GTEST_SHARD_INDEX starts at 0.
+ shard_run = int(shard_run) + 1
+ else:
+ shard_run = options.shard_run
+
+ if options.shard_count > 1:
+ # Log if a value was passed on the cmd line and it differs from the
+ # environment variables.
+ if options.shard_count != shard_count: # pragma: no cover
+ print("shard_count from cmd line differs from environment variable "
+ "GTEST_TOTAL_SHARDS")
+ if (options.shard_run > 1 and
+ options.shard_run != shard_run): # pragma: no cover
+ print("shard_run from cmd line differs from environment variable "
+ "GTEST_SHARD_INDEX")
+
+ if shard_run < 1 or shard_run > shard_count:
+ # TODO(machenbach): Turn this into an assert. If that's wrong on the
+ # bots, printing will be quite useless. Or refactor this code to make
+ # sure we get a return code != 0 after testing if we got here.
+ print "shard-run not a valid number, should be in [1:shard-count]"
+ print "defaulting back to running all tests"
+ return 1, 1
+
+ return shard_run, shard_count
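
Concretely, swarming's 0-based GTEST_SHARD_INDEX maps to the runner's 1-based shard_run, and ShardProc is then constructed with the index shifted back to 0-based. A sketch of the round trip; the environment values are examples:

```python
import os

os.environ['GTEST_TOTAL_SHARDS'] = '4'
os.environ['GTEST_SHARD_INDEX'] = '0'   # swarming: first of four shards

shard_count = int(os.environ['GTEST_TOTAL_SHARDS'])
shard_run = int(os.environ['GTEST_SHARD_INDEX']) + 1  # v8 runner: 1-based
assert (shard_run, shard_count) == (1, 4)
# _create_shard_proc would then build ShardProc(shard_run - 1, shard_count).
```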
diff --git a/src/v8/tools/testrunner/deopt_fuzzer.py b/src/v8/tools/testrunner/deopt_fuzzer.py
new file mode 100755
index 0000000..5e6b79f
--- /dev/null
+++ b/src/v8/tools/testrunner/deopt_fuzzer.py
@@ -0,0 +1,336 @@
+#!/usr/bin/env python
+#
+# Copyright 2017 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+
+from os.path import join
+import json
+import math
+import multiprocessing
+import os
+import random
+import shlex
+import sys
+import time
+
+# Adds testrunner to the path, hence it has to be imported at the beginning.
+import base_runner
+
+from testrunner.local import execution
+from testrunner.local import progress
+from testrunner.local import testsuite
+from testrunner.local import utils
+from testrunner.local import verbose
+from testrunner.objects import context
+
+
+DEFAULT_SUITES = ["mjsunit", "webkit"]
+TIMEOUT_DEFAULT = 60
+
+# Double the timeout for these:
+SLOW_ARCHS = ["arm",
+ "mipsel"]
+MAX_DEOPT = 1000000000
+DISTRIBUTION_MODES = ["smooth", "random"]
+
+
+class DeoptFuzzer(base_runner.BaseTestRunner):
+ def __init__(self, *args, **kwargs):
+ super(DeoptFuzzer, self).__init__(*args, **kwargs)
+
+ class RandomDistribution:
+ def __init__(self, seed=None):
+ seed = seed or random.randint(1, sys.maxint)
+ print "Using random distribution with seed %d" % seed
+ self._random = random.Random(seed)
+
+ def Distribute(self, n, m):
+ if n > m:
+ n = m
+ return self._random.sample(xrange(1, m + 1), n)
+
+ class SmoothDistribution:
+    """Distribute n numbers into the interval [1:m].
+    F1: Factor of the first derivative of the distribution function.
+    F2: Factor of the second derivative of the distribution function.
+    With F1 and F2 set to 0, the distribution will be uniform.
+    """
+ def __init__(self, factor1=2.0, factor2=0.2):
+ self._factor1 = factor1
+ self._factor2 = factor2
+
+ def Distribute(self, n, m):
+ if n > m:
+ n = m
+ if n <= 1:
+ return [ 1 ]
+
+ result = []
+ x = 0.0
+ dx = 1.0
+ ddx = self._factor1
+ dddx = self._factor2
+ for i in range(0, n):
+ result += [ x ]
+ x += dx
+ dx += ddx
+ ddx += dddx
+
+      # Project the distribution into the interval [0:m].
+ result = [ x * m / result[-1] for x in result ]
+
+      # Equalize by n. The closer n is to m, the more uniform the
+      # distribution will be.
+ for (i, x) in enumerate(result):
+ # The value of x if it was equally distributed.
+ equal_x = i / float(n - 1) * float(m - 1) + 1
+
+ # Difference factor between actual and equal distribution.
+ diff = 1 - (x / equal_x)
+
+ # Equalize x dependent on the number of values to distribute.
+ result[i] = int(x + (i + 1) * diff)
+ return result
+
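
With the default factors (2.0, 0.2), the growing dx front-loads the distribution: values cluster near the start of [1:m] and spread out toward the end. A worked example, computed from the code above:

```python
dist = DeoptFuzzer.SmoothDistribution(factor1=2.0, factor2=0.2)
# Gaps widen from 6 to 45 across the interval:
assert dist.Distribute(5, 100) == [1, 7, 25, 55, 100]
```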
+
+ def _distribution(self, options):
+ if options.distribution_mode == "random":
+ return self.RandomDistribution(options.seed)
+ if options.distribution_mode == "smooth":
+ return self.SmoothDistribution(options.distribution_factor1,
+ options.distribution_factor2)
+
+
+ def _add_parser_options(self, parser):
+ parser.add_option("--command-prefix",
+ help="Prepended to each shell command used to run a test",
+ default="")
+ parser.add_option("--coverage", help=("Exponential test coverage "
+ "(range 0.0, 1.0) - 0.0: one test, 1.0 all tests (slow)"),
+ default=0.4, type="float")
+ parser.add_option("--coverage-lift", help=("Lifts test coverage for tests "
+ "with a small number of deopt points (range 0, inf)"),
+ default=20, type="int")
+    parser.add_option("--distribution-factor1", help=("Factor of the first "
+                      "derivative of the distribution function"), default=2.0,
+                      type="float")
+    parser.add_option("--distribution-factor2", help=("Factor of the second "
+                      "derivative of the distribution function"), default=0.7,
+                      type="float")
+ parser.add_option("--distribution-mode", help=("How to select deopt points "
+ "for a given test (smooth|random)"),
+ default="smooth")
+ parser.add_option("--dump-results-file", help=("Dump maximum number of "
+ "deopt points per test to a file"))
+ parser.add_option("--extra-flags",
+ help="Additional flags to pass to each test command",
+ default="")
+ parser.add_option("--isolates", help="Whether to test isolates",
+ default=False, action="store_true")
+ parser.add_option("-j", help="The number of parallel tasks to run",
+ default=0, type="int")
+ parser.add_option("-p", "--progress",
+ help=("The style of progress indicator"
+ " (verbose, dots, color, mono)"),
+ choices=progress.PROGRESS_INDICATORS.keys(),
+ default="mono")
+ parser.add_option("--seed", help="The seed for the random distribution",
+ type="int")
+ parser.add_option("-t", "--timeout", help="Timeout in seconds",
+ default= -1, type="int")
+ parser.add_option("--random-seed", default=0, dest="random_seed",
+ help="Default seed for initializing random generator")
+ parser.add_option("--fuzzer-random-seed", default=0,
+ help="Default seed for initializing fuzzer random "
+ "generator")
+ return parser
+
+
+ def _process_options(self, options):
+ # Special processing of other options, sorted alphabetically.
+ options.command_prefix = shlex.split(options.command_prefix)
+ options.extra_flags = shlex.split(options.extra_flags)
+ if options.j == 0:
+ options.j = multiprocessing.cpu_count()
+ while options.random_seed == 0:
+ options.random_seed = random.SystemRandom().randint(-2147483648,
+ 2147483647)
+    if options.distribution_mode not in DISTRIBUTION_MODES:
+ print "Unknown distribution mode %s" % options.distribution_mode
+ return False
+ if options.distribution_factor1 < 0.0:
+ print ("Distribution factor1 %s is out of range. Defaulting to 0.0"
+ % options.distribution_factor1)
+ options.distribution_factor1 = 0.0
+ if options.distribution_factor2 < 0.0:
+ print ("Distribution factor2 %s is out of range. Defaulting to 0.0"
+ % options.distribution_factor2)
+ options.distribution_factor2 = 0.0
+ if options.coverage < 0.0 or options.coverage > 1.0:
+ print ("Coverage %s is out of range. Defaulting to 0.4"
+ % options.coverage)
+ options.coverage = 0.4
+ if options.coverage_lift < 0:
+ print ("Coverage lift %s is out of range. Defaulting to 0"
+ % options.coverage_lift)
+ options.coverage_lift = 0
+ return True
+
+ def _calculate_n_tests(self, m, options):
+ """Calculates the number of tests from m deopt points with exponential
+ coverage.
+ The coverage is expected to be between 0.0 and 1.0.
+ The 'coverage lift' lifts the coverage for tests with smaller m values.
+ """
+ c = float(options.coverage)
+ l = float(options.coverage_lift)
+ return int(math.pow(m, (m * c + l) / (m + l)))
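
Worked through with the defaults (coverage c=0.4, lift l=20): the exponent (m*c + l) / (m + l) approaches c for large m and 1.0 for small m, so tests with few deopt points keep near-full coverage. For instance:

```python
import math

# m=100 deopt points: exponent (40+20)/120 = 0.5 -> 100**0.5 = 10 tests.
assert int(math.pow(100, (100 * 0.4 + 20) / (100 + 20.0))) == 10
# m=4 deopt points: exponent (1.6+20)/24 = 0.9 -> int(4**0.9) = 3 tests.
assert int(math.pow(4, (4 * 0.4 + 20) / (4 + 20.0))) == 3
```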
+
+ def _get_default_suite_names(self):
+ return DEFAULT_SUITES
+
+ def _do_execute(self, suites, args, options):
+ print(">>> Running tests for %s.%s" % (self.build_config.arch,
+ self.mode_name))
+
+ dist = self._distribution(options)
+
+ # Populate context object.
+ timeout = options.timeout
+ if timeout == -1:
+ # Simulators are slow, therefore allow a longer default timeout.
+ if self.build_config.arch in SLOW_ARCHS:
+        timeout = 2 * TIMEOUT_DEFAULT
+ else:
+        timeout = TIMEOUT_DEFAULT
+
+ timeout *= self.mode_options.timeout_scalefactor
+ ctx = context.Context(self.build_config.arch,
+ self.mode_options.execution_mode,
+ self.outdir,
+ self.mode_options.flags, options.verbose,
+ timeout, options.isolates,
+ options.command_prefix,
+ options.extra_flags,
+ False, # Keep i18n on by default.
+ options.random_seed,
+ True, # No sorting of test cases.
+ 0, # Don't rerun failing tests.
+ 0, # No use of a rerun-failing-tests maximum.
+ False, # No no_harness mode.
+ False, # Don't use perf data.
+ False) # Coverage not supported.
+
+ # Find available test suites and read test cases from them.
+ variables = {
+ "arch": self.build_config.arch,
+ "asan": self.build_config.asan,
+ "byteorder": sys.byteorder,
+ "dcheck_always_on": self.build_config.dcheck_always_on,
+ "deopt_fuzzer": True,
+ "gc_fuzzer": False,
+ "gc_stress": False,
+ "gcov_coverage": self.build_config.gcov_coverage,
+ "isolates": options.isolates,
+ "mode": self.mode_options.status_mode,
+ "msan": self.build_config.msan,
+ "no_harness": False,
+ "no_i18n": self.build_config.no_i18n,
+ "no_snap": self.build_config.no_snap,
+ "novfp3": False,
+ "predictable": self.build_config.predictable,
+ "simulator": utils.UseSimulator(self.build_config.arch),
+ "simulator_run": False,
+ "system": utils.GuessOS(),
+ "tsan": self.build_config.tsan,
+ "ubsan_vptr": self.build_config.ubsan_vptr,
+ }
+ num_tests = 0
+ test_id = 0
+
+ # Remember test case prototypes for the fuzzing phase.
+ test_backup = dict((s, []) for s in suites)
+
+ for s in suites:
+ s.ReadStatusFile(variables)
+ s.ReadTestCases(ctx)
+ if len(args) > 0:
+ s.FilterTestCasesByArgs(args)
+ s.FilterTestCasesByStatus(False)
+
+ test_backup[s] = s.tests
+ analysis_flags = ["--deopt-every-n-times", "%d" % MAX_DEOPT,
+ "--print-deopt-stress"]
+ s.tests = [t.create_variant(t.variant, analysis_flags, 'analysis')
+ for t in s.tests]
+ num_tests += len(s.tests)
+ for t in s.tests:
+ t.id = test_id
+ t.cmd = t.get_command(ctx)
+ test_id += 1
+
+ if num_tests == 0:
+ print "No tests to run."
+ return 0
+
+ print(">>> Collection phase")
+ progress_indicator = progress.PROGRESS_INDICATORS[options.progress]()
+ runner = execution.Runner(suites, progress_indicator, ctx)
+
+ exit_code = runner.Run(options.j)
+
+ print(">>> Analysis phase")
+ num_tests = 0
+ test_id = 0
+ for s in suites:
+ test_results = {}
+ for t in s.tests:
+ for line in runner.outputs[t].stdout.splitlines():
+ if line.startswith("=== Stress deopt counter: "):
+ test_results[t.path] = MAX_DEOPT - int(line.split(" ")[-1])
+ for t in s.tests:
+ if t.path not in test_results:
+ print "Missing results for %s" % t.path
+ if options.dump_results_file:
+ results_dict = dict((t.path, n) for (t, n) in test_results.iteritems())
+        with open("%s.%d.txt" % (options.dump_results_file, time.time()),
+                  "w") as f:
+ f.write(json.dumps(results_dict))
+
+ # Reset tests and redistribute the prototypes from the collection phase.
+ s.tests = []
+ if options.verbose:
+ print "Test distributions:"
+ for t in test_backup[s]:
+ max_deopt = test_results.get(t.path, 0)
+ if max_deopt == 0:
+ continue
+ n_deopt = self._calculate_n_tests(max_deopt, options)
+ distribution = dist.Distribute(n_deopt, max_deopt)
+ if options.verbose:
+ print "%s %s" % (t.path, distribution)
+ for n, d in enumerate(distribution):
+ fuzzing_flags = ["--deopt-every-n-times", "%d" % d]
+ s.tests.append(t.create_variant(t.variant, fuzzing_flags, n))
+ num_tests += len(s.tests)
+ for t in s.tests:
+ t.id = test_id
+ t.cmd = t.get_command(ctx)
+ test_id += 1
+
+ if num_tests == 0:
+ print "No tests to run."
+ return exit_code
+
+ print(">>> Deopt fuzzing phase (%d test cases)" % num_tests)
+ progress_indicator = progress.PROGRESS_INDICATORS[options.progress]()
+ runner = execution.Runner(suites, progress_indicator, ctx)
+
+ code = runner.Run(options.j)
+ return exit_code or code
+
+
+if __name__ == '__main__':
+ sys.exit(DeoptFuzzer().execute())
diff --git a/src/v8/tools/testrunner/gc_fuzzer.py b/src/v8/tools/testrunner/gc_fuzzer.py
new file mode 100755
index 0000000..18be227
--- /dev/null
+++ b/src/v8/tools/testrunner/gc_fuzzer.py
@@ -0,0 +1,280 @@
+#!/usr/bin/env python
+#
+# Copyright 2017 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+
+from os.path import join
+import itertools
+import json
+import math
+import multiprocessing
+import os
+import random
+import shlex
+import sys
+import time
+
+# Adds testrunner to the path, hence it has to be imported at the beginning.
+import base_runner
+
+from testrunner.local import execution
+from testrunner.local import progress
+from testrunner.local import testsuite
+from testrunner.local import utils
+from testrunner.local import verbose
+from testrunner.objects import context
+
+
+DEFAULT_SUITES = ["mjsunit", "webkit", "benchmarks"]
+TIMEOUT_DEFAULT = 60
+
+# Double the timeout for these:
+SLOW_ARCHS = ["arm",
+ "mipsel"]
+
+
+class GCFuzzer(base_runner.BaseTestRunner):
+ def __init__(self, *args, **kwargs):
+ super(GCFuzzer, self).__init__(*args, **kwargs)
+
+ self.fuzzer_rng = None
+
+ def _add_parser_options(self, parser):
+ parser.add_option("--command-prefix",
+ help="Prepended to each shell command used to run a test",
+ default="")
+ parser.add_option("--coverage", help=("Exponential test coverage "
+ "(range 0.0, 1.0) - 0.0: one test, 1.0 all tests (slow)"),
+ default=0.4, type="float")
+ parser.add_option("--coverage-lift", help=("Lifts test coverage for tests "
+ "with a low memory size reached (range 0, inf)"),
+ default=20, type="int")
+ parser.add_option("--dump-results-file", help="Dump maximum limit reached")
+ parser.add_option("--extra-flags",
+ help="Additional flags to pass to each test command",
+ default="")
+ parser.add_option("--isolates", help="Whether to test isolates",
+ default=False, action="store_true")
+ parser.add_option("-j", help="The number of parallel tasks to run",
+ default=0, type="int")
+ parser.add_option("-p", "--progress",
+ help=("The style of progress indicator"
+ " (verbose, dots, color, mono)"),
+ choices=progress.PROGRESS_INDICATORS.keys(),
+ default="mono")
+ parser.add_option("-t", "--timeout", help="Timeout in seconds",
+ default= -1, type="int")
+ parser.add_option("--random-seed", default=0,
+ help="Default seed for initializing random generator")
+ parser.add_option("--fuzzer-random-seed", default=0,
+ help="Default seed for initializing fuzzer random "
+ "generator")
+ parser.add_option("--stress-compaction", default=False, action="store_true",
+ help="Enable stress_compaction_percentage flag")
+
+ parser.add_option("--distribution-factor1", help="DEPRECATED")
+ parser.add_option("--distribution-factor2", help="DEPRECATED")
+ parser.add_option("--distribution-mode", help="DEPRECATED")
+ parser.add_option("--seed", help="DEPRECATED")
+ return parser
+
+
+ def _process_options(self, options):
+ # Special processing of other options, sorted alphabetically.
+ options.command_prefix = shlex.split(options.command_prefix)
+ options.extra_flags = shlex.split(options.extra_flags)
+ if options.j == 0:
+ options.j = multiprocessing.cpu_count()
+ while options.random_seed == 0:
+ options.random_seed = random.SystemRandom().randint(-2147483648,
+ 2147483647)
+ while options.fuzzer_random_seed == 0:
+ options.fuzzer_random_seed = random.SystemRandom().randint(-2147483648,
+ 2147483647)
+ self.fuzzer_rng = random.Random(options.fuzzer_random_seed)
+ return True
+
+ def _calculate_n_tests(self, m, options):
+ """Calculates the number of tests from m points with exponential coverage.
+ The coverage is expected to be between 0.0 and 1.0.
+ The 'coverage lift' lifts the coverage for tests with smaller m values.
+ """
+ c = float(options.coverage)
+ l = float(options.coverage_lift)
+ return int(math.pow(m, (m * c + l) / (m + l)))
+
+ def _get_default_suite_names(self):
+ return DEFAULT_SUITES
+
+ def _do_execute(self, suites, args, options):
+ print(">>> Running tests for %s.%s" % (self.build_config.arch,
+ self.mode_name))
+
+ # Populate context object.
+ timeout = options.timeout
+ if timeout == -1:
+ # Simulators are slow, therefore allow a longer default timeout.
+ if self.build_config.arch in SLOW_ARCHS:
+        timeout = 2 * TIMEOUT_DEFAULT
+ else:
+        timeout = TIMEOUT_DEFAULT
+
+ timeout *= self.mode_options.timeout_scalefactor
+ ctx = context.Context(self.build_config.arch,
+ self.mode_options.execution_mode,
+ self.outdir,
+ self.mode_options.flags, options.verbose,
+ timeout, options.isolates,
+ options.command_prefix,
+ options.extra_flags,
+ False, # Keep i18n on by default.
+ options.random_seed,
+ True, # No sorting of test cases.
+ 0, # Don't rerun failing tests.
+ 0, # No use of a rerun-failing-tests maximum.
+ False, # No no_harness mode.
+ False, # Don't use perf data.
+ False) # Coverage not supported.
+
+ num_tests = self._load_tests(args, options, suites, ctx)
+ if num_tests == 0:
+ print "No tests to run."
+ return 0
+
+ test_backup = dict(map(lambda s: (s, s.tests), suites))
+
+ print('>>> Collection phase')
+ for s in suites:
+ analysis_flags = ['--fuzzer-gc-analysis']
+ s.tests = map(lambda t: t.create_variant(t.variant, analysis_flags,
+ 'analysis'),
+ s.tests)
+ for t in s.tests:
+ t.cmd = t.get_command(ctx)
+
+ progress_indicator = progress.PROGRESS_INDICATORS[options.progress]()
+ runner = execution.Runner(suites, progress_indicator, ctx)
+ exit_code = runner.Run(options.j)
+
+ print('>>> Analysis phase')
+ test_results = dict()
+ for s in suites:
+ for t in s.tests:
+ # Skip failed tests.
+ if t.output_proc.has_unexpected_output(runner.outputs[t]):
+ print '%s failed, skipping' % t.path
+ continue
+ max_limit = self._get_max_limit_reached(runner.outputs[t])
+ if max_limit:
+ test_results[t.path] = max_limit
+
+ runner = None
+
+ if options.dump_results_file:
+      with open("%s.%d.txt" % (options.dump_results_file, time.time()),
+                "w") as f:
+ f.write(json.dumps(test_results))
+
+ num_tests = 0
+ for s in suites:
+ s.tests = []
+ for t in test_backup[s]:
+ max_percent = test_results.get(t.path, 0)
+ if not max_percent or max_percent < 1.0:
+ continue
+ max_percent = int(max_percent)
+
+ subtests_count = self._calculate_n_tests(max_percent, options)
+
+ if options.verbose:
+ print ('%s [x%d] (max marking limit=%.02f)' %
+ (t.path, subtests_count, max_percent))
+ for i in xrange(0, subtests_count):
+ fuzzer_seed = self._next_fuzzer_seed()
+ fuzzing_flags = [
+ '--stress_marking', str(max_percent),
+ '--fuzzer_random_seed', str(fuzzer_seed),
+ ]
+ if options.stress_compaction:
+ fuzzing_flags.append('--stress_compaction_random')
+ s.tests.append(t.create_variant(t.variant, fuzzing_flags, i))
+ for t in s.tests:
+ t.cmd = t.get_command(ctx)
+ num_tests += len(s.tests)
+
+ if num_tests == 0:
+ print "No tests to run."
+ return exit_code
+
+ print(">>> Fuzzing phase (%d test cases)" % num_tests)
+ progress_indicator = progress.PROGRESS_INDICATORS[options.progress]()
+ runner = execution.Runner(suites, progress_indicator, ctx)
+
+ return runner.Run(options.j) or exit_code
+
+ def _load_tests(self, args, options, suites, ctx):
+ # Find available test suites and read test cases from them.
+ variables = {
+ "arch": self.build_config.arch,
+ "asan": self.build_config.asan,
+ "byteorder": sys.byteorder,
+ "dcheck_always_on": self.build_config.dcheck_always_on,
+ "deopt_fuzzer": False,
+ "gc_fuzzer": True,
+ "gc_stress": False,
+ "gcov_coverage": self.build_config.gcov_coverage,
+ "isolates": options.isolates,
+ "mode": self.mode_options.status_mode,
+ "msan": self.build_config.msan,
+ "no_harness": False,
+ "no_i18n": self.build_config.no_i18n,
+ "no_snap": self.build_config.no_snap,
+ "novfp3": False,
+ "predictable": self.build_config.predictable,
+ "simulator": utils.UseSimulator(self.build_config.arch),
+ "simulator_run": False,
+ "system": utils.GuessOS(),
+ "tsan": self.build_config.tsan,
+ "ubsan_vptr": self.build_config.ubsan_vptr,
+ }
+
+ num_tests = 0
+ test_id = 0
+ for s in suites:
+ s.ReadStatusFile(variables)
+ s.ReadTestCases(ctx)
+ if len(args) > 0:
+ s.FilterTestCasesByArgs(args)
+ s.FilterTestCasesByStatus(False)
+
+ num_tests += len(s.tests)
+ for t in s.tests:
+ t.id = test_id
+ test_id += 1
+
+ return num_tests
+
+  # Parses test stdout and returns the highest percentage of the incremental
+  # marking limit that was reached (0-100).
+ @staticmethod
+ def _get_max_limit_reached(output):
+ if not output.stdout:
+ return None
+
+ for l in reversed(output.stdout.splitlines()):
+ if l.startswith('### Maximum marking limit reached ='):
+ return float(l.split()[6])
+
+ return None
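
The expected marker line comes from d8's --fuzzer-gc-analysis output; token 6 of the whitespace-split line is the numeric value. The 42.50 below is a made-up sample value:

```python
line = '### Maximum marking limit reached = 42.50'
assert line.split()[6] == '42.50'
assert float(line.split()[6]) == 42.5
```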
+
+ def _next_fuzzer_seed(self):
+ fuzzer_seed = None
+ while not fuzzer_seed:
+ fuzzer_seed = self.fuzzer_rng.randint(-2147483648, 2147483647)
+ return fuzzer_seed
+
+
+if __name__ == '__main__':
+ sys.exit(GCFuzzer().execute())
diff --git a/src/v8/tools/testrunner/local/command.py b/src/v8/tools/testrunner/local/command.py
new file mode 100644
index 0000000..93b1ac9
--- /dev/null
+++ b/src/v8/tools/testrunner/local/command.py
@@ -0,0 +1,171 @@
+# Copyright 2017 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+
+import os
+import subprocess
+import sys
+import threading
+import time
+
+from ..local import utils
+from ..objects import output
+
+
+SEM_INVALID_VALUE = -1
+SEM_NOGPFAULTERRORBOX = 0x0002 # Microsoft Platform SDK WinBase.h
+
+
+class BaseCommand(object):
+ def __init__(self, shell, args=None, cmd_prefix=None, timeout=60, env=None,
+ verbose=False):
+ assert(timeout > 0)
+
+ self.shell = shell
+ self.args = args or []
+ self.cmd_prefix = cmd_prefix or []
+ self.timeout = timeout
+ self.env = env or {}
+ self.verbose = verbose
+
+ def execute(self, **additional_popen_kwargs):
+ if self.verbose:
+ print '# %s' % self
+
+ process = self._start_process(**additional_popen_kwargs)
+
+ # Variable to communicate with the timer.
+    timeout_occurred = [False]
+ timer = threading.Timer(
+        self.timeout, self._on_timeout, [process, timeout_occurred])
+ timer.start()
+
+ start_time = time.time()
+ stdout, stderr = process.communicate()
+ duration = time.time() - start_time
+
+ timer.cancel()
+
+ return output.Output(
+ process.returncode,
+      timeout_occurred[0],
+ stdout.decode('utf-8', 'replace').encode('utf-8'),
+ stderr.decode('utf-8', 'replace').encode('utf-8'),
+ process.pid,
+ duration
+ )
+
+ def _start_process(self, **additional_popen_kwargs):
+ try:
+ return subprocess.Popen(
+ args=self._get_popen_args(),
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ env=self._get_env(),
+ **additional_popen_kwargs
+ )
+ except Exception as e:
+ sys.stderr.write('Error executing: %s\n' % self)
+ raise e
+
+ def _get_popen_args(self):
+ return self._to_args_list()
+
+ def _get_env(self):
+ env = os.environ.copy()
+ env.update(self.env)
+ # GTest shard information is read by the V8 tests runner. Make sure it
+ # doesn't leak into the execution of gtests we're wrapping. Those might
+ # otherwise apply a second level of sharding and as a result skip tests.
+ env.pop('GTEST_TOTAL_SHARDS', None)
+ env.pop('GTEST_SHARD_INDEX', None)
+ return env
+
+ def _kill_process(self, process):
+ raise NotImplementedError()
+
+  def _on_timeout(self, process, timeout_occurred):
+    timeout_occurred[0] = True
+ try:
+ self._kill_process(process)
+ except OSError:
+ sys.stderr.write('Error: Process %s already ended.\n' % process.pid)
+
+ def __str__(self):
+ return self.to_string()
+
+ def to_string(self, relative=False):
+ def escape(part):
+ # Escape spaces. We may need to escape more characters for this to work
+ # properly.
+ if ' ' in part:
+ return '"%s"' % part
+ return part
+
+ parts = map(escape, self._to_args_list())
+ cmd = ' '.join(parts)
+ if relative:
+ cmd = cmd.replace(os.getcwd() + os.sep, '')
+ return cmd
+
+ def _to_args_list(self):
+ return self.cmd_prefix + [self.shell] + self.args
+
+
+class PosixCommand(BaseCommand):
+ def _kill_process(self, process):
+ process.kill()
+
+
+class WindowsCommand(BaseCommand):
+ def _start_process(self, **kwargs):
+ # Try to change the error mode to avoid dialogs on fatal errors. Don't
+ # touch any existing error mode flags by merging the existing error mode.
+ # See http://blogs.msdn.com/oldnewthing/archive/2004/07/27/198410.aspx.
+ def set_error_mode(mode):
+ prev_error_mode = SEM_INVALID_VALUE
+ try:
+ import ctypes
+ prev_error_mode = (
+ ctypes.windll.kernel32.SetErrorMode(mode)) #@UndefinedVariable
+ except ImportError:
+ pass
+ return prev_error_mode
+
+ error_mode = SEM_NOGPFAULTERRORBOX
+ prev_error_mode = set_error_mode(error_mode)
+ set_error_mode(error_mode | prev_error_mode)
+
+ try:
+ return super(WindowsCommand, self)._start_process(**kwargs)
+ finally:
+ if prev_error_mode != SEM_INVALID_VALUE:
+ set_error_mode(prev_error_mode)
+
+ def _get_popen_args(self):
+ return subprocess.list2cmdline(self._to_args_list())
+
+ def _kill_process(self, process):
+ if self.verbose:
+ print 'Attempting to kill process %d' % process.pid
+ sys.stdout.flush()
+ tk = subprocess.Popen(
+ 'taskkill /T /F /PID %d' % process.pid,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ )
+ stdout, stderr = tk.communicate()
+ if self.verbose:
+ print 'Taskkill results for %d' % process.pid
+ print stdout
+ print stderr
+ print 'Return code: %d' % tk.returncode
+ sys.stdout.flush()
+
+
+# Set the Command class to the OS-specific version.
+if utils.IsWindows():
+ Command = WindowsCommand
+else:
+ Command = PosixCommand
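
A hypothetical usage sketch of the class above: the shell path and test flags are invented, and the attribute names on the returned objects.output.Output are assumed:

```python
from testrunner.local.command import Command

cmd = Command(
    shell='out.gn/x64.release/d8',           # assumed binary location
    args=['--test', 'mjsunit/array-sort.js'],
    timeout=60,
    verbose=True,
)
result = cmd.execute()  # kills the process and flags a timeout after 60s
print(result.stdout)
```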
diff --git a/src/v8/tools/testrunner/local/commands.py b/src/v8/tools/testrunner/local/commands.py
deleted file mode 100644
index b2dc74e..0000000
--- a/src/v8/tools/testrunner/local/commands.py
+++ /dev/null
@@ -1,132 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import os
-import subprocess
-import sys
-from threading import Timer
-
-from ..local import utils
-from ..objects import output
-
-
-SEM_INVALID_VALUE = -1
-SEM_NOGPFAULTERRORBOX = 0x0002 # Microsoft Platform SDK WinBase.h
-
-
-def Win32SetErrorMode(mode):
- prev_error_mode = SEM_INVALID_VALUE
- try:
- import ctypes
- prev_error_mode = \
- ctypes.windll.kernel32.SetErrorMode(mode) #@UndefinedVariable
- except ImportError:
- pass
- return prev_error_mode
-
-
-def RunProcess(verbose, timeout, args, additional_env, **rest):
- if verbose: print "#", " ".join(args)
- popen_args = args
- prev_error_mode = SEM_INVALID_VALUE
- if utils.IsWindows():
- popen_args = subprocess.list2cmdline(args)
- # Try to change the error mode to avoid dialogs on fatal errors. Don't
- # touch any existing error mode flags by merging the existing error mode.
- # See http://blogs.msdn.com/oldnewthing/archive/2004/07/27/198410.aspx.
- error_mode = SEM_NOGPFAULTERRORBOX
- prev_error_mode = Win32SetErrorMode(error_mode)
- Win32SetErrorMode(error_mode | prev_error_mode)
-
- env = os.environ.copy()
- env.update(additional_env)
- # GTest shard information is read by the V8 tests runner. Make sure it
- # doesn't leak into the execution of gtests we're wrapping. Those might
- # otherwise apply a second level of sharding and as a result skip tests.
- env.pop('GTEST_TOTAL_SHARDS', None)
- env.pop('GTEST_SHARD_INDEX', None)
-
- try:
- process = subprocess.Popen(
- args=popen_args,
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE,
- env=env,
- **rest
- )
- except Exception as e:
- sys.stderr.write("Error executing: %s\n" % popen_args)
- raise e
-
- if (utils.IsWindows() and prev_error_mode != SEM_INVALID_VALUE):
- Win32SetErrorMode(prev_error_mode)
-
- def kill_process(process, timeout_result):
- timeout_result[0] = True
- try:
- if utils.IsWindows():
- if verbose:
- print "Attempting to kill process %d" % process.pid
- sys.stdout.flush()
- tk = subprocess.Popen(
- 'taskkill /T /F /PID %d' % process.pid,
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE,
- )
- stdout, stderr = tk.communicate()
- if verbose:
- print "Taskkill results for %d" % process.pid
- print stdout
- print stderr
- print "Return code: %d" % tk.returncode
- sys.stdout.flush()
- else:
- process.kill()
- except OSError:
- sys.stderr.write('Error: Process %s already ended.\n' % process.pid)
-
- # Pseudo object to communicate with timer thread.
- timeout_result = [False]
-
- timer = Timer(timeout, kill_process, [process, timeout_result])
- timer.start()
- stdout, stderr = process.communicate()
- timer.cancel()
-
- return output.Output(
- process.returncode,
- timeout_result[0],
- stdout.decode('utf-8', 'replace').encode('utf-8'),
- stderr.decode('utf-8', 'replace').encode('utf-8'),
- process.pid,
- )
-
-
-def Execute(args, verbose=False, timeout=None, env=None):
- args = [ c for c in args if c != "" ]
- return RunProcess(verbose, timeout, args, env or {})
diff --git a/src/v8/tools/testrunner/local/execution.py b/src/v8/tools/testrunner/local/execution.py
index dc55129..d6d0725 100644
--- a/src/v8/tools/testrunner/local/execution.py
+++ b/src/v8/tools/testrunner/local/execution.py
@@ -31,15 +31,14 @@
import re
import shutil
import sys
-import time
+import traceback
-from pool import Pool
-from . import commands
+from . import command
from . import perfdata
from . import statusfile
-from . import testsuite
from . import utils
-from ..objects import output
+from .pool import Pool
+from ..objects import predictable
# Base dir of the v8 checkout.
@@ -48,72 +47,22 @@
TEST_DIR = os.path.join(BASE_DIR, "test")
-class Instructions(object):
- def __init__(self, command, test_id, timeout, verbose, env):
- self.command = command
- self.id = test_id
- self.timeout = timeout
- self.verbose = verbose
- self.env = env
-
-
# Structure that keeps global information per worker process.
ProcessContext = collections.namedtuple(
- "process_context", ["suites", "context"])
+ 'process_context', ['sancov_dir'])
-def MakeProcessContext(context, suite_names):
- """Generate a process-local context.
+TestJobResult = collections.namedtuple(
+ 'TestJobResult', ['id', 'outproc_result'])
- This reloads all suites per process and stores the global context.
-
- Args:
- context: The global context from the test runner.
- suite_names (list of str): Suite names as loaded by the parent process.
- Load the same suites in each subprocess.
- """
- suites = {}
- for root in suite_names:
- # Don't reinitialize global state as this is concurrently called from
- # different processes.
- suite = testsuite.TestSuite.LoadTestSuite(
- os.path.join(TEST_DIR, root), global_init=False)
- if suite:
- suites[suite.name] = suite
- return ProcessContext(suites, context)
+def MakeProcessContext(sancov_dir):
+ return ProcessContext(sancov_dir)
-def GetCommand(test, context):
- d8testflag = []
- shell = test.shell()
- if shell == "d8":
- d8testflag = ["--test"]
- if utils.IsWindows():
- shell += ".exe"
- if context.random_seed:
- d8testflag += ["--random-seed=%s" % context.random_seed]
- cmd = (context.command_prefix +
- [os.path.abspath(os.path.join(context.shell_dir, shell))] +
- d8testflag +
- test.suite.GetFlagsForTestCase(test, context) +
- context.extra_flags)
- return cmd
-
-
-def _GetInstructions(test, context):
- command = GetCommand(test, context)
- timeout = context.timeout
- if ("--stress-opt" in test.flags or
- "--stress-opt" in context.mode_flags or
- "--stress-opt" in context.extra_flags):
- timeout *= 4
- if "--noenable-vfp3" in context.extra_flags:
- timeout *= 2
- # FIXME(machenbach): Make this more OO. Don't expose default outcomes or
- # the like.
- if statusfile.IsSlow(test.outcomes or [statusfile.PASS]):
- timeout *= 2
- return Instructions(command, test.id, timeout, context.verbose, test.env)
+# Global function for multiprocessing, because pickling a static method doesn't
+# work on Windows.
+def run_job(job, process_context):
+ return job.run(process_context)
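
The module-level indirection matters because multiprocessing pickles the callable it ships to workers, and on Python 2 only module-level functions pickle reliably; Windows has no fork, so everything crosses a pickle boundary there. A minimal demonstration:

```python
import pickle

def module_level(x):
    return x + 1

pickle.dumps(module_level)  # fine: pickled by module-qualified name

class C(object):
    @staticmethod
    def static_method(x):
        return x + 1

# pickle.dumps(C.static_method) raises pickle.PicklingError on Python 2,
# which is exactly why run_job above is a plain module-level function.
```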
class Job(object):
@@ -122,31 +71,18 @@
All contained fields will be pickled/unpickled.
"""
- def Run(self, process_context):
- """Executes the job.
-
- Args:
- process_context: Process-local information that is initialized by the
- executing worker.
- """
+ def run(self, process_context):
raise NotImplementedError()
-def SetupProblem(exception, test):
- stderr = ">>> EXCEPTION: %s\n" % exception
- match = re.match(r"^.*No such file or directory: '(.*)'$", str(exception))
- if match:
- # Extra debuging information when files are claimed missing.
- f = match.group(1)
- stderr += ">>> File %s exists? -> %s\n" % (f, os.path.exists(f))
- return test.id, output.Output(1, False, "", stderr, None), 0
-
-
class TestJob(Job):
- def __init__(self, test):
- self.test = test
+ def __init__(self, test_id, cmd, outproc, run_num):
+ self.test_id = test_id
+ self.cmd = cmd
+ self.outproc = outproc
+ self.run_num = run_num
- def _rename_coverage_data(self, output, context):
+ def _rename_coverage_data(self, out, sancov_dir):
"""Rename coverage data.
Rename files with PIDs to files with unique test IDs, because the number
@@ -155,62 +91,53 @@
42 is the test ID and 1 is the attempt (the same test might be rerun on
failures).
"""
- if context.sancov_dir and output.pid is not None:
- sancov_file = os.path.join(
- context.sancov_dir, "%s.%d.sancov" % (self.test.shell(), output.pid))
+ if sancov_dir and out.pid is not None:
+      # Sancov doesn't work on Windows, so basename is sufficient to get the
+      # shell name.
+ shell = os.path.basename(self.cmd.shell)
+ sancov_file = os.path.join(sancov_dir, "%s.%d.sancov" % (shell, out.pid))
# Some tests are expected to fail and don't produce coverage data.
if os.path.exists(sancov_file):
parts = sancov_file.split(".")
new_sancov_file = ".".join(
parts[:-2] +
- ["test", str(self.test.id), str(self.test.run)] +
+ ["test", str(self.test_id), str(self.run_num)] +
parts[-1:]
)
assert not os.path.exists(new_sancov_file)
os.rename(sancov_file, new_sancov_file)
- def Run(self, process_context):
- try:
- # Retrieve a new suite object on the worker-process side. The original
- # suite object isn't pickled.
- self.test.SetSuiteObject(process_context.suites)
- instr = _GetInstructions(self.test, process_context.context)
- except Exception, e:
- return SetupProblem(e, self.test)
-
- start_time = time.time()
- output = commands.Execute(instr.command, instr.verbose, instr.timeout,
- instr.env)
- self._rename_coverage_data(output, process_context.context)
- return (instr.id, output, time.time() - start_time)
-
-
-def RunTest(job, process_context):
- return job.Run(process_context)
+  def run(self, process_context):
+    output = self.cmd.execute()
+    self._rename_coverage_data(output, process_context.sancov_dir)
+ return TestJobResult(self.test_id, self.outproc.process(output))
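TestJobResult is not defined in this diff; it presumably lives in the new testproc package. From its uses here and in Runner._RunInternal below, it is a small picklable container roughly like the following sketch, not the actual definition.

  class TestJobResult(object):
    def __init__(self, test_id, outproc_result):
      # Travels from the worker process back to the main runner.
      self.id = test_id
      self.outproc_result = outproc_result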
class Runner(object):
- def __init__(self, suites, progress_indicator, context):
+ def __init__(self, suites, progress_indicator, context, outproc_factory=None):
self.datapath = os.path.join("out", "testrunner_data")
self.perf_data_manager = perfdata.GetPerfDataManager(
context, self.datapath)
self.perfdata = self.perf_data_manager.GetStore(context.arch, context.mode)
self.perf_failures = False
self.printed_allocations = False
+ self.outproc_factory = outproc_factory or (lambda test: test.output_proc)
self.tests = [t for s in suites for t in s.tests]
+
+    # TODO(majeski): Pass the outputs dynamically instead of keeping them in
+    # the runner. Maybe use some observer?
+ self.outputs = {t: None for t in self.tests}
+
self.suite_names = [s.name for s in suites]
# Always pre-sort by status file, slowest tests first.
- slow_key = lambda t: statusfile.IsSlow(t.outcomes)
- self.tests.sort(key=slow_key, reverse=True)
+ self.tests.sort(key=lambda t: t.is_slow, reverse=True)
- # Sort by stored duration of not opted out.
+ # Sort by stored duration if not opted out.
if not context.no_sorting:
- for t in self.tests:
- t.duration = self.perfdata.FetchPerfData(t) or 1.0
- self.tests.sort(key=lambda t: t.duration, reverse=True)
+ self.tests.sort(key=lambda t: self.perfdata.FetchPerfData(t) or 1.0,
+ reverse=True)
self._CommonInit(suites, progress_indicator, context)
@@ -236,7 +163,7 @@
print("PerfData exception: %s" % e)
self.perf_failures = True
- def _MaybeRerun(self, pool, test):
+ def _MaybeRerun(self, pool, test, result):
if test.run <= self.context.rerun_failures_count:
# Possibly rerun this test if its run count is below the maximum per
# test. <= as the flag controls reruns not including the first run.
@@ -248,25 +175,25 @@
# Don't rerun this if the overall number of rerun tests has been
# reached.
return
- if test.run >= 2 and test.duration > self.context.timeout / 20.0:
+ if (test.run >= 2 and
+ result.output.duration > self.context.timeout / 20.0):
# Rerun slow tests at most once.
return
# Rerun this test.
- test.duration = None
- test.output = None
test.run += 1
- pool.add([TestJob(test)])
+ pool.add([
+ TestJob(test.id, test.cmd, self.outproc_factory(test), test.run)
+ ])
self.remaining += 1
self.total += 1
- def _ProcessTestNormal(self, test, result, pool):
- test.output = result[1]
- test.duration = result[2]
- has_unexpected_output = test.suite.HasUnexpectedOutput(test)
+ def _ProcessTest(self, test, result, pool):
+ self.outputs[test] = result.output
+ has_unexpected_output = result.has_unexpected_output
if has_unexpected_output:
self.failed.append(test)
- if test.output.HasCrashed():
+ if result.output.HasCrashed():
self.crashed += 1
else:
self.succeeded += 1
@@ -274,57 +201,15 @@
# For the indicator, everything that happens after the first run is treated
# as unexpected even if it flakily passes in order to include it in the
# output.
- self.indicator.HasRun(test, has_unexpected_output or test.run > 1)
+ self.indicator.HasRun(test, result.output,
+ has_unexpected_output or test.run > 1)
if has_unexpected_output:
# Rerun test failures after the indicator has processed the results.
self._VerbosePrint("Attempting to rerun test after failure.")
- self._MaybeRerun(pool, test)
+ self._MaybeRerun(pool, test, result)
# Update the perf database if the test succeeded.
return not has_unexpected_output
- def _ProcessTestPredictable(self, test, result, pool):
- def HasDifferentAllocations(output1, output2):
- def AllocationStr(stdout):
- for line in reversed((stdout or "").splitlines()):
- if line.startswith("### Allocations = "):
- self.printed_allocations = True
- return line
- return ""
- return (AllocationStr(output1.stdout) != AllocationStr(output2.stdout))
-
- # Always pass the test duration for the database update.
- test.duration = result[2]
- if test.run == 1 and result[1].HasTimedOut():
- # If we get a timeout in the first run, we are already in an
- # unpredictable state. Just report it as a failure and don't rerun.
- test.output = result[1]
- self.remaining -= 1
- self.failed.append(test)
- self.indicator.HasRun(test, True)
- if test.run > 1 and HasDifferentAllocations(test.output, result[1]):
- # From the second run on, check for different allocations. If a
- # difference is found, call the indicator twice to report both tests.
- # All runs of each test are counted as one for the statistic.
- self.remaining -= 1
- self.failed.append(test)
- self.indicator.HasRun(test, True)
- test.output = result[1]
- self.indicator.HasRun(test, True)
- elif test.run >= 3:
- # No difference on the third run -> report a success.
- self.remaining -= 1
- self.succeeded += 1
- test.output = result[1]
- self.indicator.HasRun(test, False)
- else:
- # No difference yet and less than three runs -> add another run and
- # remember the output for comparison.
- test.run += 1
- test.output = result[1]
- pool.add([TestJob(test)])
- # Always update the perf database.
- return True
-
def Run(self, jobs):
self.indicator.Starting()
self._RunInternal(jobs)
@@ -344,50 +229,54 @@
assert test.id >= 0
test_map[test.id] = test
try:
- yield [TestJob(test)]
+ yield [
+ TestJob(test.id, test.cmd, self.outproc_factory(test), test.run)
+ ]
except Exception, e:
# If this failed, save the exception and re-raise it later (after
# all other tests have had a chance to run).
- queued_exception[0] = e
+ queued_exception[0] = e, traceback.format_exc()
continue
try:
it = pool.imap_unordered(
- fn=RunTest,
+ fn=run_job,
gen=gen_tests(),
process_context_fn=MakeProcessContext,
- process_context_args=[self.context, self.suite_names],
+ process_context_args=[self.context.sancov_dir],
)
for result in it:
if result.heartbeat:
self.indicator.Heartbeat()
continue
- test = test_map[result.value[0]]
- if self.context.predictable:
- update_perf = self._ProcessTestPredictable(test, result.value, pool)
- else:
- update_perf = self._ProcessTestNormal(test, result.value, pool)
+
+ job_result = result.value
+ test_id = job_result.id
+ outproc_result = job_result.outproc_result
+
+ test = test_map[test_id]
+ update_perf = self._ProcessTest(test, outproc_result, pool)
if update_perf:
- self._RunPerfSafe(lambda: self.perfdata.UpdatePerfData(test))
+ self._RunPerfSafe(lambda: self.perfdata.UpdatePerfData(
+ test, outproc_result.output.duration))
+ except KeyboardInterrupt:
+ raise
+ except:
+ traceback.print_exc()
+ raise
finally:
self._VerbosePrint("Closing process pool.")
pool.terminate()
self._VerbosePrint("Closing database connection.")
- self._RunPerfSafe(lambda: self.perf_data_manager.close())
+ self._RunPerfSafe(self.perf_data_manager.close)
if self.perf_failures:
# Nuke perf data in case of failures. This might not work on windows as
# some files might still be open.
print "Deleting perf test data due to db corruption."
shutil.rmtree(self.datapath)
if queued_exception[0]:
- raise queued_exception[0]
-
- # Make sure that any allocations were printed in predictable mode (if we
- # ran any tests).
- assert (
- not self.total or
- not self.context.predictable or
- self.printed_allocations
- )
+ e, stacktrace = queued_exception[0]
+ print stacktrace
+ raise e
def _VerbosePrint(self, text):
if self.context.verbose:
@@ -397,6 +286,8 @@
class BreakNowException(Exception):
def __init__(self, value):
+ super(BreakNowException, self).__init__()
self.value = value
+
def __str__(self):
return repr(self.value)
diff --git a/src/v8/tools/testrunner/local/junit_output.py b/src/v8/tools/testrunner/local/junit_output.py
index d2748fe..52f31ec 100644
--- a/src/v8/tools/testrunner/local/junit_output.py
+++ b/src/v8/tools/testrunner/local/junit_output.py
@@ -34,9 +34,10 @@
self.root = xml.Element("testsuite")
self.root.attrib["name"] = test_suite_name
- def HasRunTest(self, test_name, test_duration, test_failure):
+ def HasRunTest(self, test_name, test_cmd, test_duration, test_failure):
testCaseElement = xml.Element("testcase")
- testCaseElement.attrib["name"] = " ".join(test_name)
+ testCaseElement.attrib["name"] = test_name
+ testCaseElement.attrib["cmd"] = test_cmd
testCaseElement.attrib["time"] = str(round(test_duration, 3))
if len(test_failure):
failureElement = xml.Element("failure")
@@ -44,5 +45,5 @@
testCaseElement.append(failureElement)
self.root.append(testCaseElement)
- def FinishAndWrite(self, file):
- xml.ElementTree(self.root).write(file, "UTF-8")
+ def FinishAndWrite(self, f):
+ xml.ElementTree(self.root).write(f, "UTF-8")
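A short sketch of what the updated JUnitTestOutput produces, using only the stdlib ElementTree API this module already imports; the test name and command string are made up for illustration.

  import xml.etree.ElementTree as xml

  root = xml.Element("testsuite")
  root.attrib["name"] = "v8tests"

  case = xml.Element("testcase")
  case.attrib["name"] = "mjsunit/array-sort"              # hypothetical
  case.attrib["cmd"] = "d8 --test mjsunit/array-sort.js"  # hypothetical
  case.attrib["time"] = str(round(1.2345, 3))  # durations keep 3 decimals
  root.append(case)

  failure = xml.Element("failure")
  failure.text = "stderr:\nboom\n"
  case.append(failure)

  xml.ElementTree(root).write("results.xml", "UTF-8")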
diff --git a/src/v8/tools/testrunner/local/perfdata.py b/src/v8/tools/testrunner/local/perfdata.py
index 29ebff7..4cb618b 100644
--- a/src/v8/tools/testrunner/local/perfdata.py
+++ b/src/v8/tools/testrunner/local/perfdata.py
@@ -62,22 +62,17 @@
self.database.close()
self.closed = True
- def GetKey(self, test):
- """Computes the key used to access data for the given testcase."""
- flags = "".join(test.flags)
- return str("%s.%s.%s" % (test.suitename(), test.path, flags))
-
def FetchPerfData(self, test):
"""Returns the observed duration for |test| as read from the store."""
- key = self.GetKey(test)
+ key = test.get_id()
if key in self.database:
return self.database[key].avg
return None
- def UpdatePerfData(self, test):
- """Updates the persisted value in the store with test.duration."""
- testkey = self.GetKey(test)
- self.RawUpdatePerfData(testkey, test.duration)
+ def UpdatePerfData(self, test, duration):
+ """Updates the persisted value in the store with duration."""
+ testkey = test.get_id()
+ self.RawUpdatePerfData(testkey, duration)
def RawUpdatePerfData(self, testkey, duration):
with self.lock:
@@ -121,7 +116,7 @@
class NullPerfDataStore(object):
- def UpdatePerfData(self, test):
+ def UpdatePerfData(self, test, duration):
pass
def FetchPerfData(self, test):
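The entries read by FetchPerfData above expose an .avg field. The entry class is not shown in this diff; a running-average implementation consistent with the calls here would look roughly like this hypothetical sketch.

  class PerfDataEntry(object):
    def __init__(self):
      self.avg = 0.0
      self.count = 0

    def AddResult(self, duration):
      # Incremental running average over all recorded durations.
      self.count += 1
      self.avg += (duration - self.avg) / self.count

  entry = PerfDataEntry()
  for d in [1.0, 2.0, 6.0]:
    entry.AddResult(d)
  print(entry.avg)  # 3.0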
diff --git a/src/v8/tools/testrunner/local/pool.py b/src/v8/tools/testrunner/local/pool.py
index 99996ee..9199b62 100644
--- a/src/v8/tools/testrunner/local/pool.py
+++ b/src/v8/tools/testrunner/local/pool.py
@@ -8,6 +8,21 @@
import traceback
+def setup_testing():
+ """For testing only: Use threading under the hood instead of multiprocessing
+ to make coverage work.
+ """
+ global Queue
+ global Event
+ global Process
+ del Queue
+ del Event
+ del Process
+ from Queue import Queue
+ from threading import Event
+ from threading import Thread as Process
+
+
class NormalResult():
def __init__(self, result):
self.result = result
@@ -120,8 +135,8 @@
self.done,
process_context_fn,
process_context_args))
- self.processes.append(p)
p.start()
+ self.processes.append(p)
self.advance(gen)
while self.count > 0:
@@ -145,6 +160,11 @@
else:
yield MaybeResult.create_result(result.result)
self.advance(gen)
+ except KeyboardInterrupt:
+ raise
+ except Exception as e:
+ traceback.print_exc()
+ print(">>> EXCEPTION: %s" % e)
finally:
self.terminate()
if internal_error:
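setup_testing above swaps the module-global Queue/Event/Process bindings so the pool transparently runs on threads. A self-contained sketch of the same pattern, in the Python 2 this runner targets:

  from multiprocessing import Process, Queue

  def setup_testing():
    # Rebind the module-level names; every later reference in this
    # module then uses threads, which coverage tools can observe.
    global Process, Queue
    from Queue import Queue                  # Python 2 stdlib queue
    from threading import Thread as Process

  def run_one(fn, arg):
    q = Queue()
    p = Process(target=lambda: q.put(fn(arg)))
    p.start()
    p.join()
    return q.get()

  if __name__ == '__main__':
    setup_testing()
    print(run_one(lambda x: x + 1, 41))  # 42, computed on a thread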
diff --git a/src/v8/tools/testrunner/local/progress.py b/src/v8/tools/testrunner/local/progress.py
index 6321cad..f6ebddf 100644
--- a/src/v8/tools/testrunner/local/progress.py
+++ b/src/v8/tools/testrunner/local/progress.py
@@ -32,12 +32,9 @@
import sys
import time
-from . import execution
from . import junit_output
from . import statusfile
-
-
-ABS_PATH_PREFIX = os.getcwd() + os.sep
+from ..testproc import progress as progress_proc
class ProgressIndicator(object):
@@ -54,33 +51,26 @@
def Done(self):
pass
- def HasRun(self, test, has_unexpected_output):
+ def HasRun(self, test, output, has_unexpected_output):
pass
def Heartbeat(self):
pass
def PrintFailureHeader(self, test):
- if test.suite.IsNegativeTest(test):
+ if test.output_proc.negative:
negative_marker = '[negative] '
else:
negative_marker = ''
print "=== %(label)s %(negative)s===" % {
- 'label': test.GetLabel(),
- 'negative': negative_marker
+ 'label': test,
+ 'negative': negative_marker,
}
- def _EscapeCommand(self, test):
- command = execution.GetCommand(test, self.runner.context)
- parts = []
- for part in command:
- if ' ' in part:
- # Escape spaces. We may need to escape more characters for this
- # to work properly.
- parts.append('"%s"' % part)
- else:
- parts.append(part)
- return " ".join(parts)
+ def ToProgressIndicatorProc(self):
+ print ('Warning: %s is not available as a processor' %
+ self.__class__.__name__)
+ return None
class IndicatorNotifier(object):
@@ -91,6 +81,9 @@
def Register(self, indicator):
self.indicators.append(indicator)
+ def ToProgressIndicatorProcs(self):
+ return [i.ToProgressIndicatorProc() for i in self.indicators]
+
# Forge all generic event-dispatching methods in IndicatorNotifier, which are
# part of the ProgressIndicator interface.
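The forging itself is outside this hunk. A hedged sketch of what it amounts to: each event method on the notifier fans out to all registered indicators.

  class IndicatorNotifier(object):
    # Condensed from the class above.
    def __init__(self):
      self.indicators = []

    def Register(self, indicator):
      self.indicators.append(indicator)

  def _create_dispatcher(event):
    def dispatcher(self, *args):
      # Forward the event to every registered indicator.
      for indicator in self.indicators:
        getattr(indicator, event)(*args)
    return dispatcher

  # One forged dispatcher per event in the ProgressIndicator interface.
  for event in ['Starting', 'Done', 'HasRun', 'Heartbeat']:
    setattr(IndicatorNotifier, event, _create_dispatcher(event))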
@@ -116,18 +109,19 @@
def Done(self):
print
for failed in self.runner.failed:
+ output = self.runner.outputs[failed]
self.PrintFailureHeader(failed)
- if failed.output.stderr:
+ if output.stderr:
print "--- stderr ---"
- print failed.output.stderr.strip()
- if failed.output.stdout:
+ print output.stderr.strip()
+ if output.stdout:
print "--- stdout ---"
- print failed.output.stdout.strip()
- print "Command: %s" % self._EscapeCommand(failed)
- if failed.output.HasCrashed():
- print "exit code: %d" % failed.output.exit_code
+ print output.stdout.strip()
+ print "Command: %s" % failed.cmd.to_string()
+ if output.HasCrashed():
+ print "exit code: %d" % output.exit_code
print "--- CRASHED ---"
- if failed.output.HasTimedOut():
+ if output.HasTimedOut():
print "--- TIMEOUT ---"
if len(self.runner.failed) == 0:
print "==="
@@ -144,33 +138,36 @@
class VerboseProgressIndicator(SimpleProgressIndicator):
- def HasRun(self, test, has_unexpected_output):
+ def HasRun(self, test, output, has_unexpected_output):
if has_unexpected_output:
- if test.output.HasCrashed():
+ if output.HasCrashed():
outcome = 'CRASH'
else:
outcome = 'FAIL'
else:
outcome = 'pass'
- print 'Done running %s: %s' % (test.GetLabel(), outcome)
+ print 'Done running %s: %s' % (test, outcome)
sys.stdout.flush()
def Heartbeat(self):
print 'Still working...'
sys.stdout.flush()
+ def ToProgressIndicatorProc(self):
+ return progress_proc.VerboseProgressIndicator()
+
class DotsProgressIndicator(SimpleProgressIndicator):
- def HasRun(self, test, has_unexpected_output):
+ def HasRun(self, test, output, has_unexpected_output):
total = self.runner.succeeded + len(self.runner.failed)
if (total > 1) and (total % 50 == 1):
sys.stdout.write('\n')
if has_unexpected_output:
- if test.output.HasCrashed():
+ if output.HasCrashed():
sys.stdout.write('C')
sys.stdout.flush()
- elif test.output.HasTimedOut():
+ elif output.HasTimedOut():
sys.stdout.write('T')
sys.stdout.flush()
else:
@@ -180,6 +177,9 @@
sys.stdout.write('.')
sys.stdout.flush()
+ def ToProgressIndicatorProc(self):
+ return progress_proc.DotsProgressIndicator()
+
class CompactProgressIndicator(ProgressIndicator):
"""Abstract base class for {Color,Monochrome}ProgressIndicator"""
@@ -194,22 +194,22 @@
self.PrintProgress('Done')
print "" # Line break.
- def HasRun(self, test, has_unexpected_output):
- self.PrintProgress(test.GetLabel())
+ def HasRun(self, test, output, has_unexpected_output):
+ self.PrintProgress(str(test))
if has_unexpected_output:
self.ClearLine(self.last_status_length)
self.PrintFailureHeader(test)
- stdout = test.output.stdout.strip()
+ stdout = output.stdout.strip()
if len(stdout):
print self.templates['stdout'] % stdout
- stderr = test.output.stderr.strip()
+ stderr = output.stderr.strip()
if len(stderr):
print self.templates['stderr'] % stderr
- print "Command: %s" % self._EscapeCommand(test)
- if test.output.HasCrashed():
- print "exit code: %d" % test.output.exit_code
+ print "Command: %s" % test.cmd.to_string()
+ if output.HasCrashed():
+ print "exit code: %d" % output.exit_code
print "--- CRASHED ---"
- if test.output.HasTimedOut():
+ if output.HasTimedOut():
print "--- TIMEOUT ---"
def Truncate(self, string, length):
@@ -254,6 +254,9 @@
def ClearLine(self, last_line_length):
print "\033[1K\r",
+ def ToProgressIndicatorProc(self):
+ return progress_proc.ColorProgressIndicator()
+
class MonochromeProgressIndicator(CompactProgressIndicator):
@@ -269,10 +272,15 @@
def ClearLine(self, last_line_length):
print ("\r" + (" " * last_line_length) + "\r"),
+ def ToProgressIndicatorProc(self):
+ return progress_proc.MonochromeProgressIndicator()
+
class JUnitTestProgressIndicator(ProgressIndicator):
-
def __init__(self, junitout, junittestsuite):
+ super(JUnitTestProgressIndicator, self).__init__()
+ self.junitout = junitout
+    self.junittestsuite = junittestsuite
self.outputter = junit_output.JUnitTestOutput(junittestsuite)
if junitout:
self.outfile = open(junitout, "w")
@@ -284,29 +292,37 @@
if self.outfile != sys.stdout:
self.outfile.close()
- def HasRun(self, test, has_unexpected_output):
+ def HasRun(self, test, output, has_unexpected_output):
fail_text = ""
if has_unexpected_output:
- stdout = test.output.stdout.strip()
+ stdout = output.stdout.strip()
if len(stdout):
fail_text += "stdout:\n%s\n" % stdout
- stderr = test.output.stderr.strip()
+ stderr = output.stderr.strip()
if len(stderr):
fail_text += "stderr:\n%s\n" % stderr
- fail_text += "Command: %s" % self._EscapeCommand(test)
- if test.output.HasCrashed():
- fail_text += "exit code: %d\n--- CRASHED ---" % test.output.exit_code
- if test.output.HasTimedOut():
+ fail_text += "Command: %s" % test.cmd.to_string()
+ if output.HasCrashed():
+ fail_text += "exit code: %d\n--- CRASHED ---" % output.exit_code
+ if output.HasTimedOut():
fail_text += "--- TIMEOUT ---"
self.outputter.HasRunTest(
- [test.GetLabel()] + self.runner.context.mode_flags + test.flags,
- test.duration,
- fail_text)
+ test_name=str(test),
+ test_cmd=test.cmd.to_string(relative=True),
+ test_duration=output.duration,
+ test_failure=fail_text)
+
+ def ToProgressIndicatorProc(self):
+ if self.outfile != sys.stdout:
+ self.outfile.close()
+ return progress_proc.JUnitTestProgressIndicator(self.junitout,
+ self.junittestsuite)
class JsonTestProgressIndicator(ProgressIndicator):
def __init__(self, json_test_results, arch, mode, random_seed):
+ super(JsonTestProgressIndicator, self).__init__()
self.json_test_results = json_test_results
self.arch = arch
self.mode = mode
@@ -314,6 +330,10 @@
self.results = []
self.tests = []
+ def ToProgressIndicatorProc(self):
+ return progress_proc.JsonTestProgressIndicator(
+ self.json_test_results, self.arch, self.mode, self.random_seed)
+
def Done(self):
complete_results = []
if os.path.exists(self.json_test_results):
@@ -325,19 +345,19 @@
if self.tests:
# Get duration mean.
duration_mean = (
- sum(t.duration for t in self.tests) / float(len(self.tests)))
+ sum(duration for (_, duration) in self.tests) /
+ float(len(self.tests)))
# Sort tests by duration.
- timed_tests = [t for t in self.tests if t.duration is not None]
- timed_tests.sort(lambda a, b: cmp(b.duration, a.duration))
+ self.tests.sort(key=lambda (_, duration): duration, reverse=True)
slowest_tests = [
{
- "name": test.GetLabel(),
- "flags": test.flags,
- "command": self._EscapeCommand(test).replace(ABS_PATH_PREFIX, ""),
- "duration": test.duration,
- "marked_slow": statusfile.IsSlow(test.outcomes),
- } for test in timed_tests[:20]
+ "name": str(test),
+ "flags": test.cmd.args,
+ "command": test.cmd.to_string(relative=True),
+ "duration": duration,
+ "marked_slow": test.is_slow,
+ } for (test, duration) in self.tests[:20]
]
complete_results.append({
@@ -352,30 +372,30 @@
with open(self.json_test_results, "w") as f:
f.write(json.dumps(complete_results))
- def HasRun(self, test, has_unexpected_output):
+ def HasRun(self, test, output, has_unexpected_output):
# Buffer all tests for sorting the durations in the end.
- self.tests.append(test)
+ self.tests.append((test, output.duration))
if not has_unexpected_output:
# Omit tests that run as expected. Passing tests of reruns after failures
      # will have unexpected_output reported here as well.
return
self.results.append({
- "name": test.GetLabel(),
- "flags": test.flags,
- "command": self._EscapeCommand(test).replace(ABS_PATH_PREFIX, ""),
+ "name": str(test),
+ "flags": test.cmd.args,
+ "command": test.cmd.to_string(relative=True),
"run": test.run,
- "stdout": test.output.stdout,
- "stderr": test.output.stderr,
- "exit_code": test.output.exit_code,
- "result": test.suite.GetOutcome(test),
- "expected": list(test.outcomes or ["PASS"]),
- "duration": test.duration,
+ "stdout": output.stdout,
+ "stderr": output.stderr,
+ "exit_code": output.exit_code,
+ "result": test.output_proc.get_outcome(output),
+ "expected": test.expected_outcomes,
+ "duration": output.duration,
# TODO(machenbach): This stores only the global random seed from the
# context and not possible overrides when using random-seed stress.
"random_seed": self.random_seed,
- "target_name": test.suite.shell(),
+ "target_name": test.get_shell(),
"variant": test.variant,
})
@@ -383,6 +403,7 @@
class FlakinessTestProgressIndicator(ProgressIndicator):
def __init__(self, json_test_results):
+ super(FlakinessTestProgressIndicator, self).__init__()
self.json_test_results = json_test_results
self.results = {}
self.summary = {
@@ -404,32 +425,23 @@
"version": 3,
}, f)
- def HasRun(self, test, has_unexpected_output):
- key = "/".join(
- sorted(flag.lstrip("-")
- for flag in self.runner.context.extra_flags + test.flags) +
- ["test", test.GetLabel()],
- )
- outcome = test.suite.GetOutcome(test)
+ def HasRun(self, test, output, has_unexpected_output):
+ key = test.get_id()
+ outcome = test.output_proc.get_outcome(output)
assert outcome in ["PASS", "FAIL", "CRASH", "TIMEOUT"]
if test.run == 1:
# First run of this test.
- expected_outcomes = ([
- expected
- for expected in (test.outcomes or ["PASS"])
- if expected in ["PASS", "FAIL", "CRASH", "TIMEOUT"]
- ] or ["PASS"])
self.results[key] = {
"actual": outcome,
- "expected": " ".join(expected_outcomes),
- "times": [test.duration],
+ "expected": " ".join(test.expected_outcomes),
+ "times": [output.duration],
}
self.summary[outcome] = self.summary[outcome] + 1
else:
# This is a rerun and a previous result exists.
result = self.results[key]
result["actual"] = "%s %s" % (result["actual"], outcome)
- result["times"].append(test.duration)
+ result["times"].append(output.duration)
PROGRESS_INDICATORS = {
diff --git a/src/v8/tools/testrunner/local/statusfile.py b/src/v8/tools/testrunner/local/statusfile.py
index 880837b..988750d 100644
--- a/src/v8/tools/testrunner/local/statusfile.py
+++ b/src/v8/tools/testrunner/local/statusfile.py
@@ -31,31 +31,28 @@
from variants import ALL_VARIANTS
from utils import Freeze
-# These outcomes can occur in a TestCase's outcomes list:
-SKIP = "SKIP"
+# Possible outcomes
FAIL = "FAIL"
PASS = "PASS"
-OKAY = "OKAY"
-TIMEOUT = "TIMEOUT"
-CRASH = "CRASH"
-SLOW = "SLOW"
-FAST_VARIANTS = "FAST_VARIANTS"
-NO_VARIANTS = "NO_VARIANTS"
-# These are just for the status files and are mapped below in DEFS:
+TIMEOUT = "TIMEOUT" # TODO(majeski): unused in status files
+CRASH = "CRASH" # TODO(majeski): unused in status files
+
+# Outcomes only for status file, need special handling
FAIL_OK = "FAIL_OK"
-PASS_OR_FAIL = "PASS_OR_FAIL"
FAIL_SLOPPY = "FAIL_SLOPPY"
+# Modifiers
+SKIP = "SKIP"
+SLOW = "SLOW"
+NO_VARIANTS = "NO_VARIANTS"
+
ALWAYS = "ALWAYS"
KEYWORDS = {}
-for key in [SKIP, FAIL, PASS, OKAY, CRASH, SLOW, FAIL_OK,
- FAST_VARIANTS, NO_VARIANTS, PASS_OR_FAIL, FAIL_SLOPPY, ALWAYS]:
+for key in [SKIP, FAIL, PASS, CRASH, SLOW, FAIL_OK, NO_VARIANTS, FAIL_SLOPPY,
+ ALWAYS]:
KEYWORDS[key] = key
-DEFS = {FAIL_OK: [FAIL, OKAY],
- PASS_OR_FAIL: [PASS, FAIL]}
-
# Support arches, modes to be written as keywords instead of strings.
VARIABLES = {ALWAYS: True}
for var in ["debug", "release", "big", "little",
@@ -69,43 +66,73 @@
for var in ALL_VARIANTS:
VARIABLES[var] = var
+class StatusFile(object):
+ def __init__(self, path, variables):
+ """
+    _rules: {variant: {test name: set of outcomes}}
+    _prefix_rules: {variant: {test name prefix: set of outcomes}}
+ """
+ with open(path) as f:
+ self._rules, self._prefix_rules = ReadStatusFile(f.read(), variables)
-def DoSkip(outcomes):
- return SKIP in outcomes
+ def get_outcomes(self, testname, variant=None):
+ """Merges variant dependent and independent rules."""
+ outcomes = frozenset()
+ for key in set([variant or '', '']):
+ rules = self._rules.get(key, {})
+ prefix_rules = self._prefix_rules.get(key, {})
-def IsSlow(outcomes):
- return SLOW in outcomes
+ if testname in rules:
+ outcomes |= rules[testname]
+ for prefix in prefix_rules:
+ if testname.startswith(prefix):
+ outcomes |= prefix_rules[prefix]
-def OnlyStandardVariant(outcomes):
- return NO_VARIANTS in outcomes
+ return outcomes
+ def warn_unused_rules(self, tests, check_variant_rules=False):
+ """Finds and prints unused rules in status file.
-def OnlyFastVariants(outcomes):
- return FAST_VARIANTS in outcomes
+ Rule X is unused when it doesn't apply to any tests, which can also mean
+ that all matching tests were skipped by another rule before evaluating X.
+ Args:
+ tests: list of pairs (testname, variant)
+      check_variant_rules: if set, variant-dependent rules are checked too
+ """
-def IsPassOrFail(outcomes):
- return ((PASS in outcomes) and (FAIL in outcomes) and
- (not CRASH in outcomes) and (not OKAY in outcomes))
+ if check_variant_rules:
+ variants = list(ALL_VARIANTS)
+ else:
+ variants = ['']
+ used_rules = set()
+ for testname, variant in tests:
+ variant = variant or ''
-def IsFailOk(outcomes):
- return (FAIL in outcomes) and (OKAY in outcomes)
+ if testname in self._rules.get(variant, {}):
+ used_rules.add((testname, variant))
+ if SKIP in self._rules[variant][testname]:
+ continue
+ for prefix in self._prefix_rules.get(variant, {}):
+ if testname.startswith(prefix):
+ used_rules.add((prefix, variant))
+ if SKIP in self._prefix_rules[variant][prefix]:
+ break
-def _AddOutcome(result, new):
- global DEFS
- if new in DEFS:
- mapped = DEFS[new]
- if type(mapped) == list:
- for m in mapped:
- _AddOutcome(result, m)
- elif type(mapped) == str:
- _AddOutcome(result, mapped)
- else:
- result.add(new)
+ for variant in variants:
+ for rule, value in (
+ list(self._rules.get(variant, {}).iteritems()) +
+ list(self._prefix_rules.get(variant, {}).iteritems())):
+ if (rule, variant) not in used_rules:
+ if variant == '':
+ variant_desc = 'variant independent'
+ else:
+ variant_desc = 'variant: %s' % variant
+ print 'Unused rule: %s -> %s (%s)' % (rule, value, variant_desc)
def _JoinsPassAndFail(outcomes1, outcomes2):
@@ -114,13 +141,17 @@
"""
return (
PASS in outcomes1 and
- not FAIL in outcomes1 and
- FAIL in outcomes2
+ not (FAIL in outcomes1 or FAIL_OK in outcomes1) and
+ (FAIL in outcomes2 or FAIL_OK in outcomes2)
)
VARIANT_EXPRESSION = object()
def _EvalExpression(exp, variables):
+ """Evaluates expression and returns its result. In case of NameError caused by
+ undefined "variant" identifier returns VARIANT_EXPRESSION marker.
+ """
+
try:
return eval(exp, variables)
except NameError as e:
@@ -129,32 +160,35 @@
return VARIANT_EXPRESSION
-def _EvalVariantExpression(section, rules, wildcards, variant, variables):
- variables_with_variant = {}
- variables_with_variant.update(variables)
+def _EvalVariantExpression(
+ condition, section, variables, variant, rules, prefix_rules):
+ variables_with_variant = dict(variables)
variables_with_variant["variant"] = variant
- result = _EvalExpression(section[0], variables_with_variant)
+ result = _EvalExpression(condition, variables_with_variant)
assert result != VARIANT_EXPRESSION
if result is True:
_ReadSection(
- section[1],
- rules[variant],
- wildcards[variant],
+ section,
variables_with_variant,
+ rules[variant],
+ prefix_rules[variant],
)
else:
assert result is False, "Make sure expressions evaluate to boolean values"
-def _ParseOutcomeList(rule, outcomes, target_dict, variables):
+def _ParseOutcomeList(rule, outcomes, variables, target_dict):
+ """Outcome list format: [condition, outcome, outcome, ...]"""
+
result = set([])
if type(outcomes) == str:
outcomes = [outcomes]
for item in outcomes:
if type(item) == str:
- _AddOutcome(result, item)
+ result.add(item)
elif type(item) == list:
- exp = _EvalExpression(item[0], variables)
+ condition = item[0]
+ exp = _EvalExpression(condition, variables)
assert exp != VARIANT_EXPRESSION, (
"Nested variant expressions are not supported")
if exp is False:
@@ -166,10 +200,11 @@
for outcome in item[1:]:
assert type(outcome) == str
- _AddOutcome(result, outcome)
+ result.add(outcome)
else:
assert False
- if len(result) == 0: return
+ if len(result) == 0:
+ return
if rule in target_dict:
# A FAIL without PASS in one rule has always precedence over a single
# PASS (without FAIL) in another. Otherwise the default PASS expectation
@@ -186,51 +221,69 @@
def ReadContent(content):
- global KEYWORDS
return eval(content, KEYWORDS)
def ReadStatusFile(content, variables):
- # Empty defaults for rules and wildcards. Variant-independent
+ """Status file format
+  Status file := [section, ...]
+  section := [CONDITION, section_rules]
+ section_rules := {path: outcomes}
+ outcomes := outcome | [outcome, ...]
+ outcome := SINGLE_OUTCOME | [CONDITION, SINGLE_OUTCOME, SINGLE_OUTCOME, ...]
+ """
+
+ # Empty defaults for rules and prefix_rules. Variant-independent
# rules are mapped by "", others by the variant name.
rules = {variant: {} for variant in ALL_VARIANTS}
rules[""] = {}
- wildcards = {variant: {} for variant in ALL_VARIANTS}
- wildcards[""] = {}
+ prefix_rules = {variant: {} for variant in ALL_VARIANTS}
+ prefix_rules[""] = {}
variables.update(VARIABLES)
- for section in ReadContent(content):
- assert type(section) == list
- assert len(section) == 2
- exp = _EvalExpression(section[0], variables)
+ for conditional_section in ReadContent(content):
+ assert type(conditional_section) == list
+ assert len(conditional_section) == 2
+ condition, section = conditional_section
+ exp = _EvalExpression(condition, variables)
+
+ # The expression is variant-independent and evaluates to False.
if exp is False:
- # The expression is variant-independent and evaluates to False.
continue
- elif exp == VARIANT_EXPRESSION:
+
+ # The expression is variant-independent and evaluates to True.
+ if exp is True:
+ _ReadSection(
+ section,
+ variables,
+ rules[''],
+ prefix_rules[''],
+ )
+ continue
+
+    # The expression is variant-dependent (contains the "variant" keyword).
+ if exp == VARIANT_EXPRESSION:
# If the expression contains one or more "variant" keywords, we evaluate
# it for all possible variants and create rules for those that apply.
for variant in ALL_VARIANTS:
- _EvalVariantExpression(section, rules, wildcards, variant, variables)
- else:
- # The expression is variant-independent and evaluates to True.
- assert exp is True, "Make sure expressions evaluate to boolean values"
- _ReadSection(
- section[1],
- rules[""],
- wildcards[""],
- variables,
- )
- return Freeze(rules), Freeze(wildcards)
+ _EvalVariantExpression(
+ condition, section, variables, variant, rules, prefix_rules)
+ continue
+
+ assert False, "Make sure expressions evaluate to boolean values"
+
+ return Freeze(rules), Freeze(prefix_rules)
-def _ReadSection(section, rules, wildcards, variables):
+def _ReadSection(section, variables, rules, prefix_rules):
assert type(section) == dict
- for rule in section:
+ for rule, outcome_list in section.iteritems():
assert type(rule) == str
+
if rule[-1] == '*':
- _ParseOutcomeList(rule, section[rule], wildcards, variables)
+ _ParseOutcomeList(rule[:-1], outcome_list, variables, prefix_rules)
else:
- _ParseOutcomeList(rule, section[rule], rules, variables)
+ _ParseOutcomeList(rule, outcome_list, variables, rules)
JS_TEST_PATHS = {
'debugger': [[]],
@@ -266,6 +319,8 @@
"Suite name prefix must not be used in rule keys")
_assert(not rule.endswith('.js'),
".js extension must not be used in rule keys.")
+ _assert('*' not in rule or (rule.count('*') == 1 and rule[-1] == '*'),
+ "Only the last character of a rule key can be a wildcard")
if basename in JS_TEST_PATHS and '*' not in rule:
_assert(any(os.path.exists(os.path.join(os.path.dirname(path),
*(paths + [rule + ".js"])))
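To make the rule/prefix-rule split concrete, here is a small standalone model of StatusFile.get_outcomes from earlier in this file, with toy data; real rules come from parsed status files.

  rules = {
    '': {'foo/bar': set(['PASS', 'SLOW'])},
    'stress': {'foo/bar': set(['FAIL'])},
  }
  prefix_rules = {
    '': {'baz/': set(['SLOW'])},
    'stress': {},
  }

  def get_outcomes(testname, variant=None):
    # Merge variant-dependent and variant-independent outcomes,
    # mirroring StatusFile.get_outcomes above.
    outcomes = set()
    for key in set([variant or '', '']):
      if testname in rules.get(key, {}):
        outcomes |= rules[key][testname]
      for prefix, out in prefix_rules.get(key, {}).items():
        if testname.startswith(prefix):
          outcomes |= out
    return outcomes

  print(sorted(get_outcomes('foo/bar', 'stress')))  # ['FAIL', 'PASS', 'SLOW']
  print(sorted(get_outcomes('baz/qux')))            # ['SLOW']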
diff --git a/src/v8/tools/testrunner/local/statusfile_unittest.py b/src/v8/tools/testrunner/local/statusfile_unittest.py
index f64ab34..299e332 100755
--- a/src/v8/tools/testrunner/local/statusfile_unittest.py
+++ b/src/v8/tools/testrunner/local/statusfile_unittest.py
@@ -87,7 +87,7 @@
)
def test_read_statusfile_section_true(self):
- rules, wildcards = statusfile.ReadStatusFile(
+ rules, prefix_rules = statusfile.ReadStatusFile(
TEST_STATUS_FILE % 'system==linux', make_variables())
self.assertEquals(
@@ -99,15 +99,15 @@
)
self.assertEquals(
{
- 'foo/*': set(['SLOW', 'FAIL']),
+ 'foo/': set(['SLOW', 'FAIL']),
},
- wildcards[''],
+ prefix_rules[''],
)
self.assertEquals({}, rules['default'])
- self.assertEquals({}, wildcards['default'])
+ self.assertEquals({}, prefix_rules['default'])
def test_read_statusfile_section_false(self):
- rules, wildcards = statusfile.ReadStatusFile(
+ rules, prefix_rules = statusfile.ReadStatusFile(
TEST_STATUS_FILE % 'system==windows', make_variables())
self.assertEquals(
@@ -119,15 +119,15 @@
)
self.assertEquals(
{
- 'foo/*': set(['PASS', 'SLOW']),
+ 'foo/': set(['PASS', 'SLOW']),
},
- wildcards[''],
+ prefix_rules[''],
)
self.assertEquals({}, rules['default'])
- self.assertEquals({}, wildcards['default'])
+ self.assertEquals({}, prefix_rules['default'])
def test_read_statusfile_section_variant(self):
- rules, wildcards = statusfile.ReadStatusFile(
+ rules, prefix_rules = statusfile.ReadStatusFile(
TEST_STATUS_FILE % 'system==linux and variant==default',
make_variables(),
)
@@ -141,9 +141,9 @@
)
self.assertEquals(
{
- 'foo/*': set(['PASS', 'SLOW']),
+ 'foo/': set(['PASS', 'SLOW']),
},
- wildcards[''],
+ prefix_rules[''],
)
self.assertEquals(
{
@@ -153,9 +153,9 @@
)
self.assertEquals(
{
- 'foo/*': set(['FAIL']),
+ 'foo/': set(['FAIL']),
},
- wildcards['default'],
+ prefix_rules['default'],
)
diff --git a/src/v8/tools/testrunner/local/testsuite.py b/src/v8/tools/testrunner/local/testsuite.py
index 3b8f956..6a9e983 100644
--- a/src/v8/tools/testrunner/local/testsuite.py
+++ b/src/v8/tools/testrunner/local/testsuite.py
@@ -30,53 +30,65 @@
import imp
import os
-from . import commands
+from . import command
from . import statusfile
from . import utils
-from ..objects import testcase
-from variants import ALL_VARIANTS, ALL_VARIANT_FLAGS, FAST_VARIANT_FLAGS
+from ..objects.testcase import TestCase
+from variants import ALL_VARIANTS, ALL_VARIANT_FLAGS
-FAST_VARIANTS = set(["default", "turbofan"])
STANDARD_VARIANT = set(["default"])
-class VariantGenerator(object):
+class LegacyVariantsGenerator(object):
def __init__(self, suite, variants):
self.suite = suite
self.all_variants = ALL_VARIANTS & variants
- self.fast_variants = FAST_VARIANTS & variants
self.standard_variant = STANDARD_VARIANT & variants
- def FilterVariantsByTest(self, testcase):
- result = self.all_variants
- if testcase.outcomes:
- if statusfile.OnlyStandardVariant(testcase.outcomes):
- return self.standard_variant
- if statusfile.OnlyFastVariants(testcase.outcomes):
- result = self.fast_variants
- return result
+ def FilterVariantsByTest(self, test):
+ if test.only_standard_variant:
+ return self.standard_variant
+ return self.all_variants
- def GetFlagSets(self, testcase, variant):
- if testcase.outcomes and statusfile.OnlyFastVariants(testcase.outcomes):
- return FAST_VARIANT_FLAGS[variant]
- else:
- return ALL_VARIANT_FLAGS[variant]
+ def GetFlagSets(self, test, variant):
+ return ALL_VARIANT_FLAGS[variant]
+
+
+class StandardLegacyVariantsGenerator(LegacyVariantsGenerator):
+ def FilterVariantsByTest(self, testcase):
+ return self.standard_variant
+
+
+class VariantsGenerator(object):
+ def __init__(self, variants):
+ self._all_variants = [v for v in variants if v in ALL_VARIANTS]
+ self._standard_variant = [v for v in variants if v in STANDARD_VARIANT]
+
+ def gen(self, test):
+ """Generator producing (variant, flags, procid suffix) tuples."""
+ flags_set = self._get_flags_set(test)
+ for n, variant in enumerate(self._get_variants(test)):
+ yield (variant, flags_set[variant][0], n)
+
+ def _get_flags_set(self, test):
+ return ALL_VARIANT_FLAGS
+
+ def _get_variants(self, test):
+ if test.only_standard_variant:
+ return self._standard_variant
+ return self._all_variants
class TestSuite(object):
-
@staticmethod
- def LoadTestSuite(root, global_init=True):
+ def LoadTestSuite(root):
name = root.split(os.path.sep)[-1]
f = None
try:
(f, pathname, description) = imp.find_module("testcfg", [root])
module = imp.load_module(name + "_testcfg", f, pathname, description)
return module.GetSuite(name, root)
- except ImportError:
- # Use default if no testcfg is present.
- return GoogleTestSuite(name, root)
finally:
if f:
f.close()
@@ -86,154 +98,77 @@
self.name = name # string
self.root = root # string containing path
self.tests = None # list of TestCase objects
- self.rules = None # dictionary mapping test path to list of outcomes
- self.wildcards = None # dictionary mapping test paths to list of outcomes
- self.total_duration = None # float, assigned on demand
-
- def shell(self):
- return "d8"
-
- def suffix(self):
- return ".js"
+ self.statusfile = None
def status_file(self):
return "%s/%s.status" % (self.root, self.name)
- # Used in the status file and for stdout printing.
- def CommonTestName(self, testcase):
- if utils.IsWindows():
- return testcase.path.replace("\\", "/")
- else:
- return testcase.path
-
def ListTests(self, context):
raise NotImplementedError
- def _VariantGeneratorFactory(self):
+ def _LegacyVariantsGeneratorFactory(self):
"""The variant generator class to be used."""
- return VariantGenerator
+ return LegacyVariantsGenerator
- def CreateVariantGenerator(self, variants):
+ def CreateLegacyVariantsGenerator(self, variants):
"""Return a generator for the testing variants of this suite.
Args:
variants: List of variant names to be run as specified by the test
runner.
- Returns: An object of type VariantGenerator.
+ Returns: An object of type LegacyVariantsGenerator.
"""
- return self._VariantGeneratorFactory()(self, set(variants))
+ return self._LegacyVariantsGeneratorFactory()(self, set(variants))
- def PrepareSources(self):
- """Called once before multiprocessing for doing file-system operations.
+ def get_variants_gen(self, variants):
+ return self._variants_gen_class()(variants)
- This should not access the network. For network access use the method
- below.
- """
- pass
-
- def DownloadData(self):
- pass
+ def _variants_gen_class(self):
+ return VariantsGenerator
def ReadStatusFile(self, variables):
- with open(self.status_file()) as f:
- self.rules, self.wildcards = (
- statusfile.ReadStatusFile(f.read(), variables))
+ self.statusfile = statusfile.StatusFile(self.status_file(), variables)
def ReadTestCases(self, context):
self.tests = self.ListTests(context)
- @staticmethod
- def _FilterSlow(slow, mode):
- return (mode == "run" and not slow) or (mode == "skip" and slow)
- @staticmethod
- def _FilterPassFail(pass_fail, mode):
- return (mode == "run" and not pass_fail) or (mode == "skip" and pass_fail)
+ def FilterTestCasesByStatus(self,
+ slow_tests_mode=None,
+ pass_fail_tests_mode=None):
+ """Filters tests by outcomes from status file.
- def FilterTestCasesByStatus(self, warn_unused_rules,
- slow_tests="dontcare",
- pass_fail_tests="dontcare",
- variants=False):
+ Status file has to be loaded before using this function.
- # Use only variants-dependent rules and wildcards when filtering
- # respective test cases and generic rules when filtering generic test
- # cases.
- if not variants:
- rules = self.rules[""]
- wildcards = self.wildcards[""]
- else:
- # We set rules and wildcards to a variant-specific version for each test
- # below.
- rules = {}
- wildcards = {}
+ Args:
+ slow_tests_mode: What to do with slow tests.
+ pass_fail_tests_mode: What to do with pass or fail tests.
- filtered = []
+ Mode options:
+ None (default) - don't skip
+ "skip" - skip if slow/pass_fail
+ "run" - skip if not slow/pass_fail
+ """
+ def _skip_slow(is_slow, mode):
+ return (
+ (mode == 'run' and not is_slow) or
+ (mode == 'skip' and is_slow))
- # Remember used rules as tuples of (rule, variant), where variant is "" for
- # variant-independent rules.
- used_rules = set()
+ def _skip_pass_fail(pass_fail, mode):
+ return (
+ (mode == 'run' and not pass_fail) or
+ (mode == 'skip' and pass_fail))
- for t in self.tests:
- slow = False
- pass_fail = False
- testname = self.CommonTestName(t)
- variant = t.variant or ""
- if variants:
- rules = self.rules[variant]
- wildcards = self.wildcards[variant]
- if testname in rules:
- used_rules.add((testname, variant))
- # Even for skipped tests, as the TestCase object stays around and
- # PrintReport() uses it.
- t.outcomes = t.outcomes | rules[testname]
- if statusfile.DoSkip(t.outcomes):
- continue # Don't add skipped tests to |filtered|.
- for outcome in t.outcomes:
- if outcome.startswith('Flags: '):
- t.flags += outcome[7:].split()
- slow = statusfile.IsSlow(t.outcomes)
- pass_fail = statusfile.IsPassOrFail(t.outcomes)
- skip = False
- for rule in wildcards:
- assert rule[-1] == '*'
- if testname.startswith(rule[:-1]):
- used_rules.add((rule, variant))
- t.outcomes = t.outcomes | wildcards[rule]
- if statusfile.DoSkip(t.outcomes):
- skip = True
- break # "for rule in wildcards"
- slow = slow or statusfile.IsSlow(t.outcomes)
- pass_fail = pass_fail or statusfile.IsPassOrFail(t.outcomes)
- if (skip
- or self._FilterSlow(slow, slow_tests)
- or self._FilterPassFail(pass_fail, pass_fail_tests)):
- continue # "for t in self.tests"
- filtered.append(t)
- self.tests = filtered
+ def _compliant(test):
+ if test.do_skip:
+ return False
+ if _skip_slow(test.is_slow, slow_tests_mode):
+ return False
+ if _skip_pass_fail(test.is_pass_or_fail, pass_fail_tests_mode):
+ return False
+ return True
- if not warn_unused_rules:
- return
-
- if not variants:
- for rule in self.rules[""]:
- if (rule, "") not in used_rules:
- print("Unused rule: %s -> %s (variant independent)" % (
- rule, self.rules[""][rule]))
- for rule in self.wildcards[""]:
- if (rule, "") not in used_rules:
- print("Unused rule: %s -> %s (variant independent)" % (
- rule, self.wildcards[""][rule]))
- else:
- for variant in ALL_VARIANTS:
- for rule in self.rules[variant]:
- if (rule, variant) not in used_rules:
- print("Unused rule: %s -> %s (variant: %s)" % (
- rule, self.rules[variant][rule], variant))
- for rule in self.wildcards[variant]:
- if (rule, variant) not in used_rules:
- print("Unused rule: %s -> %s (variant: %s)" % (
- rule, self.wildcards[variant][rule], variant))
-
+ self.tests = filter(_compliant, self.tests)
def FilterTestCasesByArgs(self, args):
"""Filter test cases based on command-line arguments.
@@ -260,100 +195,14 @@
break
self.tests = filtered
- def GetFlagsForTestCase(self, testcase, context):
+ def _create_test(self, path, **kwargs):
+ test = self._test_class()(self, path, self._path_to_name(path), **kwargs)
+ return test
+
+ def _test_class(self):
raise NotImplementedError
- def GetSourceForTest(self, testcase):
- return "(no source available)"
-
- def IsFailureOutput(self, testcase):
- return testcase.output.exit_code != 0
-
- def IsNegativeTest(self, testcase):
- return False
-
- def HasFailed(self, testcase):
- execution_failed = self.IsFailureOutput(testcase)
- if self.IsNegativeTest(testcase):
- return not execution_failed
- else:
- return execution_failed
-
- def GetOutcome(self, testcase):
- if testcase.output.HasCrashed():
- return statusfile.CRASH
- elif testcase.output.HasTimedOut():
- return statusfile.TIMEOUT
- elif self.HasFailed(testcase):
- return statusfile.FAIL
- else:
- return statusfile.PASS
-
- def HasUnexpectedOutput(self, testcase):
- outcome = self.GetOutcome(testcase)
- return not outcome in (testcase.outcomes or [statusfile.PASS])
-
- def StripOutputForTransmit(self, testcase):
- if not self.HasUnexpectedOutput(testcase):
- testcase.output.stdout = ""
- testcase.output.stderr = ""
-
- def CalculateTotalDuration(self):
- self.total_duration = 0.0
- for t in self.tests:
- self.total_duration += t.duration
- return self.total_duration
-
-
-class StandardVariantGenerator(VariantGenerator):
- def FilterVariantsByTest(self, testcase):
- return self.standard_variant
-
-
-class GoogleTestSuite(TestSuite):
- def __init__(self, name, root):
- super(GoogleTestSuite, self).__init__(name, root)
-
- def ListTests(self, context):
- shell = os.path.abspath(os.path.join(context.shell_dir, self.shell()))
+ def _path_to_name(self, path):
if utils.IsWindows():
- shell += ".exe"
-
- output = None
- for i in xrange(3): # Try 3 times in case of errors.
- output = commands.Execute(context.command_prefix +
- [shell, "--gtest_list_tests"] +
- context.extra_flags)
- if output.exit_code == 0:
- break
- print "Test executable failed to list the tests (try %d).\n\nStdout:" % i
- print output.stdout
- print "\nStderr:"
- print output.stderr
- print "\nExit code: %d" % output.exit_code
- else:
- raise Exception("Test executable failed to list the tests.")
-
- tests = []
- test_case = ''
- for line in output.stdout.splitlines():
- test_desc = line.strip().split()[0]
- if test_desc.endswith('.'):
- test_case = test_desc
- elif test_case and test_desc:
- test = testcase.TestCase(self, test_case + test_desc)
- tests.append(test)
- tests.sort(key=lambda t: t.path)
- return tests
-
- def GetFlagsForTestCase(self, testcase, context):
- return (testcase.flags + ["--gtest_filter=" + testcase.path] +
- ["--gtest_random_seed=%s" % context.random_seed] +
- ["--gtest_print_time=0"] +
- context.mode_flags)
-
- def _VariantGeneratorFactory(self):
- return StandardVariantGenerator
-
- def shell(self):
- return self.name
+ return path.replace("\\", "/")
+ return path
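A condensed, runnable sketch of the new VariantsGenerator flow defined above. FakeTest is a hypothetical stand-in for a TestCase object, and the flag table is trimmed to two variants.

  ALL_VARIANT_FLAGS = {
    'default': [[]],
    'stress': [['--stress-opt', '--always-opt']],
  }

  class VariantsGenerator(object):
    def __init__(self, variants):
      self._all_variants = [v for v in variants if v in ALL_VARIANT_FLAGS]
      self._standard_variant = [v for v in variants if v == 'default']

    def gen(self, test):
      # Yields (variant, flags, procid suffix) for each applicable variant.
      for n, variant in enumerate(self._get_variants(test)):
        yield (variant, ALL_VARIANT_FLAGS[variant][0], n)

    def _get_variants(self, test):
      if test.only_standard_variant:
        return self._standard_variant
      return self._all_variants

  class FakeTest(object):
    only_standard_variant = False

  for variant, flags, n in VariantsGenerator(['default', 'stress']).gen(FakeTest()):
    print('%s %s %d' % (variant, flags, n))
  # default [] 0
  # stress ['--stress-opt', '--always-opt'] 1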
diff --git a/src/v8/tools/testrunner/local/testsuite_unittest.py b/src/v8/tools/testrunner/local/testsuite_unittest.py
index 1e10ef5..efefe4c 100755
--- a/src/v8/tools/testrunner/local/testsuite_unittest.py
+++ b/src/v8/tools/testrunner/local/testsuite_unittest.py
@@ -19,46 +19,37 @@
class TestSuiteTest(unittest.TestCase):
def test_filter_testcases_by_status_first_pass(self):
suite = TestSuite('foo', 'bar')
- suite.tests = [
- TestCase(suite, 'foo/bar'),
- TestCase(suite, 'baz/bar'),
- ]
suite.rules = {
'': {
'foo/bar': set(['PASS', 'SKIP']),
'baz/bar': set(['PASS', 'FAIL']),
},
}
- suite.wildcards = {
+ suite.prefix_rules = {
'': {
- 'baz/*': set(['PASS', 'SLOW']),
+ 'baz/': set(['PASS', 'SLOW']),
},
}
- suite.FilterTestCasesByStatus(warn_unused_rules=False)
+ suite.tests = [
+ TestCase(suite, 'foo/bar', 'foo/bar'),
+ TestCase(suite, 'baz/bar', 'baz/bar'),
+ ]
+ suite.FilterTestCasesByStatus()
self.assertEquals(
- [TestCase(suite, 'baz/bar')],
+ [TestCase(suite, 'baz/bar', 'baz/bar')],
suite.tests,
)
- self.assertEquals(set(['PASS', 'FAIL', 'SLOW']), suite.tests[0].outcomes)
+ outcomes = suite.GetStatusFileOutcomes(suite.tests[0].name,
+ suite.tests[0].variant)
+ self.assertEquals(set(['PASS', 'FAIL', 'SLOW']), outcomes)
def test_filter_testcases_by_status_second_pass(self):
suite = TestSuite('foo', 'bar')
- test1 = TestCase(suite, 'foo/bar')
- test2 = TestCase(suite, 'baz/bar')
-
- # Contrived outcomes from filtering by variant-independent rules.
- test1.outcomes = set(['PREV'])
- test2.outcomes = set(['PREV'])
-
- suite.tests = [
- test1.CopyAddingFlags(variant='default', flags=[]),
- test1.CopyAddingFlags(variant='stress', flags=['-v']),
- test2.CopyAddingFlags(variant='default', flags=[]),
- test2.CopyAddingFlags(variant='stress', flags=['-v']),
- ]
-
suite.rules = {
+ '': {
+ 'foo/bar': set(['PREV']),
+ },
'default': {
'foo/bar': set(['PASS', 'SKIP']),
'baz/bar': set(['PASS', 'FAIL']),
@@ -67,32 +58,64 @@
'baz/bar': set(['SKIP']),
},
}
- suite.wildcards = {
+ suite.prefix_rules = {
+ '': {
+ 'baz/': set(['PREV']),
+ },
'default': {
- 'baz/*': set(['PASS', 'SLOW']),
+ 'baz/': set(['PASS', 'SLOW']),
},
'stress': {
- 'foo/*': set(['PASS', 'SLOW']),
+ 'foo/': set(['PASS', 'SLOW']),
},
}
- suite.FilterTestCasesByStatus(warn_unused_rules=False, variants=True)
+
+ test1 = TestCase(suite, 'foo/bar', 'foo/bar')
+ test2 = TestCase(suite, 'baz/bar', 'baz/bar')
+ suite.tests = [
+ test1.create_variant(variant='default', flags=[]),
+ test1.create_variant(variant='stress', flags=['-v']),
+ test2.create_variant(variant='default', flags=[]),
+ test2.create_variant(variant='stress', flags=['-v']),
+ ]
+
+ suite.FilterTestCasesByStatus()
self.assertEquals(
[
- TestCase(suite, 'foo/bar', flags=['-v']),
- TestCase(suite, 'baz/bar'),
+ TestCase(suite, 'foo/bar', 'foo/bar').create_variant(None, ['-v']),
+ TestCase(suite, 'baz/bar', 'baz/bar'),
],
suite.tests,
)
self.assertEquals(
- set(['PASS', 'SLOW', 'PREV']),
- suite.tests[0].outcomes,
+ set(['PREV', 'PASS', 'SLOW']),
+ suite.GetStatusFileOutcomes(suite.tests[0].name,
+ suite.tests[0].variant),
)
self.assertEquals(
- set(['PASS', 'FAIL', 'SLOW', 'PREV']),
- suite.tests[1].outcomes,
+ set(['PREV', 'PASS', 'FAIL', 'SLOW']),
+ suite.GetStatusFileOutcomes(suite.tests[1].name,
+ suite.tests[1].variant),
)
+ def test_fail_ok_outcome(self):
+ suite = TestSuite('foo', 'bar')
+ suite.rules = {
+ '': {
+ 'foo/bar': set(['FAIL_OK']),
+ 'baz/bar': set(['FAIL']),
+ },
+ }
+ suite.prefix_rules = {}
+ suite.tests = [
+ TestCase(suite, 'foo/bar', 'foo/bar'),
+ TestCase(suite, 'baz/bar', 'baz/bar'),
+ ]
+
+ for t in suite.tests:
+ self.assertEquals(['FAIL'], t.expected_outcomes)
+
if __name__ == '__main__':
unittest.main()
diff --git a/src/v8/tools/testrunner/local/utils.py b/src/v8/tools/testrunner/local/utils.py
index 3e79e44..bf8c3d9 100644
--- a/src/v8/tools/testrunner/local/utils.py
+++ b/src/v8/tools/testrunner/local/utils.py
@@ -26,10 +26,10 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-import os
from os.path import exists
from os.path import isdir
from os.path import join
+import os
import platform
import re
import subprocess
diff --git a/src/v8/tools/testrunner/local/variants.py b/src/v8/tools/testrunner/local/variants.py
index 4d274ab..f1e9ad3 100644
--- a/src/v8/tools/testrunner/local/variants.py
+++ b/src/v8/tools/testrunner/local/variants.py
@@ -4,27 +4,26 @@
# Use this to run several variants of the tests.
ALL_VARIANT_FLAGS = {
+ "code_serializer": [["--cache=code"]],
"default": [[]],
+ "future": [["--future"]],
+ # Alias of exhaustive variants, but triggering new test framework features.
+ "infra_staging": [[]],
+ "liftoff": [["--liftoff"]],
+ "minor_mc": [["--minor-mc"]],
+  # No optimization means all optimizations are disabled; even
+  # OptimizeFunctionOnNextCall does not force optimization and turns into a
+  # nop. Please see
+ # https://chromium-review.googlesource.com/c/452620/ for more discussion.
+ "nooptimization": [["--noopt"]],
+ "slow_path": [["--force-slow-path"]],
"stress": [["--stress-opt", "--always-opt"]],
- # No optimization means disable all optimizations. OptimizeFunctionOnNextCall
- # would not force optimization too. It turns into a Nop. Please see
- # https://chromium-review.googlesource.com/c/452620/ for more discussion.
- "nooptimization": [["--noopt"]],
- "stress_asm_wasm": [["--validate-asm", "--stress-validate-asm", "--suppress-asm-messages"]],
- "wasm_traps": [["--wasm_guard_pages", "--wasm_trap_handler", "--invoke-weak-callbacks"]],
+ "stress_background_compile": [["--background-compile", "--stress-background-compile"]],
+ "stress_incremental_marking": [["--stress-incremental-marking"]],
+ # Trigger stress sampling allocation profiler with sample interval = 2^14
+ "stress_sampling": [["--stress-sampling-allocation-profiler=16384"]],
+ "trusted": [["--no-untrusted-code-mitigations"]],
+ "wasm_traps": [["--wasm_trap_handler", "--invoke-weak-callbacks", "--wasm-jit-to-native"]],
+ "wasm_no_native": [["--no-wasm-jit-to-native"]],
}
-# FAST_VARIANTS implies no --always-opt.
-FAST_VARIANT_FLAGS = {
- "default": [[]],
- "stress": [["--stress-opt"]],
- # No optimization means disable all optimizations. OptimizeFunctionOnNextCall
- # would not force optimization too. It turns into a Nop. Please see
- # https://chromium-review.googlesource.com/c/452620/ for more discussion.
- "nooptimization": [["--noopt"]],
- "stress_asm_wasm": [["--validate-asm", "--stress-validate-asm", "--suppress-asm-messages"]],
- "wasm_traps": [["--wasm_guard_pages", "--wasm_trap_handler", "--invoke-weak-callbacks"]],
-}
-
-ALL_VARIANTS = set(["default", "stress", "nooptimization", "stress_asm_wasm",
- "wasm_traps"])
+ALL_VARIANTS = set(ALL_VARIANT_FLAGS.keys())
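Deriving ALL_VARIANTS from the flag table keeps the two in sync by construction. A trimmed sketch of expanding the table into d8 invocations; the command line is illustrative only.

  ALL_VARIANT_FLAGS = {
    'default': [[]],
    'nooptimization': [['--noopt']],
    'stress': [['--stress-opt', '--always-opt']],
  }
  ALL_VARIANTS = set(ALL_VARIANT_FLAGS.keys())

  for variant in sorted(ALL_VARIANTS):
    for flag_set in ALL_VARIANT_FLAGS[variant]:
      # Each variant contributes one or more flag sets per test run.
      print('d8 %s test.js' % ' '.join(flag_set))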
diff --git a/src/v8/tools/testrunner/local/verbose.py b/src/v8/tools/testrunner/local/verbose.py
index 00c330d..49e8085 100644
--- a/src/v8/tools/testrunner/local/verbose.py
+++ b/src/v8/tools/testrunner/local/verbose.py
@@ -35,48 +35,53 @@
REPORT_TEMPLATE = (
"""Total: %(total)i tests
* %(skipped)4d tests will be skipped
- * %(timeout)4d tests are expected to timeout sometimes
* %(nocrash)4d tests are expected to be flaky but not crash
* %(pass)4d tests are expected to pass
* %(fail_ok)4d tests are expected to fail that we won't fix
- * %(fail)4d tests are expected to fail that we should fix""")
+ * %(fail)4d tests are expected to fail that we should fix
+ * %(crash)4d tests are expected to crash
+""")
+# TODO(majeski): Turn it into an observer.
def PrintReport(tests):
total = len(tests)
- skipped = timeout = nocrash = passes = fail_ok = fail = 0
+ skipped = nocrash = passes = fail_ok = fail = crash = 0
for t in tests:
- if "outcomes" not in dir(t) or not t.outcomes:
- passes += 1
- continue
- o = t.outcomes
- if statusfile.DoSkip(o):
+ if t.do_skip:
skipped += 1
- continue
- if statusfile.TIMEOUT in o: timeout += 1
- if statusfile.IsPassOrFail(o): nocrash += 1
- if list(o) == [statusfile.PASS]: passes += 1
- if statusfile.IsFailOk(o): fail_ok += 1
- if list(o) == [statusfile.FAIL]: fail += 1
+ elif t.is_pass_or_fail:
+ nocrash += 1
+ elif t.is_fail_ok:
+ fail_ok += 1
+ elif t.expected_outcomes == [statusfile.PASS]:
+ passes += 1
+ elif t.expected_outcomes == [statusfile.FAIL]:
+ fail += 1
+ elif t.expected_outcomes == [statusfile.CRASH]:
+ crash += 1
+ else:
+      assert False  # Unreachable. TODO: check this in the outcome parsing phase.
+
print REPORT_TEMPLATE % {
"total": total,
"skipped": skipped,
- "timeout": timeout,
"nocrash": nocrash,
"pass": passes,
"fail_ok": fail_ok,
- "fail": fail
+ "fail": fail,
+ "crash": crash,
}
def PrintTestSource(tests):
for test in tests:
- suite = test.suite
- source = suite.GetSourceForTest(test).strip()
- if len(source) > 0:
- print "--- begin source: %s/%s ---" % (suite.name, test.path)
- print source
- print "--- end source: %s/%s ---" % (suite.name, test.path)
+ print "--- begin source: %s ---" % test
+ if test.is_source_available():
+ print test.get_source()
+ else:
+ print '(no source available)'
+ print "--- end source: %s ---" % test
def FormatTime(d):
@@ -84,16 +89,16 @@
return time.strftime("%M:%S.", time.gmtime(d)) + ("%03i" % millis)
-def PrintTestDurations(suites, overall_time):
+def PrintTestDurations(suites, outputs, overall_time):
# Write the times to stderr to make it easy to separate from the
# test output.
print
sys.stderr.write("--- Total time: %s ---\n" % FormatTime(overall_time))
- timed_tests = [ t for s in suites for t in s.tests
- if t.duration is not None ]
- timed_tests.sort(lambda a, b: cmp(b.duration, a.duration))
+ timed_tests = [(t, outputs[t].duration) for s in suites for t in s.tests
+ if t in outputs]
+ timed_tests.sort(key=lambda (_, duration): duration, reverse=True)
index = 1
- for entry in timed_tests[:20]:
- t = FormatTime(entry.duration)
- sys.stderr.write("%4i (%s) %s\n" % (index, t, entry.GetLabel()))
+ for test, duration in timed_tests[:20]:
+ t = FormatTime(duration)
+ sys.stderr.write("%4i (%s) %s\n" % (index, t, test))
index += 1
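For reference, FormatTime above renders durations as MM:SS.mmm. The millisecond computation is elided in this hunk, so the sketch below assumes the usual rounding; that line is an assumption, not taken from the diff.

  import time

  def FormatTime(d):
    millis = round(d * 1000) % 1000  # assumed; not visible in this hunk
    return time.strftime("%M:%S.", time.gmtime(d)) + ("%03i" % millis)

  print(FormatTime(83.456))  # 01:23.456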
diff --git a/src/v8/tools/testrunner/network/__init__.py b/src/v8/tools/testrunner/network/__init__.py
deleted file mode 100644
index 202a262..0000000
--- a/src/v8/tools/testrunner/network/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/src/v8/tools/testrunner/network/distro.py b/src/v8/tools/testrunner/network/distro.py
deleted file mode 100644
index 9d5a471..0000000
--- a/src/v8/tools/testrunner/network/distro.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-class Shell(object):
- def __init__(self, shell):
- self.shell = shell
- self.tests = []
- self.total_duration = 0.0
-
- def AddSuite(self, suite):
- self.tests += suite.tests
- self.total_duration += suite.total_duration
-
- def SortTests(self):
- self.tests.sort(cmp=lambda x, y: cmp(x.duration, y.duration))
-
-
-def Assign(suites, peers):
- total_work = 0.0
- for s in suites:
- total_work += s.CalculateTotalDuration()
-
- total_power = 0.0
- for p in peers:
- p.assigned_work = 0.0
- total_power += p.jobs * p.relative_performance
- for p in peers:
- p.needed_work = total_work * p.jobs * p.relative_performance / total_power
-
- shells = {}
- for s in suites:
- shell = s.shell()
- if not shell in shells:
- shells[shell] = Shell(shell)
- shells[shell].AddSuite(s)
- # Convert |shells| to list and sort it, shortest total_duration first.
- shells = [ shells[s] for s in shells ]
- shells.sort(cmp=lambda x, y: cmp(x.total_duration, y.total_duration))
- # Sort tests within each shell, longest duration last (so it's
- # pop()'ed first).
- for s in shells: s.SortTests()
- # Sort peers, least needed_work first.
- peers.sort(cmp=lambda x, y: cmp(x.needed_work, y.needed_work))
- index = 0
- for shell in shells:
- while len(shell.tests) > 0:
- while peers[index].needed_work <= 0:
- index += 1
- if index == len(peers):
- print("BIG FAT WARNING: Assigning tests to peers failed. "
- "Remaining tests: %d. Going to slow mode." % len(shell.tests))
- # Pick the least-busy peer. Sorting the list for each test
- # is terribly slow, but this is just an emergency fallback anyway.
- peers.sort(cmp=lambda x, y: cmp(x.needed_work, y.needed_work))
- peers[0].ForceAddOneTest(shell.tests.pop(), shell)
- # If the peer already has a shell assigned and would need this one
- # and then yet another, try to avoid it.
- peer = peers[index]
- if (shell.total_duration < peer.needed_work and
- len(peer.shells) > 0 and
- index < len(peers) - 1 and
- shell.total_duration <= peers[index + 1].needed_work):
- peers[index + 1].AddTests(shell)
- else:
- peer.AddTests(shell)
diff --git a/src/v8/tools/testrunner/network/endpoint.py b/src/v8/tools/testrunner/network/endpoint.py
deleted file mode 100644
index 516578a..0000000
--- a/src/v8/tools/testrunner/network/endpoint.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import multiprocessing
-import os
-import Queue
-import threading
-import time
-
-from ..local import execution
-from ..local import progress
-from ..local import testsuite
-from ..local import utils
-from ..server import compression
-
-
-class EndpointProgress(progress.ProgressIndicator):
- def __init__(self, sock, server, ctx):
- super(EndpointProgress, self).__init__()
- self.sock = sock
- self.server = server
- self.context = ctx
- self.results_queue = [] # Accessors must synchronize themselves.
- self.sender_lock = threading.Lock()
- self.senderthread = threading.Thread(target=self._SenderThread)
- self.senderthread.start()
-
- def HasRun(self, test, has_unexpected_output):
- # The runners that call this have a lock anyway, so this is safe.
- self.results_queue.append(test)
-
- def _SenderThread(self):
- keep_running = True
- tests = []
- self.sender_lock.acquire()
- while keep_running:
- time.sleep(0.1)
- # This should be "atomic enough" without locking :-)
- # (We don't care which list any new elements get appended to, as long
- # as we don't lose any and the last one comes last.)
- current = self.results_queue
- self.results_queue = []
- for c in current:
- if c is None:
- keep_running = False
- else:
- tests.append(c)
- if keep_running and len(tests) < 1:
- continue # Wait for more results.
- if len(tests) < 1: break # We're done here.
- result = []
- for t in tests:
- result.append(t.PackResult())
- try:
- compression.Send(result, self.sock)
- except:
- self.runner.terminate = True
- for t in tests:
- self.server.CompareOwnPerf(t, self.context.arch, self.context.mode)
- tests = []
- self.sender_lock.release()
-
-
-def Execute(workspace, ctx, tests, sock, server):
- suite_paths = utils.GetSuitePaths(os.path.join(workspace, "test"))
- suites = []
- for root in suite_paths:
- suite = testsuite.TestSuite.LoadTestSuite(
- os.path.join(workspace, "test", root))
- if suite:
- suite.SetupWorkingDirectory()
- suites.append(suite)
-
- suites_dict = {}
- for s in suites:
- suites_dict[s.name] = s
- s.tests = []
- for t in tests:
- suite = suites_dict[t.suite]
- t.suite = suite
- suite.tests.append(t)
-
- suites = [ s for s in suites if len(s.tests) > 0 ]
- for s in suites:
- s.DownloadData()
-
- progress_indicator = EndpointProgress(sock, server, ctx)
- runner = execution.Runner(suites, progress_indicator, ctx)
- try:
- runner.Run(server.jobs)
- except IOError, e:
- if e.errno == 2:
- message = ("File not found: %s, maybe you forgot to 'git add' it?" %
- e.filename)
- else:
- message = "%s" % e
- compression.Send([[-1, message]], sock)
- progress_indicator.HasRun(None, None) # Sentinel to signal the end.
- progress_indicator.sender_lock.acquire() # Released when sending is done.
- progress_indicator.sender_lock.release()
diff --git a/src/v8/tools/testrunner/network/network_execution.py b/src/v8/tools/testrunner/network/network_execution.py
deleted file mode 100644
index a954401..0000000
--- a/src/v8/tools/testrunner/network/network_execution.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import os
-import socket
-import subprocess
-import threading
-import time
-
-from . import distro
-from ..local import execution
-from ..local import perfdata
-from ..objects import peer
-from ..objects import workpacket
-from ..server import compression
-from ..server import constants
-from ..server import local_handler
-from ..server import signatures
-
-
-def GetPeers():
- data = local_handler.LocalQuery([constants.REQUEST_PEERS])
- if not data: return []
- return [ peer.Peer.Unpack(p) for p in data ]
-
-
-class NetworkedRunner(execution.Runner):
- def __init__(self, suites, progress_indicator, context, peers, workspace):
- self.suites = suites
- datapath = os.path.join("out", "testrunner_data")
- # TODO(machenbach): These fields should exist now in the superclass.
- # But there is no super constructor call. Check if this is a problem.
- self.perf_data_manager = perfdata.PerfDataManager(datapath)
- self.perfdata = self.perf_data_manager.GetStore(context.arch, context.mode)
- for s in suites:
- for t in s.tests:
- t.duration = self.perfdata.FetchPerfData(t) or 1.0
- self._CommonInit(suites, progress_indicator, context)
- self.tests = [] # Only used if we need to fall back to local execution.
- self.tests_lock = threading.Lock()
- self.peers = peers
- self.pubkey_fingerprint = None # Fetched later.
- self.base_rev = subprocess.check_output(
- "cd %s; git log -1 --format=%%H --grep=git-svn-id" % workspace,
- shell=True).strip()
- self.base_svn_rev = subprocess.check_output(
- "cd %s; git log -1 %s" # Get commit description.
- " | grep -e '^\s*git-svn-id:'" # Extract "git-svn-id" line.
- " | awk '{print $2}'" # Extract "repository@revision" part.
- " | sed -e 's/.*@//'" % # Strip away "repository@".
- (workspace, self.base_rev), shell=True).strip()
- self.patch = subprocess.check_output(
- "cd %s; git diff %s" % (workspace, self.base_rev), shell=True)
- self.binaries = {}
- self.initialization_lock = threading.Lock()
- self.initialization_lock.acquire() # Released when init is done.
- self._OpenLocalConnection()
- self.local_receiver_thread = threading.Thread(
- target=self._ListenLocalConnection)
- self.local_receiver_thread.daemon = True
- self.local_receiver_thread.start()
- self.initialization_lock.acquire()
- self.initialization_lock.release()
-
- def _OpenLocalConnection(self):
- self.local_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- code = self.local_socket.connect_ex(("localhost", constants.CLIENT_PORT))
- if code != 0:
- raise RuntimeError("Failed to connect to local server")
- compression.Send([constants.REQUEST_PUBKEY_FINGERPRINT], self.local_socket)
-
- def _ListenLocalConnection(self):
- release_lock_countdown = 1 # Pubkey.
- self.local_receiver = compression.Receiver(self.local_socket)
- while not self.local_receiver.IsDone():
- data = self.local_receiver.Current()
- if data[0] == constants.REQUEST_PUBKEY_FINGERPRINT:
- pubkey = data[1]
- if not pubkey: raise RuntimeError("Received empty public key")
- self.pubkey_fingerprint = pubkey
- release_lock_countdown -= 1
- if release_lock_countdown == 0:
- self.initialization_lock.release()
- release_lock_countdown -= 1 # Prevent repeated triggering.
- self.local_receiver.Advance()
-
- def Run(self, jobs):
- self.indicator.Starting()
- need_libv8 = False
- for s in self.suites:
- shell = s.shell()
- if shell not in self.binaries:
- path = os.path.join(self.context.shell_dir, shell)
- # Check if this is a shared library build.
- try:
- ldd = subprocess.check_output("ldd %s | grep libv8\\.so" % (path),
- shell=True)
- ldd = ldd.strip().split(" ")
- assert ldd[0] == "libv8.so"
- assert ldd[1] == "=>"
- need_libv8 = True
- binary_needs_libv8 = True
- libv8 = signatures.ReadFileAndSignature(ldd[2])
- except:
- binary_needs_libv8 = False
- binary = signatures.ReadFileAndSignature(path)
- if binary[0] is None:
- print("Error: Failed to create signature.")
- assert binary[1] != 0
- return binary[1]
- binary.append(binary_needs_libv8)
- self.binaries[shell] = binary
- if need_libv8:
- self.binaries["libv8.so"] = libv8
- distro.Assign(self.suites, self.peers)
- # Spawn one thread for each peer.
- threads = []
- for p in self.peers:
- thread = threading.Thread(target=self._TalkToPeer, args=[p])
- threads.append(thread)
- thread.start()
- try:
- for thread in threads:
- # Use a timeout so that signals (Ctrl+C) will be processed.
- thread.join(timeout=10000000)
- self._AnalyzePeerRuntimes()
- except KeyboardInterrupt:
- self.terminate = True
- raise
- except Exception, _e:
- # If there's an exception we schedule an interruption for any
- # remaining threads...
- self.terminate = True
- # ...and then reraise the exception to bail out.
- raise
- compression.Send(constants.END_OF_STREAM, self.local_socket)
- self.local_socket.close()
- if self.tests:
- self._RunInternal(jobs)
- self.indicator.Done()
- return not self.failed
-
- def _TalkToPeer(self, peer):
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- sock.settimeout(self.context.timeout + 10)
- code = sock.connect_ex((peer.address, constants.PEER_PORT))
- if code == 0:
- try:
- peer.runtime = None
- start_time = time.time()
- packet = workpacket.WorkPacket(peer=peer, context=self.context,
- base_revision=self.base_svn_rev,
- patch=self.patch,
- pubkey=self.pubkey_fingerprint)
- data, test_map = packet.Pack(self.binaries)
- compression.Send(data, sock)
- compression.Send(constants.END_OF_STREAM, sock)
- rec = compression.Receiver(sock)
- while not rec.IsDone() and not self.terminate:
- data_list = rec.Current()
- for data in data_list:
- test_id = data[0]
- if test_id < 0:
- # The peer is reporting an error.
- with self.lock:
- print("\nPeer %s reports error: %s" % (peer.address, data[1]))
- continue
- test = test_map.pop(test_id)
- test.MergeResult(data)
- try:
- self.perfdata.UpdatePerfData(test)
- except Exception, e:
- print("UpdatePerfData exception: %s" % e)
- pass # Just keep working.
- with self.lock:
- perf_key = self.perfdata.GetKey(test)
- compression.Send(
- [constants.INFORM_DURATION, perf_key, test.duration,
- self.context.arch, self.context.mode],
- self.local_socket)
- has_unexpected_output = test.suite.HasUnexpectedOutput(test)
- if has_unexpected_output:
- self.failed.append(test)
- if test.output.HasCrashed():
- self.crashed += 1
- else:
- self.succeeded += 1
- self.remaining -= 1
- self.indicator.HasRun(test, has_unexpected_output)
- rec.Advance()
- peer.runtime = time.time() - start_time
- except KeyboardInterrupt:
- sock.close()
- raise
- except Exception, e:
- print("Got exception: %s" % e)
- pass # Fall back to local execution.
- else:
- compression.Send([constants.UNRESPONSIVE_PEER, peer.address],
- self.local_socket)
- sock.close()
- if len(test_map) > 0:
- # Some tests have not received any results. Run them locally.
- print("\nNo results for %d tests, running them locally." % len(test_map))
- self._EnqueueLocally(test_map)
-
- def _EnqueueLocally(self, test_map):
- with self.tests_lock:
- for test in test_map:
- self.tests.append(test_map[test])
-
- def _AnalyzePeerRuntimes(self):
- total_runtime = 0.0
- total_work = 0.0
- for p in self.peers:
- if p.runtime is None:
- return
- total_runtime += p.runtime
- total_work += p.assigned_work
- for p in self.peers:
- p.assigned_work /= total_work
- p.runtime /= total_runtime
- perf_correction = p.assigned_work / p.runtime
- old_perf = p.relative_performance
- p.relative_performance = (old_perf + perf_correction) / 2.0
- compression.Send([constants.UPDATE_PERF, p.address,
- p.relative_performance],
- self.local_socket)
diff --git a/src/v8/tools/testrunner/objects/context.py b/src/v8/tools/testrunner/objects/context.py
index 6bcbfb6..a3dd56d 100644
--- a/src/v8/tools/testrunner/objects/context.py
+++ b/src/v8/tools/testrunner/objects/context.py
@@ -29,8 +29,8 @@
class Context():
def __init__(self, arch, mode, shell_dir, mode_flags, verbose, timeout,
isolates, command_prefix, extra_flags, noi18n, random_seed,
- no_sorting, rerun_failures_count, rerun_failures_max,
- predictable, no_harness, use_perf_data, sancov_dir):
+ no_sorting, rerun_failures_count, rerun_failures_max, no_harness,
+ use_perf_data, sancov_dir, infra_staging=False):
self.arch = arch
self.mode = mode
self.shell_dir = shell_dir
@@ -45,22 +45,7 @@
self.no_sorting = no_sorting
self.rerun_failures_count = rerun_failures_count
self.rerun_failures_max = rerun_failures_max
- self.predictable = predictable
self.no_harness = no_harness
self.use_perf_data = use_perf_data
self.sancov_dir = sancov_dir
-
- def Pack(self):
- return [self.arch, self.mode, self.mode_flags, self.timeout, self.isolates,
- self.command_prefix, self.extra_flags, self.noi18n,
- self.random_seed, self.no_sorting, self.rerun_failures_count,
- self.rerun_failures_max, self.predictable, self.no_harness,
- self.use_perf_data, self.sancov_dir]
-
- @staticmethod
- def Unpack(packed):
- # For the order of the fields, refer to Pack() above.
- return Context(packed[0], packed[1], None, packed[2], False,
- packed[3], packed[4], packed[5], packed[6], packed[7],
- packed[8], packed[9], packed[10], packed[11], packed[12],
- packed[13], packed[14], packed[15])
+ self.infra_staging = infra_staging
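
With networking gone, Context loses its Pack/Unpack round-trip and the predictable flag, and gains infra_staging with a False default. A minimal construction sketch, assuming tools/ is on sys.path so the testrunner package imports; all argument values below are hypothetical:

from testrunner.objects.context import Context

ctx = Context(
    arch='x64', mode='release', shell_dir='out/x64.release', mode_flags=[],
    verbose=False, timeout=60, isolates=False, command_prefix=[],
    extra_flags=[], noi18n=False, random_seed=0, no_sorting=False,
    rerun_failures_count=0, rerun_failures_max=0, no_harness=False,
    use_perf_data=True, sancov_dir=None)
assert not ctx.infra_staging  # the new flag defaults to False
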
diff --git a/src/v8/tools/testrunner/objects/output.py b/src/v8/tools/testrunner/objects/output.py
index b4bb01f..adc33c9 100644
--- a/src/v8/tools/testrunner/objects/output.py
+++ b/src/v8/tools/testrunner/objects/output.py
@@ -32,12 +32,13 @@
class Output(object):
- def __init__(self, exit_code, timed_out, stdout, stderr, pid):
+ def __init__(self, exit_code, timed_out, stdout, stderr, pid, duration):
self.exit_code = exit_code
self.timed_out = timed_out
self.stdout = stdout
self.stderr = stderr
self.pid = pid
+ self.duration = duration
def HasCrashed(self):
if utils.IsWindows():
@@ -51,11 +52,3 @@
def HasTimedOut(self):
return self.timed_out
-
- def Pack(self):
- return [self.exit_code, self.timed_out, self.stdout, self.stderr, self.pid]
-
- @staticmethod
- def Unpack(packed):
- # For the order of the fields, refer to Pack() above.
- return Output(packed[0], packed[1], packed[2], packed[3], packed[4])
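
Output now records its own duration; previously the duration lived on the TestCase and crossed the wire via Pack/Unpack. A small sketch with hypothetical values:

from testrunner.objects.output import Output

out = Output(exit_code=0, timed_out=False, stdout='ok\n', stderr='',
             pid=1234, duration=0.37)
print out.duration       # 0.37
print out.HasTimedOut()  # False
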
diff --git a/src/v8/tools/testrunner/objects/peer.py b/src/v8/tools/testrunner/objects/peer.py
deleted file mode 100644
index 18a6bec..0000000
--- a/src/v8/tools/testrunner/objects/peer.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-class Peer(object):
- def __init__(self, address, jobs, rel_perf, pubkey):
- self.address = address # string: IP address
- self.jobs = jobs # integer: number of CPUs
- self.relative_performance = rel_perf
- self.pubkey = pubkey # string: pubkey's fingerprint
- self.shells = set() # set of strings
- self.needed_work = 0
- self.assigned_work = 0
- self.tests = [] # list of TestCase objects
- self.trusting_me = False # This peer trusts my public key.
- self.trusted = False # I trust this peer's public key.
-
- def __str__(self):
- return ("Peer at %s, jobs: %d, performance: %.2f, trust I/O: %s/%s" %
- (self.address, self.jobs, self.relative_performance,
- self.trusting_me, self.trusted))
-
- def AddTests(self, shell):
- """Adds tests from |shell| to this peer.
-
- Stops when self.needed_work reaches zero, or when all of shell's tests
- are assigned."""
- assert self.needed_work > 0
- if shell.shell not in self.shells:
- self.shells.add(shell.shell)
- while len(shell.tests) > 0 and self.needed_work > 0:
- t = shell.tests.pop()
- self.needed_work -= t.duration
- self.assigned_work += t.duration
- shell.total_duration -= t.duration
- self.tests.append(t)
-
- def ForceAddOneTest(self, test, shell):
- """Forcibly adds another test to this peer, disregarding needed_work."""
- if shell.shell not in self.shells:
- self.shells.add(shell.shell)
- self.needed_work -= test.duration
- self.assigned_work += test.duration
- shell.total_duration -= test.duration
- self.tests.append(test)
-
-
- def Pack(self):
- """Creates a JSON serializable representation of this Peer."""
- return [self.address, self.jobs, self.relative_performance]
-
- @staticmethod
- def Unpack(packed):
- """Creates a Peer object built from a packed representation."""
- pubkey_dummy = "" # Callers of this don't care (only the server does).
- return Peer(packed[0], packed[1], packed[2], pubkey_dummy)
diff --git a/src/v8/tools/testrunner/objects/predictable.py b/src/v8/tools/testrunner/objects/predictable.py
new file mode 100644
index 0000000..ad93077
--- /dev/null
+++ b/src/v8/tools/testrunner/objects/predictable.py
@@ -0,0 +1,57 @@
+# Copyright 2017 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from ..local import statusfile
+from ..outproc import base as outproc_base
+from ..testproc.result import Result
+
+
+# Only check the exit code of the predictable_wrapper in
+# verify-predictable mode. Negative tests are not supported as they
+# usually also don't print allocation hashes. There are two versions of
+# negative tests: one specified by the test, the other specified through
+# the status file (e.g. known bugs).
+
+
+def get_outproc(test):
+ output_proc = test.output_proc
+ if output_proc.negative or statusfile.FAIL in test.expected_outcomes:
+ # TODO(majeski): Skip these tests instead of having special outproc.
+ return NeverUnexpectedOutputOutProc(output_proc)
+ return OutProc(output_proc)
+
+
+class OutProc(outproc_base.BaseOutProc):
+ """Output processor wrapper for predictable mode. It has custom process and
+ has_unexpected_output implementation, but for all other methods it simply
+ calls wrapped output processor.
+ """
+ def __init__(self, _outproc):
+ super(OutProc, self).__init__()
+ self._outproc = _outproc
+
+ def process(self, output):
+ return Result(self.has_unexpected_output(output), output)
+
+ def has_unexpected_output(self, output):
+ return output.exit_code != 0
+
+ def get_outcome(self, output):
+ return self._outproc.get_outcome(output)
+
+ @property
+ def negative(self):
+ return self._outproc.negative
+
+ @property
+ def expected_outcomes(self):
+ return self._outproc.expected_outcomes
+
+
+class NeverUnexpectedOutputOutProc(OutProc):
+ """Output processor wrapper for tests that we will return False for
+ has_unexpected_output in the predictable mode.
+ """
+ def has_unexpected_output(self, output):
+ return False
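
The wrapper's division of labor: in predictable mode only the exit-code check decides whether output is unexpected, while negativity and outcome queries delegate to the wrapped processor. A sketch with hypothetical stand-ins, assuming the testrunner package and its dependencies are importable:

from testrunner.objects import predictable

class FakeOutput(object):
  exit_code = 0

class FakeOutProc(object):
  negative = False
  expected_outcomes = ['PASS']
  def get_outcome(self, output):
    return 'PASS'

proc = predictable.OutProc(FakeOutProc())
print proc.has_unexpected_output(FakeOutput())  # False: exit code is 0
print proc.negative                             # False, via delegation
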
diff --git a/src/v8/tools/testrunner/objects/testcase.py b/src/v8/tools/testrunner/objects/testcase.py
index 37e3cb4..06db328 100644
--- a/src/v8/tools/testrunner/objects/testcase.py
+++ b/src/v8/tools/testrunner/objects/testcase.py
@@ -25,92 +25,274 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+import copy
+import os
+import re
+import shlex
-from . import output
+from ..outproc import base as outproc
+from ..local import command
+from ..local import statusfile
+from ..local import utils
+
+FLAGS_PATTERN = re.compile(r"//\s+Flags:(.*)")
+
+
class TestCase(object):
- def __init__(self, suite, path, variant=None, flags=None,
- override_shell=None):
+ def __init__(self, suite, path, name):
self.suite = suite # TestSuite object
+
self.path = path # string, e.g. 'div-mod', 'test-api/foo'
- self.flags = flags or [] # list of strings, flags specific to this test
- self.variant = variant # name of the used testing variant
- self.override_shell = override_shell
- self.outcomes = frozenset([])
- self.output = None
+ self.name = name # string that identifies test in the status file
+
+ self.variant = None # name of the used testing variant
+ self.variant_flags = [] # list of strings, flags specific to this test
+
self.id = None # int, used to map result back to TestCase instance
- self.duration = None # assigned during execution
self.run = 1 # The nth time this test is executed.
- self.env = {}
+ self.cmd = None
- def CopyAddingFlags(self, variant, flags):
- copy = TestCase(self.suite, self.path, variant, self.flags + flags,
- self.override_shell)
- copy.outcomes = self.outcomes
- copy.env = self.env
- return copy
+ # Fields used by the test processors.
+ self.origin = None # Test that this test is subtest of.
+ self.processor = None # Processor that created this subtest.
+ self.procid = '%s/%s' % (self.suite.name, self.name) # unique id
+    self.keep_output = False # If False, the output of this test may be dropped
- def PackTask(self):
+ self._statusfile_outcomes = None
+    self.expected_outcomes = None # optimization: None == [statusfile.PASS]
+ self._statusfile_flags = None
+ self._prepare_outcomes()
+
+ def create_subtest(self, processor, subtest_id, variant=None, flags=None,
+ keep_output=False):
+ subtest = copy.copy(self)
+ subtest.origin = self
+ subtest.processor = processor
+ subtest.procid += '.%s' % subtest_id
+ subtest.keep_output = keep_output
+ if variant is not None:
+ assert self.variant is None
+ subtest.variant = variant
+ subtest.variant_flags = flags
+ subtest._prepare_outcomes()
+ return subtest
+
+ def create_variant(self, variant, flags, procid_suffix=None):
+ """Makes a shallow copy of the object and updates variant, variant flags and
+ all fields that depend on it, e.g. expected outcomes.
+
+    Args:
+      variant - variant name
+      flags - flags to add to the origin test's variant flags
+      procid_suffix - suffix that keeps procid unique when multiple variants
+        share the same name.
"""
- Extracts those parts of this object that are required to run the test
- and returns them as a JSON serializable object.
+ other = copy.copy(self)
+ if not self.variant_flags:
+ other.variant_flags = flags
+ else:
+ other.variant_flags = self.variant_flags + flags
+ other.variant = variant
+ if procid_suffix:
+ other.procid += '[%s-%s]' % (variant, procid_suffix)
+ else:
+ other.procid += '[%s]' % variant
+
+ other._prepare_outcomes(variant != self.variant)
+
+ return other
+
+ def _prepare_outcomes(self, force_update=True):
+ if force_update or self._statusfile_outcomes is None:
+ def is_flag(outcome):
+ return outcome.startswith('--')
+ def not_flag(outcome):
+ return not is_flag(outcome)
+
+ outcomes = self.suite.statusfile.get_outcomes(self.name, self.variant)
+ self._statusfile_outcomes = filter(not_flag, outcomes)
+ self._statusfile_flags = filter(is_flag, outcomes)
+ self.expected_outcomes = (
+ self._parse_status_file_outcomes(self._statusfile_outcomes))
+
+ def _parse_status_file_outcomes(self, outcomes):
+ if (statusfile.FAIL_SLOPPY in outcomes and
+ '--use-strict' not in self.variant_flags):
+ return outproc.OUTCOMES_FAIL
+
+ expected_outcomes = []
+ if (statusfile.FAIL in outcomes or
+ statusfile.FAIL_OK in outcomes):
+ expected_outcomes.append(statusfile.FAIL)
+ if statusfile.CRASH in outcomes:
+ expected_outcomes.append(statusfile.CRASH)
+
+ # Do not add PASS if there is nothing else. Empty outcomes are converted to
+ # the global [PASS].
+ if expected_outcomes and statusfile.PASS in outcomes:
+ expected_outcomes.append(statusfile.PASS)
+
+ # Avoid creating multiple instances of a list with a single FAIL.
+ if expected_outcomes == outproc.OUTCOMES_FAIL:
+ return outproc.OUTCOMES_FAIL
+ return expected_outcomes or outproc.OUTCOMES_PASS
+
+ @property
+ def do_skip(self):
+ return statusfile.SKIP in self._statusfile_outcomes
+
+ @property
+ def is_slow(self):
+ return statusfile.SLOW in self._statusfile_outcomes
+
+ @property
+ def is_fail_ok(self):
+ return statusfile.FAIL_OK in self._statusfile_outcomes
+
+ @property
+ def is_pass_or_fail(self):
+ return (statusfile.PASS in self._statusfile_outcomes and
+ statusfile.FAIL in self._statusfile_outcomes and
+ statusfile.CRASH not in self._statusfile_outcomes)
+
+ @property
+ def only_standard_variant(self):
+ return statusfile.NO_VARIANTS in self._statusfile_outcomes
+
+ def get_command(self, context):
+ params = self._get_cmd_params(context)
+ env = self._get_cmd_env()
+ shell, shell_flags = self._get_shell_with_flags(context)
+ timeout = self._get_timeout(params, context.timeout)
+ return self._create_cmd(shell, shell_flags + params, env, timeout, context)
+
+ def _get_cmd_params(self, ctx):
+ """Gets command parameters and combines them in the following order:
+ - files [empty by default]
+ - extra flags (from command line)
+ - user flags (variant/fuzzer flags)
+ - statusfile flags
+ - mode flags (based on chosen mode)
+ - source flags (from source code) [empty by default]
+
+ The best way to modify how parameters are created is to only override
+ methods for getting partial parameters.
"""
- assert self.id is not None
- return [self.suitename(), self.path, self.variant, self.flags,
- self.override_shell, list(self.outcomes or []),
- self.id, self.env]
+ return (
+ self._get_files_params(ctx) +
+ self._get_extra_flags(ctx) +
+ self._get_variant_flags() +
+ self._get_statusfile_flags() +
+ self._get_mode_flags(ctx) +
+ self._get_source_flags() +
+ self._get_suite_flags(ctx)
+ )
- @staticmethod
- def UnpackTask(task):
- """Creates a new TestCase object based on packed task data."""
- # For the order of the fields, refer to PackTask() above.
- test = TestCase(str(task[0]), task[1], task[2], task[3], task[4])
- test.outcomes = frozenset(task[5])
- test.id = task[6]
- test.run = 1
- test.env = task[7]
- return test
+ def _get_cmd_env(self):
+ return {}
- def SetSuiteObject(self, suites):
- self.suite = suites[self.suite]
+ def _get_files_params(self, ctx):
+ return []
- def PackResult(self):
- """Serializes the output of the TestCase after it has run."""
- self.suite.StripOutputForTransmit(self)
- return [self.id, self.output.Pack(), self.duration]
+ def _get_extra_flags(self, ctx):
+ return ctx.extra_flags
- def MergeResult(self, result):
- """Applies the contents of a Result to this object."""
- assert result[0] == self.id
- self.output = output.Output.Unpack(result[1])
- self.duration = result[2]
+ def _get_variant_flags(self):
+ return self.variant_flags
- def suitename(self):
- return self.suite.name
+ def _get_statusfile_flags(self):
+ """Gets runtime flags from a status file.
- def GetLabel(self):
- return self.suitename() + "/" + self.suite.CommonTestName(self)
-
- def shell(self):
- if self.override_shell:
- return self.override_shell
- return self.suite.shell()
-
- def __getstate__(self):
- """Representation to pickle test cases.
-
- The original suite won't be sent beyond process boundaries. Instead
- send the name only and retrieve a process-local suite later.
+ Every outcome that starts with "--" is a flag.
"""
- return dict(self.__dict__, suite=self.suite.name)
+ return self._statusfile_flags
+
+ def _get_mode_flags(self, ctx):
+ return ctx.mode_flags
+
+ def _get_source_flags(self):
+ return []
+
+ def _get_suite_flags(self, ctx):
+ return []
+
+ def _get_shell_with_flags(self, ctx):
+ shell = self.get_shell()
+ shell_flags = []
+ if shell == 'd8':
+ shell_flags.append('--test')
+ if utils.IsWindows():
+ shell += '.exe'
+ if ctx.random_seed:
+ shell_flags.append('--random-seed=%s' % ctx.random_seed)
+ return shell, shell_flags
+
+ def _get_timeout(self, params, timeout):
+ if "--stress-opt" in params:
+ timeout *= 4
+ if "--noenable-vfp3" in params:
+ timeout *= 2
+
+ # TODO(majeski): make it slow outcome dependent.
+ timeout *= 2
+ return timeout
+
+ def get_shell(self):
+ return 'd8'
+
+ def _get_suffix(self):
+ return '.js'
+
+ def _create_cmd(self, shell, params, env, timeout, ctx):
+ return command.Command(
+ cmd_prefix=ctx.command_prefix,
+ shell=os.path.abspath(os.path.join(ctx.shell_dir, shell)),
+ args=params,
+ env=env,
+ timeout=timeout,
+ verbose=ctx.verbose
+ )
+
+ def _parse_source_flags(self, source=None):
+ source = source or self.get_source()
+ flags = []
+ for match in re.findall(FLAGS_PATTERN, source):
+ flags += shlex.split(match.strip())
+ return flags
+
+ def is_source_available(self):
+ return self._get_source_path() is not None
+
+ def get_source(self):
+ with open(self._get_source_path()) as f:
+ return f.read()
+
+ def _get_source_path(self):
+ return None
+
+ @property
+ def output_proc(self):
+ if self.expected_outcomes is outproc.OUTCOMES_PASS:
+ return outproc.DEFAULT
+ return outproc.OutProc(self.expected_outcomes)
def __cmp__(self, other):
# Make sure that test cases are sorted correctly if sorted without
# key function. But using a key function is preferred for speed.
return cmp(
- (self.suite.name, self.path, self.flags),
- (other.suite.name, other.path, other.flags),
+ (self.suite.name, self.name, self.variant_flags),
+ (other.suite.name, other.name, other.variant_flags)
)
+ def __hash__(self):
+ return hash((self.suite.name, self.name, ''.join(self.variant_flags)))
+
def __str__(self):
- return "[%s/%s %s]" % (self.suite.name, self.path, self.flags)
+ return self.suite.name + '/' + self.name
+
+ # TODO(majeski): Rename `id` field or `get_id` function since they're
+ # unrelated.
+ def get_id(self):
+ return '%s/%s %s' % (
+ self.suite.name, self.name, ' '.join(self.variant_flags))
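
The `// Flags:` parsing above is self-contained enough to demonstrate directly; _parse_source_flags collects every match of FLAGS_PATTERN and splits it shell-style:

import re
import shlex

FLAGS_PATTERN = re.compile(r"//\s+Flags:(.*)")
source = ("// Flags: --allow-natives-syntax --expose-gc\n"
          "// Flags: --stress-opt\n")
flags = []
for match in re.findall(FLAGS_PATTERN, source):
  flags += shlex.split(match.strip())
print flags  # ['--allow-natives-syntax', '--expose-gc', '--stress-opt']
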
diff --git a/src/v8/tools/testrunner/objects/workpacket.py b/src/v8/tools/testrunner/objects/workpacket.py
deleted file mode 100644
index d07efe7..0000000
--- a/src/v8/tools/testrunner/objects/workpacket.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-from . import context
-from . import testcase
-
-class WorkPacket(object):
- def __init__(self, peer=None, context=None, tests=None, binaries=None,
- base_revision=None, patch=None, pubkey=None):
- self.peer = peer
- self.context = context
- self.tests = tests
- self.binaries = binaries
- self.base_revision = base_revision
- self.patch = patch
- self.pubkey_fingerprint = pubkey
-
- def Pack(self, binaries_dict):
- """
- Creates a JSON serializable object containing the data of this
- work packet.
- """
- need_libv8 = False
- binaries = []
- for shell in self.peer.shells:
- prefetched_binary = binaries_dict[shell]
- binaries.append({"name": shell,
- "blob": prefetched_binary[0],
- "sign": prefetched_binary[1]})
- if prefetched_binary[2]:
- need_libv8 = True
- if need_libv8:
- libv8 = binaries_dict["libv8.so"]
- binaries.append({"name": "libv8.so",
- "blob": libv8[0],
- "sign": libv8[1]})
- tests = []
- test_map = {}
- for t in self.peer.tests:
- test_map[t.id] = t
- tests.append(t.PackTask())
- result = {
- "binaries": binaries,
- "pubkey": self.pubkey_fingerprint,
- "context": self.context.Pack(),
- "base_revision": self.base_revision,
- "patch": self.patch,
- "tests": tests
- }
- return result, test_map
-
- @staticmethod
- def Unpack(packed):
- """
- Creates a WorkPacket object from the given packed representation.
- """
- binaries = packed["binaries"]
- pubkey_fingerprint = packed["pubkey"]
- ctx = context.Context.Unpack(packed["context"])
- base_revision = packed["base_revision"]
- patch = packed["patch"]
- tests = [ testcase.TestCase.UnpackTask(t) for t in packed["tests"] ]
- return WorkPacket(context=ctx, tests=tests, binaries=binaries,
- base_revision=base_revision, patch=patch,
- pubkey=pubkey_fingerprint)
diff --git a/src/v8/tools/testrunner/outproc/__init__.py b/src/v8/tools/testrunner/outproc/__init__.py
new file mode 100644
index 0000000..4433538
--- /dev/null
+++ b/src/v8/tools/testrunner/outproc/__init__.py
@@ -0,0 +1,3 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
diff --git a/src/v8/tools/testrunner/outproc/base.py b/src/v8/tools/testrunner/outproc/base.py
new file mode 100644
index 0000000..9a9db4e
--- /dev/null
+++ b/src/v8/tools/testrunner/outproc/base.py
@@ -0,0 +1,166 @@
+# Copyright 2017 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import collections
+import itertools
+
+from ..local import statusfile
+from ..testproc.result import Result
+
+
+OUTCOMES_PASS = [statusfile.PASS]
+OUTCOMES_FAIL = [statusfile.FAIL]
+
+
+class BaseOutProc(object):
+ def process(self, output):
+ return Result(self.has_unexpected_output(output), output)
+
+ def has_unexpected_output(self, output):
+ return self.get_outcome(output) not in self.expected_outcomes
+
+ def get_outcome(self, output):
+ if output.HasCrashed():
+ return statusfile.CRASH
+ elif output.HasTimedOut():
+ return statusfile.TIMEOUT
+ elif self._has_failed(output):
+ return statusfile.FAIL
+ else:
+ return statusfile.PASS
+
+ def _has_failed(self, output):
+ execution_failed = self._is_failure_output(output)
+ if self.negative:
+ return not execution_failed
+ return execution_failed
+
+ def _is_failure_output(self, output):
+ return output.exit_code != 0
+
+ @property
+ def negative(self):
+ return False
+
+ @property
+ def expected_outcomes(self):
+ raise NotImplementedError()
+
+
+class Negative(object):
+ @property
+ def negative(self):
+ return True
+
+
+class PassOutProc(BaseOutProc):
+ """Output processor optimized for positive tests expected to PASS."""
+ def has_unexpected_output(self, output):
+ return self.get_outcome(output) != statusfile.PASS
+
+ @property
+ def expected_outcomes(self):
+ return OUTCOMES_PASS
+
+
+class OutProc(BaseOutProc):
+ """Output processor optimized for positive tests with expected outcomes
+ different than a single PASS.
+ """
+ def __init__(self, expected_outcomes):
+ self._expected_outcomes = expected_outcomes
+
+ @property
+ def expected_outcomes(self):
+ return self._expected_outcomes
+
+ # TODO(majeski): Inherit from PassOutProc in case of OUTCOMES_PASS and remove
+ # custom get/set state.
+ def __getstate__(self):
+ d = self.__dict__
+ if self._expected_outcomes is OUTCOMES_PASS:
+ d = d.copy()
+ del d['_expected_outcomes']
+ return d
+
+ def __setstate__(self, d):
+ if '_expected_outcomes' not in d:
+ d['_expected_outcomes'] = OUTCOMES_PASS
+ self.__dict__.update(d)
+
+
+# TODO(majeski): Override __reduce__ to make it deserialize as one instance.
+DEFAULT = PassOutProc()
+
+
+class ExpectedOutProc(OutProc):
+ """Output processor that has is_failure_output depending on comparing the
+ output with the expected output.
+ """
+ def __init__(self, expected_outcomes, expected_filename):
+ super(ExpectedOutProc, self).__init__(expected_outcomes)
+ self._expected_filename = expected_filename
+
+ def _is_failure_output(self, output):
+ with open(self._expected_filename, 'r') as f:
+ expected_lines = f.readlines()
+
+ for act_iterator in self._act_block_iterator(output):
+ for expected, actual in itertools.izip_longest(
+ self._expected_iterator(expected_lines),
+ act_iterator,
+ fillvalue=''
+ ):
+ if expected != actual:
+ return True
+ return False
+
+ def _act_block_iterator(self, output):
+ """Iterates over blocks of actual output lines."""
+ lines = output.stdout.splitlines()
+ start_index = 0
+ found_eqeq = False
+ for index, line in enumerate(lines):
+ # If a stress test separator is found:
+ if line.startswith('=='):
+ # Iterate over all lines before a separator except the first.
+ if not found_eqeq:
+ found_eqeq = True
+ else:
+ yield self._actual_iterator(lines[start_index:index])
+ # The next block of output lines starts after the separator.
+ start_index = index + 1
+    # Iterate over complete output if no separator was found, otherwise over
+    # the remaining block after the last separator.
+    if not found_eqeq:
+      yield self._actual_iterator(lines)
+    else:
+      yield self._actual_iterator(lines[start_index:])
+
+ def _actual_iterator(self, lines):
+ return self._iterator(lines, self._ignore_actual_line)
+
+ def _expected_iterator(self, lines):
+ return self._iterator(lines, self._ignore_expected_line)
+
+ def _ignore_actual_line(self, line):
+ """Ignore empty lines, valgrind output, Android output and trace
+ incremental marking output.
+ """
+ if not line:
+ return True
+ return (line.startswith('==') or
+ line.startswith('**') or
+ line.startswith('ANDROID') or
+ line.startswith('###') or
+ # FIXME(machenbach): The test driver shouldn't try to use slow
+ # asserts if they weren't compiled. This fails in optdebug=2.
+ line == 'Warning: unknown flag --enable-slow-asserts.' or
+ line == 'Try --help for options')
+
+ def _ignore_expected_line(self, line):
+ return not line
+
+ def _iterator(self, lines, ignore_predicate):
+ for line in lines:
+ line = line.strip()
+ if not ignore_predicate(line):
+ yield line
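
One subtlety in ExpectedOutProc._is_failure_output: pairing expected and actual lines with izip_longest and fillvalue='' makes a missing or extra line compare against the empty string, so length mismatches count as failures. A minimal illustration:

import itertools

expected = ['a', 'b', 'c']
actual = ['a', 'b']
mismatch = any(e != a for e, a in
               itertools.izip_longest(expected, actual, fillvalue=''))
print mismatch  # True: the unmatched 'c' pairs with ''
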
diff --git a/src/v8/tools/testrunner/outproc/message.py b/src/v8/tools/testrunner/outproc/message.py
new file mode 100644
index 0000000..bbfc1cd
--- /dev/null
+++ b/src/v8/tools/testrunner/outproc/message.py
@@ -0,0 +1,56 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import itertools
+import os
+import re
+
+from . import base
+
+
+class OutProc(base.OutProc):
+ def __init__(self, expected_outcomes, basepath, expected_fail):
+ super(OutProc, self).__init__(expected_outcomes)
+ self._basepath = basepath
+ self._expected_fail = expected_fail
+
+ def _is_failure_output(self, output):
+ fail = output.exit_code != 0
+ if fail != self._expected_fail:
+ return True
+
+ expected_lines = []
+ # Can't use utils.ReadLinesFrom() here because it strips whitespace.
+ with open(self._basepath + '.out') as f:
+ for line in f:
+ if line.startswith("#") or not line.strip():
+ continue
+ expected_lines.append(line)
+ raw_lines = output.stdout.splitlines()
+ actual_lines = [ s for s in raw_lines if not self._ignore_line(s) ]
+ if len(expected_lines) != len(actual_lines):
+ return True
+
+ env = {
+ 'basename': os.path.basename(self._basepath + '.js'),
+ }
+ for (expected, actual) in itertools.izip_longest(
+ expected_lines, actual_lines, fillvalue=''):
+ pattern = re.escape(expected.rstrip() % env)
+ pattern = pattern.replace('\\*', '.*')
+ pattern = pattern.replace('\\{NUMBER\\}', '\d+(?:\.\d*)?')
+ pattern = '^%s$' % pattern
+ if not re.match(pattern, actual):
+ return True
+ return False
+
+ def _ignore_line(self, string):
+ """Ignore empty lines, valgrind output, Android output."""
+ return (
+ not string or
+ not string.strip() or
+ string.startswith("==") or
+ string.startswith("**") or
+ string.startswith("ANDROID")
+ )
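
The expected-line matching turns each line into a regex: the %(basename)s placeholder is substituted, the result escaped, then the '*' and '{NUMBER}' wildcards are widened. A worked example with a hypothetical file name and message:

import re

env = {'basename': 'fail.js'}
expected = '%(basename)s:{NUMBER}: Error: *'
pattern = re.escape(expected.rstrip() % env)
pattern = pattern.replace('\\*', '.*')
pattern = pattern.replace('\\{NUMBER\\}', '\d+(?:\.\d*)?')
pattern = '^%s$' % pattern
print bool(re.match(pattern, 'fail.js:7: Error: boom'))  # True
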
diff --git a/src/v8/tools/testrunner/outproc/mkgrokdump.py b/src/v8/tools/testrunner/outproc/mkgrokdump.py
new file mode 100644
index 0000000..8efde12
--- /dev/null
+++ b/src/v8/tools/testrunner/outproc/mkgrokdump.py
@@ -0,0 +1,31 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import difflib
+
+from . import base
+
+
+class OutProc(base.OutProc):
+ def __init__(self, expected_outcomes, expected_path):
+ super(OutProc, self).__init__(expected_outcomes)
+ self._expected_path = expected_path
+
+ def _is_failure_output(self, output):
+ with open(self._expected_path) as f:
+ expected = f.read()
+ expected_lines = expected.splitlines()
+ actual_lines = output.stdout.splitlines()
+ diff = difflib.unified_diff(expected_lines, actual_lines, lineterm="",
+ fromfile="expected_path")
+ diffstring = '\n'.join(diff)
+    if diffstring != "":
+      if "generated from a non-shipping build" in output.stdout:
+        return False
+      if "generated from a shipping build" not in output.stdout:
+ output.stdout = "Unexpected output:\n\n" + output.stdout
+ return True
+ output.stdout = diffstring
+ return True
+ return False
diff --git a/src/v8/tools/testrunner/outproc/mozilla.py b/src/v8/tools/testrunner/outproc/mozilla.py
new file mode 100644
index 0000000..1400d0e
--- /dev/null
+++ b/src/v8/tools/testrunner/outproc/mozilla.py
@@ -0,0 +1,33 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from . import base
+
+
+def _is_failure_output(self, output):
+ return (
+ output.exit_code != 0 or
+ 'FAILED!' in output.stdout
+ )
+
+
+class OutProc(base.OutProc):
+ """Optimized for positive tests."""
+OutProc._is_failure_output = _is_failure_output
+
+
+class PassOutProc(base.PassOutProc):
+ """Optimized for positive tests expected to PASS."""
+PassOutProc._is_failure_output = _is_failure_output
+
+
+class NegOutProc(base.Negative, OutProc):
+ pass
+
+class NegPassOutProc(base.Negative, PassOutProc):
+ pass
+
+
+MOZILLA_PASS_DEFAULT = PassOutProc()
+MOZILLA_PASS_NEGATIVE = NegPassOutProc()
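
Assigning _is_failure_output after the class bodies shares one module-level function across both hierarchies without forcing a common base class; in Python 2 a plain function assigned on a class becomes a regular method. The mechanics in isolation:

def _shared(self):
  return 'shared'

class A(object):
  pass

class B(object):
  pass

A.method = _shared
B.method = _shared
print A().method(), B().method()  # shared shared
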
diff --git a/src/v8/tools/testrunner/outproc/test262.py b/src/v8/tools/testrunner/outproc/test262.py
new file mode 100644
index 0000000..b5eb554
--- /dev/null
+++ b/src/v8/tools/testrunner/outproc/test262.py
@@ -0,0 +1,54 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import re
+
+from . import base
+
+
+class ExceptionOutProc(base.OutProc):
+ """Output processor for tests with expected exception."""
+ def __init__(self, expected_outcomes, expected_exception=None):
+ super(ExceptionOutProc, self).__init__(expected_outcomes)
+ self._expected_exception = expected_exception
+
+ def _is_failure_output(self, output):
+ if output.exit_code != 0:
+ return True
+ if self._expected_exception != self._parse_exception(output.stdout):
+ return True
+ return 'FAILED!' in output.stdout
+
+ def _parse_exception(self, string):
+ # somefile:somelinenumber: someerror[: sometext]
+ # somefile might include an optional drive letter on windows e.g. "e:".
+ match = re.search(
+ '^(?:\w:)?[^:]*:[0-9]+: ([^: ]+?)($|: )', string, re.MULTILINE)
+ if match:
+ return match.group(1).strip()
+ else:
+ return None
+
+
+def _is_failure_output(self, output):
+ return (
+ output.exit_code != 0 or
+ 'FAILED!' in output.stdout
+ )
+
+
+class NoExceptionOutProc(base.OutProc):
+ """Output processor optimized for tests without expected exception."""
+NoExceptionOutProc._is_failure_output = _is_failure_output
+
+
+class PassNoExceptionOutProc(base.PassOutProc):
+ """
+ Output processor optimized for tests expected to PASS without expected
+ exception.
+ """
+PassNoExceptionOutProc._is_failure_output = _is_failure_output
+
+
+PASS_NO_EXCEPTION = PassNoExceptionOutProc()
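
The exception parser extracts the error name from the first 'file:line: Name[: text]' line of output. An example with a hypothetical message:

import re

line = 'test/language/foo.js:12: SyntaxError: Unexpected token'
match = re.search(
    '^(?:\w:)?[^:]*:[0-9]+: ([^: ]+?)($|: )', line, re.MULTILINE)
print match.group(1).strip()  # SyntaxError
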
diff --git a/src/v8/tools/testrunner/outproc/webkit.py b/src/v8/tools/testrunner/outproc/webkit.py
new file mode 100644
index 0000000..290e67d
--- /dev/null
+++ b/src/v8/tools/testrunner/outproc/webkit.py
@@ -0,0 +1,18 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from . import base
+
+
+class OutProc(base.ExpectedOutProc):
+ def _is_failure_output(self, output):
+ if output.exit_code != 0:
+ return True
+ return super(OutProc, self)._is_failure_output(output)
+
+ def _ignore_expected_line(self, line):
+ return (
+ line.startswith('#') or
+ super(OutProc, self)._ignore_expected_line(line)
+ )
diff --git a/src/v8/tools/testrunner/server/__init__.py b/src/v8/tools/testrunner/server/__init__.py
deleted file mode 100644
index 202a262..0000000
--- a/src/v8/tools/testrunner/server/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/src/v8/tools/testrunner/server/compression.py b/src/v8/tools/testrunner/server/compression.py
deleted file mode 100644
index d5ed415..0000000
--- a/src/v8/tools/testrunner/server/compression.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import cStringIO as StringIO
-try:
- import ujson as json
-except ImportError:
- import json
-import os
-import struct
-import zlib
-
-from . import constants
-
-def Send(obj, sock):
- """
- Sends a JSON encodable object over the specified socket (zlib-compressed).
- """
- obj = json.dumps(obj)
- compression_level = 2 # 1 = fastest, 9 = best compression
- compressed = zlib.compress(obj, compression_level)
- payload = struct.pack('>i', len(compressed)) + compressed
- sock.sendall(payload)
-
-
-class Receiver(object):
- def __init__(self, sock):
- self.sock = sock
- self.data = StringIO.StringIO()
- self.datalength = 0
- self._next = self._GetNext()
-
- def IsDone(self):
-    return self._next is None
-
- def Current(self):
- return self._next
-
-  def Advance(self):
-    self._next = self._GetNext()
-
-  def _GetNext(self):
-    while self.datalength < constants.SIZE_T:
-      chunk = self.sock.recv(8192)
-      if not chunk: return None
-      self._AppendData(chunk)
-    size = self._PopData(constants.SIZE_T)
-    size = struct.unpack(">i", size)[0]
-    while self.datalength < size:
-      chunk = self.sock.recv(8192)
-      if not chunk: return None
-      self._AppendData(chunk)
-    result = self._PopData(size)
-    result = zlib.decompress(result)
-    result = json.loads(result)
-    if result == constants.END_OF_STREAM:
-      return None
-    return result
-
- def _AppendData(self, new):
- self.data.seek(0, os.SEEK_END)
- self.data.write(new)
- self.datalength += len(new)
-
- def _PopData(self, length):
- self.data.seek(0)
- chunk = self.data.read(length)
- remaining = self.data.read()
- self.data.close()
- self.data = StringIO.StringIO()
- self.data.write(remaining)
- assert self.datalength - length == len(remaining)
- self.datalength = len(remaining)
- return chunk
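The module above defines the runner's whole wire format: each message is a
4-byte big-endian length header followed by zlib-compressed JSON. A minimal
Python 3 sketch of the same framing (the helper names send_msg/recv_msg are
illustrative, not part of the module):

import json
import socket
import struct
import zlib

def send_msg(sock, obj):
    # Frame: 4-byte big-endian length, then zlib-compressed JSON.
    payload = zlib.compress(json.dumps(obj).encode("utf-8"), 2)
    sock.sendall(struct.pack(">i", len(payload)) + payload)

def _recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    (size,) = struct.unpack(">i", _recv_exact(sock, 4))
    return json.loads(zlib.decompress(_recv_exact(sock, size)))

a, b = socket.socketpair()
send_msg(a, ["get status"])
print(recv_msg(b))  # ['get status']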
diff --git a/src/v8/tools/testrunner/server/constants.py b/src/v8/tools/testrunner/server/constants.py
deleted file mode 100644
index 5aefcba..0000000
--- a/src/v8/tools/testrunner/server/constants.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-CLIENT_PORT = 9991 # Port for the local client to connect to.
-PEER_PORT = 9992 # Port for peers on the network to connect to.
-PRESENCE_PORT = 9993 # Port for presence daemon.
-STATUS_PORT = 9994 # Port for network requests not related to workpackets.
-
-END_OF_STREAM = "end of dtest stream" # Marker for end of network requests.
-SIZE_T = 4 # Number of bytes used for network request size header.
-
-# Messages understood by the local request handler.
-ADD_TRUSTED = "add trusted"
-INFORM_DURATION = "inform about duration"
-REQUEST_PEERS = "get peers"
-UNRESPONSIVE_PEER = "unresponsive peer"
-REQUEST_PUBKEY_FINGERPRINT = "get pubkey fingerprint"
-REQUEST_STATUS = "get status"
-UPDATE_PERF = "update performance"
-
-# Messages understood by the status request handler.
-LIST_TRUSTED_PUBKEYS = "list trusted pubkeys"
-GET_SIGNED_PUBKEY = "pass on signed pubkey"
-NOTIFY_NEW_TRUSTED = "new trusted peer"
-TRUST_YOU_NOW = "trust you now"
-DO_YOU_TRUST = "do you trust"
diff --git a/src/v8/tools/testrunner/server/daemon.py b/src/v8/tools/testrunner/server/daemon.py
deleted file mode 100644
index baa66fb..0000000
--- a/src/v8/tools/testrunner/server/daemon.py
+++ /dev/null
@@ -1,147 +0,0 @@
-#!/usr/bin/env python
-
-# This code has been written by Sander Marechal and published at:
-# http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
-# where the author has placed it in the public domain (see comment #6 at
-# http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/#c6
-# ).
-# Some minor modifications have been made by the V8 authors. The work remains
-# in the public domain.
-
-import atexit
-import os
-from signal import SIGTERM
-from signal import SIGINT
-import sys
-import time
-
-
-class Daemon(object):
- """
- A generic daemon class.
-
- Usage: subclass the Daemon class and override the run() method
- """
- def __init__(self, pidfile, stdin='/dev/null',
- stdout='/dev/null', stderr='/dev/null'):
- self.stdin = stdin
- self.stdout = stdout
- self.stderr = stderr
- self.pidfile = pidfile
-
- def daemonize(self):
- """
- do the UNIX double-fork magic, see Stevens' "Advanced
- Programming in the UNIX Environment" for details (ISBN 0201563177)
- http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
- """
- try:
- pid = os.fork()
- if pid > 0:
- # exit first parent
- sys.exit(0)
- except OSError, e:
- sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
- sys.exit(1)
-
- # decouple from parent environment
- os.chdir("/")
- os.setsid()
- os.umask(0)
-
- # do second fork
- try:
- pid = os.fork()
- if pid > 0:
- # exit from second parent
- sys.exit(0)
- except OSError, e:
- sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
- sys.exit(1)
-
- # redirect standard file descriptors
- sys.stdout.flush()
- sys.stderr.flush()
- si = file(self.stdin, 'r')
- so = file(self.stdout, 'a+')
- se = file(self.stderr, 'a+', 0)
- # TODO: (debug) re-enable this!
- #os.dup2(si.fileno(), sys.stdin.fileno())
- #os.dup2(so.fileno(), sys.stdout.fileno())
- #os.dup2(se.fileno(), sys.stderr.fileno())
-
- # write pidfile
- atexit.register(self.delpid)
- pid = str(os.getpid())
- file(self.pidfile, 'w+').write("%s\n" % pid)
-
- def delpid(self):
- os.remove(self.pidfile)
-
- def start(self):
- """
- Start the daemon
- """
- # Check for a pidfile to see if the daemon already runs
- try:
- pf = file(self.pidfile, 'r')
- pid = int(pf.read().strip())
- pf.close()
- except IOError:
- pid = None
-
- if pid:
- message = "pidfile %s already exist. Daemon already running?\n"
- sys.stderr.write(message % self.pidfile)
- sys.exit(1)
-
- # Start the daemon
- self.daemonize()
- self.run()
-
- def stop(self):
- """
- Stop the daemon
- """
- # Get the pid from the pidfile
- try:
- pf = file(self.pidfile, 'r')
- pid = int(pf.read().strip())
- pf.close()
- except IOError:
- pid = None
-
- if not pid:
- message = "pidfile %s does not exist. Daemon not running?\n"
- sys.stderr.write(message % self.pidfile)
- return # not an error in a restart
-
- # Try killing the daemon process
- try:
- # Give the process a one-second chance to exit gracefully.
- os.kill(pid, SIGINT)
- time.sleep(1)
- while 1:
- os.kill(pid, SIGTERM)
- time.sleep(0.1)
- except OSError, err:
- err = str(err)
- if err.find("No such process") > 0:
- if os.path.exists(self.pidfile):
- os.remove(self.pidfile)
- else:
- print str(err)
- sys.exit(1)
-
- def restart(self):
- """
- Restart the daemon
- """
- self.stop()
- self.start()
-
- def run(self):
- """
- You should override this method when you subclass Daemon. It will be
- called after the process has been daemonized by start() or restart().
- """
diff --git a/src/v8/tools/testrunner/server/local_handler.py b/src/v8/tools/testrunner/server/local_handler.py
deleted file mode 100644
index 3b3ac49..0000000
--- a/src/v8/tools/testrunner/server/local_handler.py
+++ /dev/null
@@ -1,119 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import socket
-import SocketServer
-import StringIO
-
-from . import compression
-from . import constants
-
-
-def LocalQuery(query):
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- code = sock.connect_ex(("localhost", constants.CLIENT_PORT))
- if code != 0: return None
- compression.Send(query, sock)
- compression.Send(constants.END_OF_STREAM, sock)
- rec = compression.Receiver(sock)
- data = None
- while not rec.IsDone():
- data = rec.Current()
- assert data[0] == query[0]
- data = data[1]
- rec.Advance()
- sock.close()
- return data
-
-
-class LocalHandler(SocketServer.BaseRequestHandler):
- def handle(self):
- rec = compression.Receiver(self.request)
- while not rec.IsDone():
- data = rec.Current()
- action = data[0]
-
- if action == constants.REQUEST_PEERS:
- with self.server.daemon.peer_list_lock:
- response = [ p.Pack() for p in self.server.daemon.peers
- if p.trusting_me ]
- compression.Send([action, response], self.request)
-
- elif action == constants.UNRESPONSIVE_PEER:
- self.server.daemon.DeletePeer(data[1])
-
- elif action == constants.REQUEST_PUBKEY_FINGERPRINT:
- compression.Send([action, self.server.daemon.pubkey_fingerprint],
- self.request)
-
- elif action == constants.REQUEST_STATUS:
- compression.Send([action, self._GetStatusMessage()], self.request)
-
- elif action == constants.ADD_TRUSTED:
- fingerprint = self.server.daemon.CopyToTrusted(data[1])
- compression.Send([action, fingerprint], self.request)
-
- elif action == constants.INFORM_DURATION:
- test_key = data[1]
- test_duration = data[2]
- arch = data[3]
- mode = data[4]
- self.server.daemon.AddPerfData(test_key, test_duration, arch, mode)
-
- elif action == constants.UPDATE_PERF:
- address = data[1]
- perf = data[2]
-        self.server.daemon.UpdatePeerPerformance(address, perf)
-
- rec.Advance()
- compression.Send(constants.END_OF_STREAM, self.request)
-
- def _GetStatusMessage(self):
- sio = StringIO.StringIO()
- sio.write("Peers:\n")
- with self.server.daemon.peer_list_lock:
- for p in self.server.daemon.peers:
- sio.write("%s\n" % p)
- sio.write("My own jobs: %d, relative performance: %.2f\n" %
- (self.server.daemon.jobs, self.server.daemon.relative_perf))
- # Low-priority TODO: Return more information. Ideas:
- # - currently running anything,
- # - time since last job,
- # - time since last repository fetch
- # - number of workpackets/testcases handled since startup
- # - slowest test(s)
- result = sio.getvalue()
- sio.close()
- return result
-
-
-class LocalSocketServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
- def __init__(self, daemon):
- SocketServer.TCPServer.__init__(self, ("localhost", constants.CLIENT_PORT),
- LocalHandler)
- self.daemon = daemon
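LocalQuery() is what the client-side tooling uses to talk to the daemon on
the same machine; returning None when the connect fails doubles as an
"is the server running?" probe. A usage sketch (the real call sites live in
test-server.py and the network runner, outside this excerpt):

from testrunner.server import constants, local_handler

status = local_handler.LocalQuery([constants.REQUEST_STATUS])
if status is None:
    print("Server not running.")
else:
    print(status)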
diff --git a/src/v8/tools/testrunner/server/main.py b/src/v8/tools/testrunner/server/main.py
deleted file mode 100644
index c237e1a..0000000
--- a/src/v8/tools/testrunner/server/main.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import multiprocessing
-import os
-import shutil
-import subprocess
-import threading
-import time
-
-from . import daemon
-from . import local_handler
-from . import presence_handler
-from . import signatures
-from . import status_handler
-from . import work_handler
-from ..network import perfdata
-
-
-class Server(daemon.Daemon):
-
- def __init__(self, pidfile, root, stdin="/dev/null",
- stdout="/dev/null", stderr="/dev/null"):
- super(Server, self).__init__(pidfile, stdin, stdout, stderr)
- self.root = root
- self.local_handler = None
- self.local_handler_thread = None
- self.work_handler = None
- self.work_handler_thread = None
- self.status_handler = None
- self.status_handler_thread = None
- self.presence_daemon = None
- self.presence_daemon_thread = None
- self.peers = []
- self.jobs = multiprocessing.cpu_count()
- self.peer_list_lock = threading.Lock()
- self.perf_data_lock = None
- self.presence_daemon_lock = None
- self.datadir = os.path.join(self.root, "data")
- pubkey_fingerprint_filename = os.path.join(self.datadir, "mypubkey")
- with open(pubkey_fingerprint_filename) as f:
- self.pubkey_fingerprint = f.read().strip()
- self.relative_perf_filename = os.path.join(self.datadir, "myperf")
- if os.path.exists(self.relative_perf_filename):
- with open(self.relative_perf_filename) as f:
- try:
- self.relative_perf = float(f.read())
- except:
- self.relative_perf = 1.0
- else:
- self.relative_perf = 1.0
-
- def run(self):
- os.nice(20)
- self.ip = presence_handler.GetOwnIP()
- self.perf_data_manager = perfdata.PerfDataManager(self.datadir)
- self.perf_data_lock = threading.Lock()
-
- self.local_handler = local_handler.LocalSocketServer(self)
- self.local_handler_thread = threading.Thread(
- target=self.local_handler.serve_forever)
- self.local_handler_thread.start()
-
- self.work_handler = work_handler.WorkSocketServer(self)
- self.work_handler_thread = threading.Thread(
- target=self.work_handler.serve_forever)
- self.work_handler_thread.start()
-
- self.status_handler = status_handler.StatusSocketServer(self)
- self.status_handler_thread = threading.Thread(
- target=self.status_handler.serve_forever)
- self.status_handler_thread.start()
-
- self.presence_daemon = presence_handler.PresenceDaemon(self)
- self.presence_daemon_thread = threading.Thread(
- target=self.presence_daemon.serve_forever)
- self.presence_daemon_thread.start()
-
- self.presence_daemon.FindPeers()
- time.sleep(0.5) # Give those peers some time to reply.
-
- with self.peer_list_lock:
- for p in self.peers:
- if p.address == self.ip: continue
- status_handler.RequestTrustedPubkeys(p, self)
-
- while True:
- try:
- self.PeriodicTasks()
- time.sleep(60)
- except Exception, e:
- print("MAIN LOOP EXCEPTION: %s" % e)
- self.Shutdown()
- break
- except KeyboardInterrupt:
- self.Shutdown()
- break
-
- def Shutdown(self):
- with open(self.relative_perf_filename, "w") as f:
- f.write("%s" % self.relative_perf)
- self.presence_daemon.shutdown()
- self.presence_daemon.server_close()
- self.local_handler.shutdown()
- self.local_handler.server_close()
- self.work_handler.shutdown()
- self.work_handler.server_close()
- self.status_handler.shutdown()
- self.status_handler.server_close()
-
- def PeriodicTasks(self):
- # If we know peers we don't trust, see if someone else trusts them.
- with self.peer_list_lock:
- for p in self.peers:
- if p.trusted: continue
- if self.IsTrusted(p.pubkey):
- p.trusted = True
- status_handler.ITrustYouNow(p)
- continue
- for p2 in self.peers:
- if not p2.trusted: continue
- status_handler.TryTransitiveTrust(p2, p.pubkey, self)
- # TODO: Ping for more peers waiting to be discovered.
- # TODO: Update the checkout (if currently idle).
-
- def AddPeer(self, peer):
- with self.peer_list_lock:
- for p in self.peers:
- if p.address == peer.address:
- return
- self.peers.append(peer)
- if peer.trusted:
- status_handler.ITrustYouNow(peer)
-
- def DeletePeer(self, peer_address):
- with self.peer_list_lock:
- for i in xrange(len(self.peers)):
- if self.peers[i].address == peer_address:
- del self.peers[i]
- return
-
- def MarkPeerAsTrusting(self, peer_address):
- with self.peer_list_lock:
- for p in self.peers:
- if p.address == peer_address:
- p.trusting_me = True
- break
-
- def UpdatePeerPerformance(self, peer_address, performance):
- with self.peer_list_lock:
- for p in self.peers:
- if p.address == peer_address:
- p.relative_performance = performance
-
- def CopyToTrusted(self, pubkey_filename):
- with open(pubkey_filename, "r") as f:
- lines = f.readlines()
- fingerprint = lines[-1].strip()
- target_filename = self._PubkeyFilename(fingerprint)
- shutil.copy(pubkey_filename, target_filename)
- with self.peer_list_lock:
- for peer in self.peers:
- if peer.address == self.ip: continue
- if peer.pubkey == fingerprint:
- status_handler.ITrustYouNow(peer)
- else:
- result = self.SignTrusted(fingerprint)
- status_handler.NotifyNewTrusted(peer, result)
- return fingerprint
-
- def _PubkeyFilename(self, pubkey_fingerprint):
- return os.path.join(self.root, "trusted", "%s.pem" % pubkey_fingerprint)
-
- def IsTrusted(self, pubkey_fingerprint):
- return os.path.exists(self._PubkeyFilename(pubkey_fingerprint))
-
- def ListTrusted(self):
- path = os.path.join(self.root, "trusted")
- if not os.path.exists(path): return []
- return [ f[:-4] for f in os.listdir(path) if f.endswith(".pem") ]
-
- def SignTrusted(self, pubkey_fingerprint):
- if not self.IsTrusted(pubkey_fingerprint):
- return []
- filename = self._PubkeyFilename(pubkey_fingerprint)
- result = signatures.ReadFileAndSignature(filename) # Format: [key, sig].
- return [pubkey_fingerprint, result[0], result[1], self.pubkey_fingerprint]
-
- def AcceptNewTrusted(self, data):
- # The format of |data| matches the return value of |SignTrusted()|.
- if not data: return
- fingerprint = data[0]
- pubkey = data[1]
- signature = data[2]
- signer = data[3]
- if not self.IsTrusted(signer):
- return
- if self.IsTrusted(fingerprint):
- return # Already trusted.
- filename = self._PubkeyFilename(fingerprint)
- signer_pubkeyfile = self._PubkeyFilename(signer)
- if not signatures.VerifySignature(filename, pubkey, signature,
- signer_pubkeyfile):
- return
- return # Nothing more to do.
-
- def AddPerfData(self, test_key, duration, arch, mode):
- data_store = self.perf_data_manager.GetStore(arch, mode)
- data_store.RawUpdatePerfData(str(test_key), duration)
-
- def CompareOwnPerf(self, test, arch, mode):
- data_store = self.perf_data_manager.GetStore(arch, mode)
- observed = data_store.FetchPerfData(test)
- if not observed: return
- own_perf_estimate = observed / test.duration
- with self.perf_data_lock:
- kLearnRateLimiter = 9999
- self.relative_perf *= kLearnRateLimiter
- self.relative_perf += own_perf_estimate
- self.relative_perf /= (kLearnRateLimiter + 1)
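CompareOwnPerf() maintains relative_perf as an exponential moving average
with a deliberately tiny learning rate: new = (9999 * old + estimate) / 10000,
so each observation carries a weight of only 1/10000 and a single unusually
fast or slow test barely moves the estimate. For example:

old = 1.0
estimate = 2.0  # this run finished twice as fast as the recorded duration
new = (9999 * old + estimate) / 10000.0
print(new)      # 1.0001 -- it takes thousands of samples to converge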
diff --git a/src/v8/tools/testrunner/server/presence_handler.py b/src/v8/tools/testrunner/server/presence_handler.py
deleted file mode 100644
index 1dc2ef1..0000000
--- a/src/v8/tools/testrunner/server/presence_handler.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import socket
-import SocketServer
-import threading
-try:
- import ujson as json
-except:
- import json
-
-from . import constants
-from ..objects import peer
-
-
-STARTUP_REQUEST = "V8 test peer starting up"
-STARTUP_RESPONSE = "Let's rock some tests!"
-EXIT_REQUEST = "V8 testing peer going down"
-
-
-def GetOwnIP():
- s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
- s.connect(("8.8.8.8", 80))
- ip = s.getsockname()[0]
- s.close()
- return ip
-
-
-class PresenceHandler(SocketServer.BaseRequestHandler):
-
- def handle(self):
- data = json.loads(self.request[0].strip())
-
- if data[0] == STARTUP_REQUEST:
- jobs = data[1]
- relative_perf = data[2]
- pubkey_fingerprint = data[3]
- trusted = self.server.daemon.IsTrusted(pubkey_fingerprint)
- response = [STARTUP_RESPONSE, self.server.daemon.jobs,
- self.server.daemon.relative_perf,
- self.server.daemon.pubkey_fingerprint, trusted]
- response = json.dumps(response)
- self.server.SendTo(self.client_address[0], response)
- p = peer.Peer(self.client_address[0], jobs, relative_perf,
- pubkey_fingerprint)
- p.trusted = trusted
- self.server.daemon.AddPeer(p)
-
- elif data[0] == STARTUP_RESPONSE:
- jobs = data[1]
- perf = data[2]
- pubkey_fingerprint = data[3]
- p = peer.Peer(self.client_address[0], jobs, perf, pubkey_fingerprint)
- p.trusted = self.server.daemon.IsTrusted(pubkey_fingerprint)
- p.trusting_me = data[4]
- self.server.daemon.AddPeer(p)
-
- elif data[0] == EXIT_REQUEST:
- self.server.daemon.DeletePeer(self.client_address[0])
- if self.client_address[0] == self.server.daemon.ip:
- self.server.shutdown_lock.release()
-
-
-class PresenceDaemon(SocketServer.ThreadingMixIn, SocketServer.UDPServer):
- def __init__(self, daemon):
- self.daemon = daemon
- address = (daemon.ip, constants.PRESENCE_PORT)
- SocketServer.UDPServer.__init__(self, address, PresenceHandler)
- self.shutdown_lock = threading.Lock()
-
- def shutdown(self):
- self.shutdown_lock.acquire()
- self.SendToAll(json.dumps([EXIT_REQUEST]))
- self.shutdown_lock.acquire()
- self.shutdown_lock.release()
- SocketServer.UDPServer.shutdown(self)
-
- def SendTo(self, target, message):
- sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
- sock.sendto(message, (target, constants.PRESENCE_PORT))
- sock.close()
-
- def SendToAll(self, message):
- sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
- ip = self.daemon.ip.split(".")
- for i in range(1, 254):
- ip[-1] = str(i)
- sock.sendto(message, (".".join(ip), constants.PRESENCE_PORT))
- sock.close()
-
- def FindPeers(self):
- request = [STARTUP_REQUEST, self.daemon.jobs, self.daemon.relative_perf,
- self.daemon.pubkey_fingerprint]
- request = json.dumps(request)
- self.SendToAll(request)
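GetOwnIP() relies on a common trick: connect() on a UDP socket sends no
packets, but it does make the kernel choose the outgoing interface, whose
address getsockname() then reports. SendToAll() then brute-forces discovery
by sending the presence datagram to every host .1 through .253 on that /24.
A Python 3 sketch of the address trick:

import socket

def get_own_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # no packet sent; just picks a route
        return s.getsockname()[0]
    finally:
        s.close()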
diff --git a/src/v8/tools/testrunner/server/signatures.py b/src/v8/tools/testrunner/server/signatures.py
deleted file mode 100644
index 9957a18..0000000
--- a/src/v8/tools/testrunner/server/signatures.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import base64
-import os
-import subprocess
-
-
-def ReadFileAndSignature(filename):
- with open(filename, "rb") as f:
- file_contents = base64.b64encode(f.read())
- signature_file = filename + ".signature"
- if (not os.path.exists(signature_file) or
- os.path.getmtime(signature_file) < os.path.getmtime(filename)):
- private_key = "~/.ssh/v8_dtest"
- code = subprocess.call("openssl dgst -out %s -sign %s %s" %
- (signature_file, private_key, filename),
- shell=True)
- if code != 0: return [None, code]
- with open(signature_file) as f:
- signature = base64.b64encode(f.read())
- return [file_contents, signature]
-
-
-def VerifySignature(filename, file_contents, signature, pubkeyfile):
- with open(filename, "wb") as f:
- f.write(base64.b64decode(file_contents))
- signature_file = filename + ".foreign_signature"
- with open(signature_file, "wb") as f:
- f.write(base64.b64decode(signature))
- code = subprocess.call("openssl dgst -verify %s -signature %s %s" %
- (pubkeyfile, signature_file, filename),
- shell=True)
- matched = (code == 0)
- if not matched:
- os.remove(signature_file)
- os.remove(filename)
- return matched
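Both helpers shell out to openssl dgst. Run by hand, the equivalent sign and
verify operations look like the following sketch (the file names are
placeholders; only the private-key path is the hard-coded default from
above):

import os
import subprocess

key = os.path.expanduser("~/.ssh/v8_dtest")  # signing key, as above
subprocess.check_call(["openssl", "dgst", "-sign", key,
                       "-out", "file.signature", "file"])
code = subprocess.call(["openssl", "dgst", "-verify", "pubkey.pem",
                        "-signature", "file.signature", "file"])
print("ok" if code == 0 else "verification failed")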
diff --git a/src/v8/tools/testrunner/server/status_handler.py b/src/v8/tools/testrunner/server/status_handler.py
deleted file mode 100644
index 3f2271d..0000000
--- a/src/v8/tools/testrunner/server/status_handler.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import socket
-import SocketServer
-
-from . import compression
-from . import constants
-
-
-def _StatusQuery(peer, query):
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- code = sock.connect_ex((peer.address, constants.STATUS_PORT))
- if code != 0:
- # TODO(jkummerow): disconnect (after 3 failures?)
- return
- compression.Send(query, sock)
- compression.Send(constants.END_OF_STREAM, sock)
- rec = compression.Receiver(sock)
- data = None
- while not rec.IsDone():
- data = rec.Current()
- assert data[0] == query[0]
- data = data[1]
- rec.Advance()
- sock.close()
- return data
-
-
-def RequestTrustedPubkeys(peer, server):
- pubkey_list = _StatusQuery(peer, [constants.LIST_TRUSTED_PUBKEYS])
- for pubkey in pubkey_list:
- if server.IsTrusted(pubkey): continue
- result = _StatusQuery(peer, [constants.GET_SIGNED_PUBKEY, pubkey])
- server.AcceptNewTrusted(result)
-
-
-def NotifyNewTrusted(peer, data):
- _StatusQuery(peer, [constants.NOTIFY_NEW_TRUSTED] + data)
-
-
-def ITrustYouNow(peer):
- _StatusQuery(peer, [constants.TRUST_YOU_NOW])
-
-
-def TryTransitiveTrust(peer, pubkey, server):
- if _StatusQuery(peer, [constants.DO_YOU_TRUST, pubkey]):
- result = _StatusQuery(peer, [constants.GET_SIGNED_PUBKEY, pubkey])
- server.AcceptNewTrusted(result)
-
-
-class StatusHandler(SocketServer.BaseRequestHandler):
- def handle(self):
- rec = compression.Receiver(self.request)
- while not rec.IsDone():
- data = rec.Current()
- action = data[0]
-
- if action == constants.LIST_TRUSTED_PUBKEYS:
- response = self.server.daemon.ListTrusted()
- compression.Send([action, response], self.request)
-
- elif action == constants.GET_SIGNED_PUBKEY:
- response = self.server.daemon.SignTrusted(data[1])
- compression.Send([action, response], self.request)
-
- elif action == constants.NOTIFY_NEW_TRUSTED:
- self.server.daemon.AcceptNewTrusted(data[1:])
- pass # No response.
-
- elif action == constants.TRUST_YOU_NOW:
- self.server.daemon.MarkPeerAsTrusting(self.client_address[0])
- pass # No response.
-
- elif action == constants.DO_YOU_TRUST:
- response = self.server.daemon.IsTrusted(data[1])
- compression.Send([action, response], self.request)
-
- rec.Advance()
- compression.Send(constants.END_OF_STREAM, self.request)
-
-
-class StatusSocketServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
- def __init__(self, daemon):
- address = (daemon.ip, constants.STATUS_PORT)
- SocketServer.TCPServer.__init__(self, address, StatusHandler)
- self.daemon = daemon
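Together these queries implement the transitive-trust handshake: if A trusts
B but not C, A asks B DO_YOU_TRUST C and, on a positive answer, fetches C's
public key signed by B. A hypothetical trace from A's point of view (the
bundle layout comes from SignTrusted() in main.py):

if _StatusQuery(peer_b, [constants.DO_YOU_TRUST, pubkey_c]):
    bundle = _StatusQuery(peer_b, [constants.GET_SIGNED_PUBKEY, pubkey_c])
    # bundle == [fingerprint_c, key_c, signature_by_b, fingerprint_b]
    server.AcceptNewTrusted(bundle)  # verifies B's signature, then trusts C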
diff --git a/src/v8/tools/testrunner/server/work_handler.py b/src/v8/tools/testrunner/server/work_handler.py
deleted file mode 100644
index 6bf7d43..0000000
--- a/src/v8/tools/testrunner/server/work_handler.py
+++ /dev/null
@@ -1,150 +0,0 @@
-# Copyright 2012 the V8 project authors. All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials provided
-# with the distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import os
-import SocketServer
-import stat
-import subprocess
-import threading
-
-from . import compression
-from . import constants
-from . import signatures
-from ..network import endpoint
-from ..objects import workpacket
-
-
-class WorkHandler(SocketServer.BaseRequestHandler):
-
- def handle(self):
- rec = compression.Receiver(self.request)
- while not rec.IsDone():
- data = rec.Current()
- with self.server.job_lock:
- self._WorkOnWorkPacket(data)
- rec.Advance()
-
- def _WorkOnWorkPacket(self, data):
- server_root = self.server.daemon.root
- v8_root = os.path.join(server_root, "v8")
- os.chdir(v8_root)
- packet = workpacket.WorkPacket.Unpack(data)
- self.ctx = packet.context
- self.ctx.shell_dir = os.path.join("out",
- "%s.%s" % (self.ctx.arch, self.ctx.mode))
- if not os.path.isdir(self.ctx.shell_dir):
- os.makedirs(self.ctx.shell_dir)
- for binary in packet.binaries:
- if not self._UnpackBinary(binary, packet.pubkey_fingerprint):
- return
-
- if not self._CheckoutRevision(packet.base_revision):
- return
-
- if not self._ApplyPatch(packet.patch):
- return
-
- tests = packet.tests
- endpoint.Execute(v8_root, self.ctx, tests, self.request, self.server.daemon)
- self._SendResponse()
-
- def _SendResponse(self, error_message=None):
- try:
- if error_message:
- compression.Send([[-1, error_message]], self.request)
- compression.Send(constants.END_OF_STREAM, self.request)
- return
- except Exception, e:
- pass # Peer is gone. There's nothing we can do.
- # Clean up.
- self._Call("git checkout -f")
- self._Call("git clean -f -d")
- self._Call("rm -rf %s" % self.ctx.shell_dir)
-
- def _UnpackBinary(self, binary, pubkey_fingerprint):
- binary_name = binary["name"]
- if binary_name == "libv8.so":
- libdir = os.path.join(self.ctx.shell_dir, "lib.target")
- if not os.path.exists(libdir): os.makedirs(libdir)
- target = os.path.join(libdir, binary_name)
- else:
- target = os.path.join(self.ctx.shell_dir, binary_name)
- pubkeyfile = "../trusted/%s.pem" % pubkey_fingerprint
- if not signatures.VerifySignature(target, binary["blob"],
- binary["sign"], pubkeyfile):
- self._SendResponse("Signature verification failed")
- return False
- os.chmod(target, stat.S_IRWXU)
- return True
-
- def _CheckoutRevision(self, base_svn_revision):
- get_hash_cmd = (
- "git log -1 --format=%%H --remotes --grep='^git-svn-id:.*@%s'" %
- base_svn_revision)
- try:
- base_revision = subprocess.check_output(get_hash_cmd, shell=True)
- if not base_revision: raise ValueError
- except:
- self._Call("git fetch")
- try:
- base_revision = subprocess.check_output(get_hash_cmd, shell=True)
- if not base_revision: raise ValueError
- except:
- self._SendResponse("Base revision not found.")
- return False
- code = self._Call("git checkout -f %s" % base_revision)
- if code != 0:
- self._SendResponse("Error trying to check out base revision.")
- return False
- code = self._Call("git clean -f -d")
- if code != 0:
- self._SendResponse("Failed to reset checkout")
- return False
- return True
-
- def _ApplyPatch(self, patch):
- if not patch: return True # Just skip if the patch is empty.
- patchfilename = "_dtest_incoming_patch.patch"
- with open(patchfilename, "w") as f:
- f.write(patch)
- code = self._Call("git apply %s" % patchfilename)
- if code != 0:
- self._SendResponse("Error applying patch.")
- return False
- return True
-
- def _Call(self, cmd):
- return subprocess.call(cmd, shell=True)
-
-
-class WorkSocketServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
- def __init__(self, daemon):
- address = (daemon.ip, constants.PEER_PORT)
- SocketServer.TCPServer.__init__(self, address, WorkHandler)
- self.job_lock = threading.Lock()
- self.daemon = daemon
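_CheckoutRevision() maps an SVN revision number to a git commit by grepping
for the git-svn-id footer that git-svn appends to commit messages. What the
command expands to for an illustrative revision:

base_svn_revision = "12345"  # illustrative
get_hash_cmd = (
    "git log -1 --format=%%H --remotes --grep='^git-svn-id:.*@%s'" %
    base_svn_revision)
print(get_hash_cmd)
# git log -1 --format=%H --remotes --grep='^git-svn-id:.*@12345'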
diff --git a/src/v8/tools/testrunner/standard_runner.py b/src/v8/tools/testrunner/standard_runner.py
new file mode 100755
index 0000000..3be2099
--- /dev/null
+++ b/src/v8/tools/testrunner/standard_runner.py
@@ -0,0 +1,599 @@
+#!/usr/bin/env python
+#
+# Copyright 2017 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+
+from collections import OrderedDict
+from os.path import join
+import multiprocessing
+import os
+import random
+import shlex
+import subprocess
+import sys
+import time
+
+# Adds testrunner to the path, hence it has to be imported at the beginning.
+import base_runner
+
+from testrunner.local import execution
+from testrunner.local import progress
+from testrunner.local import testsuite
+from testrunner.local import utils
+from testrunner.local import verbose
+from testrunner.local.variants import ALL_VARIANTS
+from testrunner.objects import context
+from testrunner.objects import predictable
+from testrunner.testproc.execution import ExecutionProc
+from testrunner.testproc.filter import StatusFileFilterProc, NameFilterProc
+from testrunner.testproc.loader import LoadProc
+from testrunner.testproc.progress import (VerboseProgressIndicator,
+ ResultsTracker,
+ TestsCounter)
+from testrunner.testproc.rerun import RerunProc
+from testrunner.testproc.variant import VariantProc
+
+
+TIMEOUT_DEFAULT = 60
+
+# Variants ordered by expected runtime (slowest first).
+VARIANTS = ["default"]
+
+MORE_VARIANTS = [
+ "stress",
+ "stress_incremental_marking",
+ "nooptimization",
+ "stress_background_compile",
+ "wasm_traps",
+]
+
+VARIANT_ALIASES = {
+ # The default for developer workstations.
+ "dev": VARIANTS,
+ # Additional variants, run on all bots.
+ "more": MORE_VARIANTS,
+  # Shortcut for the two above ("more" first - it has the longer-running tests).
+ "exhaustive": MORE_VARIANTS + VARIANTS,
+ # Additional variants, run on a subset of bots.
+ "extra": ["future", "liftoff", "trusted"],
+}
+
+GC_STRESS_FLAGS = ["--gc-interval=500", "--stress-compaction",
+ "--concurrent-recompilation-queue-length=64",
+ "--concurrent-recompilation-delay=500",
+ "--concurrent-recompilation"]
+
+# Double the timeout for these:
+SLOW_ARCHS = ["arm",
+ "mips",
+ "mipsel",
+ "mips64",
+ "mips64el",
+ "s390",
+ "s390x",
+ "arm64"]
+
+PREDICTABLE_WRAPPER = os.path.join(
+ base_runner.BASE_DIR, 'tools', 'predictable_wrapper.py')
+
+
+class StandardTestRunner(base_runner.BaseTestRunner):
+ def __init__(self, *args, **kwargs):
+ super(StandardTestRunner, self).__init__(*args, **kwargs)
+
+ self.sancov_dir = None
+
+ def _get_default_suite_names(self):
+ return ['default']
+
+ def _do_execute(self, suites, args, options):
+ if options.swarming:
+      # Swarming doesn't print how isolated commands are called. Let's make
+ # this less cryptic by printing it ourselves.
+ print ' '.join(sys.argv)
+
+ if utils.GuessOS() == "macos":
+ # TODO(machenbach): Temporary output for investigating hanging test
+ # driver on mac.
+ print "V8 related processes running on this host:"
+ try:
+ print subprocess.check_output(
+ "ps -e | egrep 'd8|cctest|unittests'", shell=True)
+ except Exception:
+ pass
+
+ return self._execute(args, options, suites)
+
+ def _add_parser_options(self, parser):
+ parser.add_option("--sancov-dir",
+ help="Directory where to collect coverage data")
+ parser.add_option("--cfi-vptr",
+ help="Run tests with UBSAN cfi_vptr option.",
+ default=False, action="store_true")
+ parser.add_option("--novfp3",
+ help="Indicates that V8 was compiled without VFP3"
+ " support",
+ default=False, action="store_true")
+ parser.add_option("--cat", help="Print the source of the tests",
+ default=False, action="store_true")
+ parser.add_option("--slow-tests",
+ help="Regard slow tests (run|skip|dontcare)",
+ default="dontcare")
+ parser.add_option("--pass-fail-tests",
+ help="Regard pass|fail tests (run|skip|dontcare)",
+ default="dontcare")
+ parser.add_option("--gc-stress",
+ help="Switch on GC stress mode",
+ default=False, action="store_true")
+ parser.add_option("--command-prefix",
+ help="Prepended to each shell command used to run a"
+ " test",
+ default="")
+ parser.add_option("--extra-flags",
+ help="Additional flags to pass to each test command",
+ action="append", default=[])
+ parser.add_option("--infra-staging", help="Use new test runner features",
+ default=False, action="store_true")
+ parser.add_option("--isolates", help="Whether to test isolates",
+ default=False, action="store_true")
+ parser.add_option("-j", help="The number of parallel tasks to run",
+ default=0, type="int")
+ parser.add_option("--no-harness", "--noharness",
+ help="Run without test harness of a given suite",
+ default=False, action="store_true")
+ parser.add_option("--no-presubmit", "--nopresubmit",
+ help='Skip presubmit checks (deprecated)',
+ default=False, dest="no_presubmit", action="store_true")
+ parser.add_option("--no-sorting", "--nosorting",
+ help="Don't sort tests according to duration of last"
+ " run.",
+ default=False, dest="no_sorting", action="store_true")
+ parser.add_option("--no-variants", "--novariants",
+ help="Deprecated. "
+ "Equivalent to passing --variants=default",
+ default=False, dest="no_variants", action="store_true")
+ parser.add_option("--variants",
+ help="Comma-separated list of testing variants;"
+ " default: \"%s\"" % ",".join(VARIANTS))
+ parser.add_option("--exhaustive-variants",
+ default=False, action="store_true",
+ help="Deprecated. "
+ "Equivalent to passing --variants=exhaustive")
+ parser.add_option("-p", "--progress",
+ help=("The style of progress indicator"
+ " (verbose, dots, color, mono)"),
+ choices=progress.PROGRESS_INDICATORS.keys(),
+ default="mono")
+ parser.add_option("--quickcheck", default=False, action="store_true",
+ help=("Quick check mode (skip slow tests)"))
+ parser.add_option("--report", help="Print a summary of the tests to be"
+ " run",
+ default=False, action="store_true")
+ parser.add_option("--json-test-results",
+ help="Path to a file for storing json results.")
+ parser.add_option("--flakiness-results",
+ help="Path to a file for storing flakiness json.")
+ parser.add_option("--rerun-failures-count",
+ help=("Number of times to rerun each failing test case."
+ " Very slow tests will be rerun only once."),
+ default=0, type="int")
+ parser.add_option("--rerun-failures-max",
+ help="Maximum number of failing test cases to rerun.",
+ default=100, type="int")
+ parser.add_option("--dont-skip-slow-simulator-tests",
+ help="Don't skip more slow tests when using a"
+ " simulator.",
+ default=False, action="store_true",
+ dest="dont_skip_simulator_slow_tests")
+ parser.add_option("--swarming",
+ help="Indicates running test driver on swarming.",
+ default=False, action="store_true")
+ parser.add_option("--time", help="Print timing information after running",
+ default=False, action="store_true")
+ parser.add_option("-t", "--timeout", help="Timeout in seconds",
+ default=TIMEOUT_DEFAULT, type="int")
+ parser.add_option("--warn-unused", help="Report unused rules",
+ default=False, action="store_true")
+ parser.add_option("--junitout", help="File name of the JUnit output")
+ parser.add_option("--junittestsuite",
+ help="The testsuite name in the JUnit output file",
+ default="v8tests")
+ parser.add_option("--random-seed", default=0, dest="random_seed",
+ help="Default seed for initializing random generator",
+ type=int)
+ parser.add_option("--random-seed-stress-count", default=1, type="int",
+ dest="random_seed_stress_count",
+ help="Number of runs with different random seeds")
+
+ def _process_options(self, options):
+ global VARIANTS
+
+ if options.sancov_dir:
+ self.sancov_dir = options.sancov_dir
+ if not os.path.exists(self.sancov_dir):
+ print("sancov-dir %s doesn't exist" % self.sancov_dir)
+ raise base_runner.TestRunnerError()
+
+ options.command_prefix = shlex.split(options.command_prefix)
+ options.extra_flags = sum(map(shlex.split, options.extra_flags), [])
+
+ if options.gc_stress:
+ options.extra_flags += GC_STRESS_FLAGS
+
+ if self.build_config.asan:
+ options.extra_flags.append("--invoke-weak-callbacks")
+ options.extra_flags.append("--omit-quit")
+
+ if options.novfp3:
+ options.extra_flags.append("--noenable-vfp3")
+
+ if options.no_variants: # pragma: no cover
+ print ("Option --no-variants is deprecated. "
+ "Pass --variants=default instead.")
+ assert not options.variants
+ options.variants = "default"
+
+ if options.exhaustive_variants: # pragma: no cover
+ # TODO(machenbach): Switch infra to --variants=exhaustive after M65.
+ print ("Option --exhaustive-variants is deprecated. "
+ "Pass --variants=exhaustive instead.")
+ # This is used on many bots. It includes a larger set of default
+ # variants.
+ # Other options for manipulating variants still apply afterwards.
+ assert not options.variants
+ options.variants = "exhaustive"
+
+ if options.quickcheck:
+ assert not options.variants
+ options.variants = "stress,default"
+ options.slow_tests = "skip"
+ options.pass_fail_tests = "skip"
+
+ if self.build_config.predictable:
+ options.variants = "default"
+ options.extra_flags.append("--predictable")
+ options.extra_flags.append("--verify_predictable")
+ options.extra_flags.append("--no-inline-new")
+ # Add predictable wrapper to command prefix.
+ options.command_prefix = (
+ [sys.executable, PREDICTABLE_WRAPPER] + options.command_prefix)
+
+ # TODO(machenbach): Figure out how to test a bigger subset of variants on
+ # msan.
+ if self.build_config.msan:
+ options.variants = "default"
+
+ if options.j == 0:
+ options.j = multiprocessing.cpu_count()
+
+ if options.random_seed_stress_count <= 1 and options.random_seed == 0:
+ options.random_seed = self._random_seed()
+
+ # Use developer defaults if no variant was specified.
+ options.variants = options.variants or "dev"
+
+ if options.variants == "infra_staging":
+ options.variants = "exhaustive"
+ options.infra_staging = True
+
+ # Resolve variant aliases and dedupe.
+ # TODO(machenbach): Don't mutate global variable. Rather pass mutated
+ # version as local variable.
+ VARIANTS = list(set(reduce(
+ list.__add__,
+ (VARIANT_ALIASES.get(v, [v]) for v in options.variants.split(",")),
+ [],
+ )))
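+    # For example, --variants=dev,extra resolves to the deduplicated union
+    # of ["default"] and ["future", "liftoff", "trusted"] (set order, so
+    # the resulting list is unsorted).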
+
+ if not set(VARIANTS).issubset(ALL_VARIANTS):
+ print "All variants must be in %s" % str(ALL_VARIANTS)
+ raise base_runner.TestRunnerError()
+
+ def CheckTestMode(name, option): # pragma: no cover
+      if option not in ["run", "skip", "dontcare"]:
+ print "Unknown %s mode %s" % (name, option)
+ raise base_runner.TestRunnerError()
+ CheckTestMode("slow test", options.slow_tests)
+ CheckTestMode("pass|fail test", options.pass_fail_tests)
+ if self.build_config.no_i18n:
+ base_runner.TEST_MAP["bot_default"].remove("intl")
+ base_runner.TEST_MAP["default"].remove("intl")
+ # TODO(machenbach): uncomment after infra side lands.
+ # base_runner.TEST_MAP["d8_default"].remove("intl")
+
+ def _setup_env(self):
+ super(StandardTestRunner, self)._setup_env()
+
+ symbolizer_option = self._get_external_symbolizer_option()
+
+ if self.sancov_dir:
+ os.environ['ASAN_OPTIONS'] = ":".join([
+ 'coverage=1',
+ 'coverage_dir=%s' % self.sancov_dir,
+ symbolizer_option,
+ "allow_user_segv_handler=1",
+ ])
+
+ def _random_seed(self):
+ seed = 0
+ while not seed:
+ seed = random.SystemRandom().randint(-2147483648, 2147483647)
+ return seed
+
+ def _execute(self, args, options, suites):
+ print(">>> Running tests for %s.%s" % (self.build_config.arch,
+ self.mode_name))
+ # Populate context object.
+
+    # Simulators are slow, so allow a longer timeout.
+ if self.build_config.arch in SLOW_ARCHS:
+ options.timeout *= 2
+
+ options.timeout *= self.mode_options.timeout_scalefactor
+
+ if self.build_config.predictable:
+ # Predictable mode is slower.
+ options.timeout *= 2
+
+ ctx = context.Context(self.build_config.arch,
+ self.mode_options.execution_mode,
+ self.outdir,
+ self.mode_options.flags,
+ options.verbose,
+ options.timeout,
+ options.isolates,
+ options.command_prefix,
+ options.extra_flags,
+ self.build_config.no_i18n,
+ options.random_seed,
+ options.no_sorting,
+ options.rerun_failures_count,
+ options.rerun_failures_max,
+ options.no_harness,
+ use_perf_data=not options.swarming,
+ sancov_dir=self.sancov_dir,
+ infra_staging=options.infra_staging)
+
+ # TODO(all): Combine "simulator" and "simulator_run".
+ # TODO(machenbach): In GN we can derive simulator run from
+ # target_arch != v8_target_arch in the dumped build config.
+ simulator_run = (
+ not options.dont_skip_simulator_slow_tests and
+ self.build_config.arch in [
+ 'arm64', 'arm', 'mipsel', 'mips', 'mips64', 'mips64el', 'ppc',
+ 'ppc64', 's390', 's390x'] and
+ bool(base_runner.ARCH_GUESS) and
+ self.build_config.arch != base_runner.ARCH_GUESS)
+ # Find available test suites and read test cases from them.
+ variables = {
+ "arch": self.build_config.arch,
+ "asan": self.build_config.asan,
+ "byteorder": sys.byteorder,
+ "dcheck_always_on": self.build_config.dcheck_always_on,
+ "deopt_fuzzer": False,
+ "gc_fuzzer": False,
+ "gc_stress": options.gc_stress,
+ "gcov_coverage": self.build_config.gcov_coverage,
+ "isolates": options.isolates,
+ "mode": self.mode_options.status_mode,
+ "msan": self.build_config.msan,
+ "no_harness": options.no_harness,
+ "no_i18n": self.build_config.no_i18n,
+ "no_snap": self.build_config.no_snap,
+ "novfp3": options.novfp3,
+ "predictable": self.build_config.predictable,
+ "simulator": utils.UseSimulator(self.build_config.arch),
+ "simulator_run": simulator_run,
+ "system": utils.GuessOS(),
+ "tsan": self.build_config.tsan,
+ "ubsan_vptr": self.build_config.ubsan_vptr,
+ }
+
+ progress_indicator = progress.IndicatorNotifier()
+ progress_indicator.Register(
+ progress.PROGRESS_INDICATORS[options.progress]())
+ if options.junitout: # pragma: no cover
+ progress_indicator.Register(progress.JUnitTestProgressIndicator(
+ options.junitout, options.junittestsuite))
+ if options.json_test_results:
+ progress_indicator.Register(progress.JsonTestProgressIndicator(
+ options.json_test_results,
+ self.build_config.arch,
+ self.mode_options.execution_mode,
+ ctx.random_seed))
+ if options.flakiness_results: # pragma: no cover
+ progress_indicator.Register(progress.FlakinessTestProgressIndicator(
+ options.flakiness_results))
+
+ if options.infra_staging:
+ for s in suites:
+ s.ReadStatusFile(variables)
+ s.ReadTestCases(ctx)
+
+ return self._run_test_procs(suites, args, options, progress_indicator,
+ ctx)
+
+ all_tests = []
+ num_tests = 0
+ for s in suites:
+ s.ReadStatusFile(variables)
+ s.ReadTestCases(ctx)
+ if len(args) > 0:
+ s.FilterTestCasesByArgs(args)
+ all_tests += s.tests
+
+ # First filtering by status applying the generic rules (tests without
+ # variants)
+ if options.warn_unused:
+ tests = [(t.name, t.variant) for t in s.tests]
+ s.statusfile.warn_unused_rules(tests, check_variant_rules=False)
+ s.FilterTestCasesByStatus(options.slow_tests, options.pass_fail_tests)
+
+ if options.cat:
+ verbose.PrintTestSource(s.tests)
+ continue
+ variant_gen = s.CreateLegacyVariantsGenerator(VARIANTS)
+ variant_tests = [ t.create_variant(v, flags)
+ for t in s.tests
+ for v in variant_gen.FilterVariantsByTest(t)
+ for flags in variant_gen.GetFlagSets(t, v) ]
+
+ if options.random_seed_stress_count > 1:
+ # Duplicate test for random seed stress mode.
+ def iter_seed_flags():
+ for _ in range(0, options.random_seed_stress_count):
+ # Use given random seed for all runs (set by default in
+ # execution.py) or a new random seed if none is specified.
+ if options.random_seed:
+ yield []
+ else:
+ yield ["--random-seed=%d" % self._random_seed()]
+ s.tests = [
+ t.create_variant(t.variant, flags, 'seed-stress-%d' % n)
+ for t in variant_tests
+ for n, flags in enumerate(iter_seed_flags())
+ ]
+ else:
+ s.tests = variant_tests
+
+ # Second filtering by status applying also the variant-dependent rules.
+ if options.warn_unused:
+ tests = [(t.name, t.variant) for t in s.tests]
+ s.statusfile.warn_unused_rules(tests, check_variant_rules=True)
+
+ s.FilterTestCasesByStatus(options.slow_tests, options.pass_fail_tests)
+ s.tests = self._shard_tests(s.tests, options)
+
+ for t in s.tests:
+ t.cmd = t.get_command(ctx)
+
+ num_tests += len(s.tests)
+
+ if options.cat:
+ return 0 # We're done here.
+
+ if options.report:
+ verbose.PrintReport(all_tests)
+
+ # Run the tests.
+ start_time = time.time()
+
+ if self.build_config.predictable:
+ outproc_factory = predictable.get_outproc
+ else:
+ outproc_factory = None
+
+ runner = execution.Runner(suites, progress_indicator, ctx,
+ outproc_factory)
+ exit_code = runner.Run(options.j)
+ overall_duration = time.time() - start_time
+
+ if options.time:
+ verbose.PrintTestDurations(suites, runner.outputs, overall_duration)
+
+ if num_tests == 0:
+ print("Warning: no tests were run!")
+
+ if exit_code == 1 and options.json_test_results:
+ print("Force exit code 0 after failures. Json test results file "
+ "generated with failure information.")
+ exit_code = 0
+
+ if self.sancov_dir:
+      # If tests ran with sanitizer coverage, merge coverage files at the end.
+ try:
+ print "Merging sancov files."
+ subprocess.check_call([
+ sys.executable,
+ join(self.basedir, "tools", "sanitizers", "sancov_merger.py"),
+ "--coverage-dir=%s" % self.sancov_dir])
+      except Exception:
+ print >> sys.stderr, "Error: Merging sancov files failed."
+ exit_code = 1
+
+ return exit_code
+
+ def _shard_tests(self, tests, options):
+ shard_run, shard_count = self._get_shard_info(options)
+
+ if shard_count < 2:
+ return tests
+ count = 0
+ shard = []
+ for test in tests:
+ if count % shard_count == shard_run - 1:
+ shard.append(test)
+ count += 1
+ return shard
+
+ def _run_test_procs(self, suites, args, options, progress_indicator,
+ context):
+ jobs = options.j
+
+ print '>>> Running with test processors'
+ loader = LoadProc()
+ tests_counter = TestsCounter()
+ results = ResultsTracker()
+ indicators = progress_indicator.ToProgressIndicatorProcs()
+ execproc = ExecutionProc(jobs, context)
+
+ procs = [
+ loader,
+ NameFilterProc(args) if args else None,
+ StatusFileFilterProc(options.slow_tests, options.pass_fail_tests),
+ self._create_shard_proc(options),
+ tests_counter,
+ VariantProc(VARIANTS),
+ StatusFileFilterProc(options.slow_tests, options.pass_fail_tests),
+ ] + indicators + [
+ results,
+ self._create_rerun_proc(context),
+ execproc,
+ ]
+
+ procs = filter(None, procs)
+
+ for i in xrange(0, len(procs) - 1):
+ procs[i].connect_to(procs[i + 1])
+
+ tests = [t for s in suites for t in s.tests]
+ tests.sort(key=lambda t: t.is_slow, reverse=True)
+
+ loader.setup()
+ loader.load_tests(tests)
+
+ print '>>> Running %d base tests' % tests_counter.total
+ tests_counter.remove_from_chain()
+
+ execproc.start()
+
+ for indicator in indicators:
+ indicator.finished()
+
+ print '>>> %d tests ran' % results.total
+
+ exit_code = 0
+ if results.failed:
+ exit_code = 1
+ if results.remaining:
+ exit_code = 2
+
+ if exit_code == 1 and options.json_test_results:
+ print("Force exit code 0 after failures. Json test results file "
+ "generated with failure information.")
+ exit_code = 0
+ return exit_code
+
+ def _create_rerun_proc(self, ctx):
+ if not ctx.rerun_failures_count:
+ return None
+ return RerunProc(ctx.rerun_failures_count,
+ ctx.rerun_failures_max)
+
+
+if __name__ == '__main__':
+ sys.exit(StandardTestRunner().execute())
diff --git a/src/v8/tools/testrunner/testproc/__init__.py b/src/v8/tools/testrunner/testproc/__init__.py
new file mode 100644
index 0000000..4433538
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/__init__.py
@@ -0,0 +1,3 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
diff --git a/src/v8/tools/testrunner/testproc/base.py b/src/v8/tools/testrunner/testproc/base.py
new file mode 100644
index 0000000..1a87dbe
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/base.py
@@ -0,0 +1,207 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from .result import SKIPPED
+
+
+"""
+Pipeline
+
+Test processors are chained together and communicate with each other by
+calling previous/next processor in the chain.
+ ----next_test()----> ----next_test()---->
+Proc1 Proc2 Proc3
+ <---result_for()---- <---result_for()----
+
+For every next_test there is exactly one result_for call.
+If a processor ignores a test, it has to return a SkippedResult.
+If it creates multiple subtests for one test and wants to pass all of their
+results to the previous processor, it can enclose them in a GroupedResult.
+
+
+Subtests
+
+When a test processor needs to modify a test or create some variants of it,
+it creates subtests and sends them to the next processor.
+Each subtest has:
+- procid - a globally unique id that should contain the id of the parent test
+  and a suffix given by the test processor, e.g. its name + subtest type.
+- processor - the processor that created it
+- origin - a pointer to the parent (sub)test
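+
+Example flow for a chain Proc1 -> Proc2 (illustrative):
+- Proc1 calls self._send_test(test), which invokes Proc2.next_test(test).
+- When Proc2 has a result it calls self._send_result(test, result), which
+  invokes Proc1.result_for(test, result).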
+"""
+
+
+# Requirement levels describing how much of each result a processor needs,
+# from weakest (the result can be dropped entirely) to strongest (only the
+# stdout/stderr of passing tests may be dropped).
+DROP_RESULT = 0       # the result isn't needed at all
+DROP_OUTPUT = 1       # the result is needed, but not its output
+DROP_PASS_OUTPUT = 2  # output is only needed for unexpected results
+DROP_PASS_STDOUT = 3  # stdout/stderr are only needed for unexpected results
+
+def get_reduce_result_function(requirement):
+ if requirement == DROP_RESULT:
+ return lambda _: None
+
+ if requirement == DROP_OUTPUT:
+ def f(result):
+ result.output = None
+ return result
+ return f
+
+ if requirement == DROP_PASS_OUTPUT:
+ def f(result):
+ if not result.has_unexpected_output:
+ result.output = None
+ return result
+ return f
+
+ if requirement == DROP_PASS_STDOUT:
+ def f(result):
+ if not result.has_unexpected_output:
+ result.output.stdout = None
+ result.output.stderr = None
+ return result
+ return f
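+
+# Example: get_reduce_result_function(DROP_PASS_OUTPUT) returns a reducer
+# that clears the output of results that ran as expected and leaves results
+# with unexpected output untouched.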
+
+
+class TestProc(object):
+ def __init__(self):
+ self._prev_proc = None
+ self._next_proc = None
+ self._requirement = DROP_RESULT
+ self._prev_requirement = None
+ self._reduce_result = lambda result: result
+
+ def connect_to(self, next_proc):
+ """Puts `next_proc` after itself in the chain."""
+ next_proc._prev_proc = self
+ self._next_proc = next_proc
+
+ def remove_from_chain(self):
+ if self._prev_proc:
+ self._prev_proc._next_proc = self._next_proc
+ if self._next_proc:
+ self._next_proc._prev_proc = self._prev_proc
+
+ def setup(self, requirement=DROP_RESULT):
+ """
+    Method called by the previous processor or the pipeline creator to let
+    the processors know what part of the result can be dropped.
+ """
+ self._prev_requirement = requirement
+ if self._next_proc:
+ self._next_proc.setup(max(requirement, self._requirement))
+ if self._prev_requirement < self._requirement:
+ self._reduce_result = get_reduce_result_function(self._prev_requirement)
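+    # Example: if everything upstream only asked for DROP_OUTPUT while this
+    # processor needs DROP_PASS_STDOUT, results sent back via _send_result()
+    # are reduced to the upstream requirement, i.e. their output is dropped.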
+
+ def next_test(self, test):
+ """
+    Method called by the previous processor whenever it produces a new test.
+    This method shouldn't be called by anything other than the previous
+    processor.
+ """
+ raise NotImplementedError()
+
+ def result_for(self, test, result):
+ """
+    Method called by the next processor whenever it has a result for some
+    test. This method shouldn't be called by anything other than the next
+    processor.
+ """
+ raise NotImplementedError()
+
+ def heartbeat(self):
+ if self._prev_proc:
+ self._prev_proc.heartbeat()
+
+ ### Communication
+
+ def _send_test(self, test):
+ """Helper method for sending test to the next processor."""
+ self._next_proc.next_test(test)
+
+ def _send_result(self, test, result):
+ """Helper method for sending result to the previous processor."""
+ result = self._reduce_result(result)
+ self._prev_proc.result_for(test, result)
+
+
+
+class TestProcObserver(TestProc):
+ """Processor used for observing the data."""
+ def __init__(self):
+ super(TestProcObserver, self).__init__()
+
+ def next_test(self, test):
+ self._on_next_test(test)
+ self._send_test(test)
+
+ def result_for(self, test, result):
+ self._on_result_for(test, result)
+ self._send_result(test, result)
+
+ def heartbeat(self):
+ self._on_heartbeat()
+ super(TestProcObserver, self).heartbeat()
+
+ def _on_next_test(self, test):
+    """Method called after receiving a test from the previous processor but
+    before sending it to the next one."""
+ pass
+
+ def _on_result_for(self, test, result):
+    """Method called after receiving a result from the next processor but
+    before sending it to the previous one."""
+ pass
+
+ def _on_heartbeat(self):
+ pass
+
+
+class TestProcProducer(TestProc):
+ """Processor for creating subtests."""
+
+ def __init__(self, name):
+ super(TestProcProducer, self).__init__()
+ self._name = name
+
+ def next_test(self, test):
+ self._next_test(test)
+
+ def result_for(self, subtest, result):
+ self._result_for(subtest.origin, subtest, result)
+
+ ### Implementation
+ def _next_test(self, test):
+ raise NotImplementedError()
+
+ def _result_for(self, test, subtest, result):
+ """
+ result_for method extended with `subtest` parameter.
+
+    Args:
+      test: test used by the current processor to create the subtest.
+      subtest: the subtest that `result` belongs to.
+      result: subtest execution result created by the output processor.
+ """
+ raise NotImplementedError()
+
+ ### Managing subtests
+ def _create_subtest(self, test, subtest_id, **kwargs):
+    """Creates a subtest whose id is '<processor name>-<subtest_id>'."""
+ return test.create_subtest(self, '%s-%s' % (self._name, subtest_id),
+ **kwargs)
+
+
+class TestProcFilter(TestProc):
+ """Processor for filtering tests."""
+
+ def next_test(self, test):
+ if self._filter(test):
+ self._send_result(test, SKIPPED)
+ else:
+ self._send_test(test)
+
+ def result_for(self, test, result):
+ self._send_result(test, result)
+
+ def _filter(self, test):
+ """Returns whether test should be filtered out."""
+ raise NotImplementedError()
diff --git a/src/v8/tools/testrunner/testproc/execution.py b/src/v8/tools/testrunner/testproc/execution.py
new file mode 100644
index 0000000..021b02a
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/execution.py
@@ -0,0 +1,92 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import collections
+import traceback
+
+from . import base
+from ..local import pool
+
+
+# Global function for multiprocessing, because pickling a static method doesn't
+# work on Windows.
+def run_job(job, process_context):
+ return job.run(process_context)
+
+
+def create_process_context(requirement):
+ return ProcessContext(base.get_reduce_result_function(requirement))
+
+
+JobResult = collections.namedtuple('JobResult', ['id', 'result'])
+ProcessContext = collections.namedtuple('ProcessContext', ['reduce_result_f'])
+
+
+class Job(object):
+ def __init__(self, test_id, cmd, outproc, keep_output):
+ self.test_id = test_id
+ self.cmd = cmd
+ self.outproc = outproc
+ self.keep_output = keep_output
+
+ def run(self, process_ctx):
+ output = self.cmd.execute()
+ result = self.outproc.process(output)
+ if not self.keep_output:
+ result = process_ctx.reduce_result_f(result)
+ return JobResult(self.test_id, result)
+
+
+class ExecutionProc(base.TestProc):
+  """Last processor in the chain. Instead of passing tests further, it
+  creates commands and output processors, executes them in multiple worker
+  processes and sends results back to the previous processor.
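+
+  Tests are queued via next_test(); start() then drains the worker pool,
+  matches each JobResult back to its test and reports it upstream via
+  _send_result().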
+ """
+
+ def __init__(self, jobs, context):
+ super(ExecutionProc, self).__init__()
+ self._pool = pool.Pool(jobs)
+ self._context = context
+ self._tests = {}
+
+ def connect_to(self, next_proc):
+ assert False, 'ExecutionProc cannot be connected to anything'
+
+ def start(self):
+ try:
+ it = self._pool.imap_unordered(
+ fn=run_job,
+ gen=[],
+ process_context_fn=create_process_context,
+ process_context_args=[self._prev_requirement],
+ )
+ for pool_result in it:
+ if pool_result.heartbeat:
+ continue
+
+ job_result = pool_result.value
+ test_id, result = job_result
+
+ test, result.cmd = self._tests[test_id]
+ del self._tests[test_id]
+ self._send_result(test, result)
+ except KeyboardInterrupt:
+ raise
+ except:
+ traceback.print_exc()
+ raise
+ finally:
+ self._pool.terminate()
+
+ def next_test(self, test):
+ test_id = test.procid
+ cmd = test.get_command(self._context)
+ self._tests[test_id] = test, cmd
+
+ # TODO(majeski): Needs factory for outproc as in local/execution.py
+ outproc = test.output_proc
+ self._pool.add([Job(test_id, cmd, outproc, test.keep_output)])
+
+ def result_for(self, test, result):
+ assert False, 'ExecutionProc cannot receive results'
diff --git a/src/v8/tools/testrunner/testproc/filter.py b/src/v8/tools/testrunner/testproc/filter.py
new file mode 100644
index 0000000..5081997
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/filter.py
@@ -0,0 +1,83 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from collections import defaultdict
+import fnmatch
+
+from . import base
+
+
+class StatusFileFilterProc(base.TestProcFilter):
+  """Filters tests by outcomes from the status file.
+
+  The status file has to be loaded before this processor is used.
+
+ Args:
+ slow_tests_mode: What to do with slow tests.
+ pass_fail_tests_mode: What to do with pass or fail tests.
+
+ Mode options:
+ None (default): don't skip
+ "skip": skip if slow/pass_fail
+ "run": skip if not slow/pass_fail
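+
+  Example (illustrative): StatusFileFilterProc('run', None) keeps only tests
+  marked as slow, while StatusFileFilterProc('skip', 'skip') drops both slow
+  and pass/fail tests. Tests marked as SKIP are always dropped.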
+ """
+
+ def __init__(self, slow_tests_mode, pass_fail_tests_mode):
+ super(StatusFileFilterProc, self).__init__()
+ self._slow_tests_mode = slow_tests_mode
+ self._pass_fail_tests_mode = pass_fail_tests_mode
+
+ def _filter(self, test):
+ return (
+ test.do_skip or
+ self._skip_slow(test.is_slow) or
+ self._skip_pass_fail(test.is_pass_or_fail)
+ )
+
+ def _skip_slow(self, is_slow):
+ return (
+ (self._slow_tests_mode == 'run' and not is_slow) or
+ (self._slow_tests_mode == 'skip' and is_slow)
+ )
+
+ def _skip_pass_fail(self, is_pass_fail):
+ return (
+ (self._pass_fail_tests_mode == 'run' and not is_pass_fail) or
+ (self._pass_fail_tests_mode == 'skip' and is_pass_fail)
+ )
+
+
+class NameFilterProc(base.TestProcFilter):
+ """Filters tests based on command-line arguments.
+
+  Each arg can be a glob: asterisks in any position of the name represent
+  zero or more characters. Without asterisks, only exact matches are used,
+  with the exception of a bare test-suite name, which matches every test in
+  that suite.
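+
+  Example matches (paths are hypothetical):
+    'mjsunit'           -> every test in the mjsunit suite
+    'mjsunit/regress-*' -> tests whose path matches the glob
+    'cctest/test-foo'   -> only tests whose path is exactly 'test-foo'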
+ """
+ def __init__(self, args):
+ super(NameFilterProc, self).__init__()
+
+ self._globs = defaultdict(list)
+ for a in args:
+ argpath = a.split('/')
+ suitename = argpath[0]
+ path = '/'.join(argpath[1:]) or '*'
+ self._globs[suitename].append(path)
+
+ for s, globs in self._globs.iteritems():
+ if not globs or '*' in globs:
+ self._globs[s] = []
+
+ def _filter(self, test):
+ globs = self._globs.get(test.suite.name)
+ if globs is None:
+ return True
+
+ if not globs:
+ return False
+
+ for g in globs:
+ if fnmatch.fnmatch(test.path, g):
+ return False
+ return True
diff --git a/src/v8/tools/testrunner/testproc/loader.py b/src/v8/tools/testrunner/testproc/loader.py
new file mode 100644
index 0000000..0a3d0df
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/loader.py
@@ -0,0 +1,27 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from . import base
+
+
+class LoadProc(base.TestProc):
+ """First processor in the chain that passes all tests to the next processor.
+ """
+
+ def load_tests(self, tests):
+ loaded = set()
+ for test in tests:
+ if test.procid in loaded:
+ print 'Warning: %s already obtained' % test.procid
+ continue
+
+ loaded.add(test.procid)
+ self._send_test(test)
+
+ def next_test(self, test):
+ assert False, 'Nothing can be connected to the LoadProc'
+
+ def result_for(self, test, result):
+ # Ignore all results.
+ pass
diff --git a/src/v8/tools/testrunner/testproc/progress.py b/src/v8/tools/testrunner/testproc/progress.py
new file mode 100644
index 0000000..78514f7
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/progress.py
@@ -0,0 +1,385 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import json
+import os
+import sys
+import time
+
+from . import base
+from ..local import junit_output
+
+
+def print_failure_header(test):
+ if test.output_proc.negative:
+ negative_marker = '[negative] '
+ else:
+ negative_marker = ''
+ print "=== %(label)s %(negative)s===" % {
+ 'label': test,
+ 'negative': negative_marker,
+ }
+
+
+class TestsCounter(base.TestProcObserver):
+ def __init__(self):
+ super(TestsCounter, self).__init__()
+ self.total = 0
+
+ def _on_next_test(self, test):
+ self.total += 1
+
+
+class ResultsTracker(base.TestProcObserver):
+ def __init__(self):
+ super(ResultsTracker, self).__init__()
+ self._requirement = base.DROP_OUTPUT
+
+ self.failed = 0
+ self.remaining = 0
+ self.total = 0
+
+ def _on_next_test(self, test):
+ self.total += 1
+ self.remaining += 1
+
+ def _on_result_for(self, test, result):
+ self.remaining -= 1
+ if result.has_unexpected_output:
+ self.failed += 1
+
+
+class ProgressIndicator(base.TestProcObserver):
+ def finished(self):
+ pass
+
+
+class SimpleProgressIndicator(ProgressIndicator):
+ def __init__(self):
+ super(SimpleProgressIndicator, self).__init__()
+ self._requirement = base.DROP_PASS_OUTPUT
+
+ self._failed = []
+ self._total = 0
+
+ def _on_next_test(self, test):
+ self._total += 1
+
+ def _on_result_for(self, test, result):
+ # TODO(majeski): Support for dummy/grouped results
+ if result.has_unexpected_output:
+ self._failed.append((test, result))
+
+ def finished(self):
+ crashed = 0
+ print
+ for test, result in self._failed:
+ print_failure_header(test)
+ if result.output.stderr:
+ print "--- stderr ---"
+ print result.output.stderr.strip()
+ if result.output.stdout:
+ print "--- stdout ---"
+ print result.output.stdout.strip()
+ print "Command: %s" % result.cmd.to_string()
+ if result.output.HasCrashed():
+ print "exit code: %d" % result.output.exit_code
+ print "--- CRASHED ---"
+ crashed += 1
+ if result.output.HasTimedOut():
+ print "--- TIMEOUT ---"
+ if len(self._failed) == 0:
+ print "==="
+ print "=== All tests succeeded"
+ print "==="
+ else:
+ print
+ print "==="
+ print "=== %i tests failed" % len(self._failed)
+ if crashed > 0:
+ print "=== %i tests CRASHED" % crashed
+ print "==="
+
+
+class VerboseProgressIndicator(SimpleProgressIndicator):
+ def _on_result_for(self, test, result):
+ super(VerboseProgressIndicator, self)._on_result_for(test, result)
+ # TODO(majeski): Support for dummy/grouped results
+ if result.has_unexpected_output:
+ if result.output.HasCrashed():
+ outcome = 'CRASH'
+ else:
+ outcome = 'FAIL'
+ else:
+ outcome = 'pass'
+ print 'Done running %s: %s' % (test, outcome)
+ sys.stdout.flush()
+
+ def _on_heartbeat(self):
+ print 'Still working...'
+ sys.stdout.flush()
+
+
+class DotsProgressIndicator(SimpleProgressIndicator):
+ def __init__(self):
+ super(DotsProgressIndicator, self).__init__()
+ self._count = 0
+
+ def _on_result_for(self, test, result):
+ # TODO(majeski): Support for dummy/grouped results
+ self._count += 1
+ if self._count > 1 and self._count % 50 == 1:
+ sys.stdout.write('\n')
+ if result.has_unexpected_output:
+ if result.output.HasCrashed():
+ sys.stdout.write('C')
+ sys.stdout.flush()
+ elif result.output.HasTimedOut():
+ sys.stdout.write('T')
+ sys.stdout.flush()
+ else:
+ sys.stdout.write('F')
+ sys.stdout.flush()
+ else:
+ sys.stdout.write('.')
+ sys.stdout.flush()
+
+
+class CompactProgressIndicator(ProgressIndicator):
+ def __init__(self, templates):
+ super(CompactProgressIndicator, self).__init__()
+ self._requirement = base.DROP_PASS_OUTPUT
+
+ self._templates = templates
+ self._last_status_length = 0
+ self._start_time = time.time()
+
+ self._total = 0
+ self._passed = 0
+ self._failed = 0
+
+ def _on_next_test(self, test):
+ self._total += 1
+
+ def _on_result_for(self, test, result):
+ # TODO(majeski): Support for dummy/grouped results
+ if result.has_unexpected_output:
+ self._failed += 1
+ else:
+ self._passed += 1
+
+ self._print_progress(str(test))
+ if result.has_unexpected_output:
+ output = result.output
+ stdout = output.stdout.strip()
+ stderr = output.stderr.strip()
+
+ self._clear_line(self._last_status_length)
+ print_failure_header(test)
+ if len(stdout):
+ print self._templates['stdout'] % stdout
+ if len(stderr):
+ print self._templates['stderr'] % stderr
+ print "Command: %s" % result.cmd
+ if output.HasCrashed():
+ print "exit code: %d" % output.exit_code
+ print "--- CRASHED ---"
+ if output.HasTimedOut():
+ print "--- TIMEOUT ---"
+
+ def finished(self):
+ self._print_progress('Done')
+ print
+
+ def _print_progress(self, name):
+ self._clear_line(self._last_status_length)
+ elapsed = time.time() - self._start_time
+ if not self._total:
+ progress = 0
+ else:
+ progress = (self._passed + self._failed) * 100 // self._total
+ status = self._templates['status_line'] % {
+ 'passed': self._passed,
+ 'progress': progress,
+ 'failed': self._failed,
+ 'test': name,
+ 'mins': int(elapsed) / 60,
+ 'secs': int(elapsed) % 60
+ }
+ status = self._truncate(status, 78)
+ self._last_status_length = len(status)
+ print status,
+ sys.stdout.flush()
+
+ def _truncate(self, string, length):
+ if length and len(string) > (length - 3):
+ return string[:(length - 3)] + "..."
+ else:
+ return string
+
+ def _clear_line(self, last_length):
+ raise NotImplementedError()
+
+
+class ColorProgressIndicator(CompactProgressIndicator):
+ def __init__(self):
+ templates = {
+ 'status_line': ("[%(mins)02i:%(secs)02i|"
+ "\033[34m%%%(progress) 4d\033[0m|"
+ "\033[32m+%(passed) 4d\033[0m|"
+ "\033[31m-%(failed) 4d\033[0m]: %(test)s"),
+ 'stdout': "\033[1m%s\033[0m",
+ 'stderr': "\033[31m%s\033[0m",
+ }
+ super(ColorProgressIndicator, self).__init__(templates)
+
+ def _clear_line(self, last_length):
+ print "\033[1K\r",
+
+
+class MonochromeProgressIndicator(CompactProgressIndicator):
+ def __init__(self):
+ templates = {
+ 'status_line': ("[%(mins)02i:%(secs)02i|%%%(progress) 4d|"
+ "+%(passed) 4d|-%(failed) 4d]: %(test)s"),
+ 'stdout': '%s',
+ 'stderr': '%s',
+ }
+ super(MonochromeProgressIndicator, self).__init__(templates)
+
+ def _clear_line(self, last_length):
+ print ("\r" + (" " * last_length) + "\r"),
+
+
+class JUnitTestProgressIndicator(ProgressIndicator):
+ def __init__(self, junitout, junittestsuite):
+ super(JUnitTestProgressIndicator, self).__init__()
+ self._requirement = base.DROP_PASS_STDOUT
+
+ self.outputter = junit_output.JUnitTestOutput(junittestsuite)
+ if junitout:
+ self.outfile = open(junitout, "w")
+ else:
+ self.outfile = sys.stdout
+
+ def _on_result_for(self, test, result):
+ # TODO(majeski): Support for dummy/grouped results
+ fail_text = ""
+ output = result.output
+ if result.has_unexpected_output:
+ stdout = output.stdout.strip()
+ if len(stdout):
+ fail_text += "stdout:\n%s\n" % stdout
+ stderr = output.stderr.strip()
+ if len(stderr):
+ fail_text += "stderr:\n%s\n" % stderr
+ fail_text += "Command: %s" % result.cmd.to_string()
+ if output.HasCrashed():
+ fail_text += "exit code: %d\n--- CRASHED ---" % output.exit_code
+ if output.HasTimedOut():
+ fail_text += "--- TIMEOUT ---"
+ self.outputter.HasRunTest(
+ test_name=str(test),
+ test_cmd=result.cmd.to_string(relative=True),
+ test_duration=output.duration,
+ test_failure=fail_text)
+
+ def finished(self):
+ self.outputter.FinishAndWrite(self.outfile)
+ if self.outfile != sys.stdout:
+ self.outfile.close()
+
+
+class JsonTestProgressIndicator(ProgressIndicator):
+ def __init__(self, json_test_results, arch, mode, random_seed):
+ super(JsonTestProgressIndicator, self).__init__()
+ # We want to drop stdout/err for all passed tests on the first try, but we
+ # need to get outputs for all runs after the first one. To accommodate that,
+ # reruns are set to keep the result no matter what requirement says, i.e.
+ # keep_output set to True in the RerunProc.
+ self._requirement = base.DROP_PASS_STDOUT
+
+ self.json_test_results = json_test_results
+ self.arch = arch
+ self.mode = mode
+ self.random_seed = random_seed
+ self.results = []
+ self.tests = []
+
+ def _on_result_for(self, test, result):
+ if result.is_rerun:
+ self.process_results(test, result.results)
+ else:
+ self.process_results(test, [result])
+
+ def process_results(self, test, results):
+ for run, result in enumerate(results):
+ # TODO(majeski): Support for dummy/grouped results
+ output = result.output
+ # Buffer all tests for sorting the durations in the end.
+ # TODO(machenbach): Running average + buffer only slowest 20 tests.
+ self.tests.append((test, output.duration, result.cmd))
+
+ # Omit tests that run as expected on the first try.
+ # Everything that happens after the first run is included in the output
+ # even if it flakily passes.
+ if not result.has_unexpected_output and run == 0:
+ continue
+
+ self.results.append({
+ "name": str(test),
+ "flags": result.cmd.args,
+ "command": result.cmd.to_string(relative=True),
+ "run": run + 1,
+ "stdout": output.stdout,
+ "stderr": output.stderr,
+ "exit_code": output.exit_code,
+ "result": test.output_proc.get_outcome(output),
+ "expected": test.expected_outcomes,
+ "duration": output.duration,
+
+ # TODO(machenbach): This stores only the global random seed from the
+ # context and not possible overrides when using random-seed stress.
+ "random_seed": self.random_seed,
+ "target_name": test.get_shell(),
+ "variant": test.variant,
+ })
+
+ def finished(self):
+ complete_results = []
+ if os.path.exists(self.json_test_results):
+ with open(self.json_test_results, "r") as f:
+ # Buildbot might start out with an empty file.
+ complete_results = json.loads(f.read() or "[]")
+
+ duration_mean = None
+ if self.tests:
+ # Get duration mean.
+ duration_mean = (
+ sum(duration for (_, duration, cmd) in self.tests) /
+ float(len(self.tests)))
+
+ # Sort tests by duration.
+ self.tests.sort(key=lambda (_, duration, cmd): duration, reverse=True)
+ slowest_tests = [
+ {
+ "name": str(test),
+ "flags": cmd.args,
+ "command": cmd.to_string(relative=True),
+ "duration": duration,
+ "marked_slow": test.is_slow,
+ } for (test, duration, cmd) in self.tests[:20]
+ ]
+
+ complete_results.append({
+ "arch": self.arch,
+ "mode": self.mode,
+ "results": self.results,
+ "slowest_tests": slowest_tests,
+ "duration_mean": duration_mean,
+ "test_total": len(self.tests),
+ })
+
+ with open(self.json_test_results, "w") as f:
+ f.write(json.dumps(complete_results))
diff --git a/src/v8/tools/testrunner/testproc/rerun.py b/src/v8/tools/testrunner/testproc/rerun.py
new file mode 100644
index 0000000..7f96e02
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/rerun.py
@@ -0,0 +1,59 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import collections
+
+from . import base
+from .result import RerunResult
+
+
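+# Behaviour sketch: with rerun_max=2 a failing test is run up to three times
+# in total; once a run passes or the limit is reached, all collected results
+# are folded into a single result via RerunResult.create().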
+class RerunProc(base.TestProcProducer):
+ def __init__(self, rerun_max, rerun_max_total=None):
+ super(RerunProc, self).__init__('Rerun')
+ self._requirement = base.DROP_OUTPUT
+
+ self._rerun = {}
+ self._results = collections.defaultdict(list)
+ self._rerun_max = rerun_max
+ self._rerun_total_left = rerun_max_total
+
+ def _next_test(self, test):
+ self._send_next_subtest(test)
+
+ def _result_for(self, test, subtest, result):
+ # First result
+ if subtest.procid[-2:] == '-1':
+ # Passed, no reruns
+ if not result.has_unexpected_output:
+ self._send_result(test, result)
+ return
+
+ self._rerun[test.procid] = 0
+
+ results = self._results[test.procid]
+ results.append(result)
+
+ if self._needs_rerun(test, result):
+ self._rerun[test.procid] += 1
+ if self._rerun_total_left is not None:
+ self._rerun_total_left -= 1
+ self._send_next_subtest(test, self._rerun[test.procid])
+ else:
+ result = RerunResult.create(results)
+ self._finalize_test(test)
+ self._send_result(test, result)
+
+ def _needs_rerun(self, test, result):
+ # TODO(majeski): Limit reruns count for slow tests.
+ return ((self._rerun_total_left is None or self._rerun_total_left > 0) and
+ self._rerun[test.procid] < self._rerun_max and
+ result.has_unexpected_output)
+
+ def _send_next_subtest(self, test, run=0):
+ subtest = self._create_subtest(test, str(run + 1), keep_output=(run != 0))
+ self._send_test(subtest)
+
+ def _finalize_test(self, test):
+ del self._rerun[test.procid]
+ del self._results[test.procid]
diff --git a/src/v8/tools/testrunner/testproc/result.py b/src/v8/tools/testrunner/testproc/result.py
new file mode 100644
index 0000000..c817fc0
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/result.py
@@ -0,0 +1,97 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+
+class ResultBase(object):
+ @property
+ def is_skipped(self):
+ return False
+
+ @property
+ def is_grouped(self):
+ return False
+
+ @property
+ def is_rerun(self):
+ return False
+
+
+class Result(ResultBase):
+ """Result created by the output processor."""
+
+ def __init__(self, has_unexpected_output, output, cmd=None):
+ self.has_unexpected_output = has_unexpected_output
+ self.output = output
+ self.cmd = cmd
+
+
+class GroupedResult(ResultBase):
+ """Result consisting of multiple results. It can be used by processors that
+ create multiple subtests for each test and want to pass all results back.
+ """
+
+ @staticmethod
+ def create(results):
+    """Creates a grouped result from a list of results, filtering out skipped
+    results. If all results are skipped, it returns SKIPPED.
+
+ Args:
+ results: list of pairs (test, result)
+ """
+ results = [(t, r) for (t, r) in results if not r.is_skipped]
+ if not results:
+ return SKIPPED
+ return GroupedResult(results)
+
+ def __init__(self, results):
+ self.results = results
+
+ @property
+ def is_grouped(self):
+ return True
+
+
+class SkippedResult(ResultBase):
+  """Result without any meaningful value. Used primarily to inform the test
+  processor that its test wasn't executed.
+ """
+
+ @property
+ def is_skipped(self):
+ return True
+
+
+SKIPPED = SkippedResult()
+
+
+class RerunResult(Result):
+  """Result generated from several reruns of the same test. It's a subclass
+  of Result since the result of a rerun is the result of the last run. In
+  addition to the normal result data it contains the results of all reruns.
+ """
+ @staticmethod
+ def create(results):
+    """Creates a RerunResult from a list of results. The list cannot be empty.
+    If it has only one element, that element is returned directly.
+ """
+ assert results
+
+ if len(results) == 1:
+ return results[0]
+ return RerunResult(results)
+
+ def __init__(self, results):
+    """The RerunResult's `has_unexpected_output` and `output` are taken from
+    the last result in the passed list.
+ """
+ assert results
+
+ last = results[-1]
+ super(RerunResult, self).__init__(last.has_unexpected_output, last.output,
+ last.cmd)
+ self.results = results
+
+ @property
+ def is_rerun(self):
+ return True
diff --git a/src/v8/tools/testrunner/testproc/shard.py b/src/v8/tools/testrunner/testproc/shard.py
new file mode 100644
index 0000000..1caac9f
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/shard.py
@@ -0,0 +1,30 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from . import base
+
+
+class ShardProc(base.TestProcFilter):
+ """Processor distributing tests between shards.
+  It simply passes through every n-th test. To be deterministic it has to be
+  placed before any processor that generates tests dynamically.
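+
+  Example: with shards_count=3, shard 0 keeps tests 0, 3, 6, ..., shard 1
+  keeps tests 1, 4, 7, ..., and shard 2 keeps tests 2, 5, 8, ...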
+ """
+ def __init__(self, myid, shards_count):
+ """
+ Args:
+ myid: id of the shard within [0; shards_count - 1]
+ shards_count: number of shards
+ """
+ super(ShardProc, self).__init__()
+
+ assert myid >= 0 and myid < shards_count
+
+ self._myid = myid
+ self._shards_count = shards_count
+ self._last = 0
+
+ def _filter(self, test):
+ res = self._last != self._myid
+ self._last = (self._last + 1) % self._shards_count
+ return res
diff --git a/src/v8/tools/testrunner/testproc/variant.py b/src/v8/tools/testrunner/testproc/variant.py
new file mode 100644
index 0000000..dba1af9
--- /dev/null
+++ b/src/v8/tools/testrunner/testproc/variant.py
@@ -0,0 +1,68 @@
+# Copyright 2018 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from . import base
+from ..local.variants import ALL_VARIANTS, ALL_VARIANT_FLAGS
+from .result import GroupedResult
+
+
+STANDARD_VARIANT = set(["default"])
+
+
+class VariantProc(base.TestProcProducer):
+ """Processor creating variants.
+
+  For each test it keeps a generator that yields (variant, flags, id suffix)
+  tuples. It produces variants one at a time, waiting for the result of one
+  variant before creating the next variant of the same test.
+  It maintains the order of the variants passed to __init__.
+
+  There are some cases when a particular variant of a test is not valid. To
+  drop such subtests, a StatusFileFilterProc should be placed somewhere after
+  the VariantProc.
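+
+  Illustrative run with variants ['default', 'stress']: the 'default'
+  subtest is sent first, and only once its result arrives is the 'stress'
+  subtest created and sent.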
+ """
+
+ def __init__(self, variants):
+ super(VariantProc, self).__init__('VariantProc')
+ self._next_variant = {}
+ self._variant_gens = {}
+ self._variants = variants
+
+ def setup(self, requirement=base.DROP_RESULT):
+ super(VariantProc, self).setup(requirement)
+
+ # VariantProc is optimized for dropping the result and it should be placed
+ # in the chain where it's possible.
+ assert requirement == base.DROP_RESULT
+
+ def _next_test(self, test):
+ gen = self._variants_gen(test)
+ self._next_variant[test.procid] = gen
+ self._try_send_new_subtest(test, gen)
+
+ def _result_for(self, test, subtest, result):
+ gen = self._next_variant[test.procid]
+ self._try_send_new_subtest(test, gen)
+
+ def _try_send_new_subtest(self, test, variants_gen):
+ for variant, flags, suffix in variants_gen:
+ subtest = self._create_subtest(test, '%s-%s' % (variant, suffix),
+ variant=variant, flags=flags)
+ self._send_test(subtest)
+ return
+
+ del self._next_variant[test.procid]
+ self._send_result(test, None)
+
+ def _variants_gen(self, test):
+ """Generator producing (variant, flags, procid suffix) tuples."""
+ return self._get_variants_gen(test).gen(test)
+
+ def _get_variants_gen(self, test):
+ key = test.suite.name
+ variants_gen = self._variant_gens.get(key)
+ if not variants_gen:
+ variants_gen = test.suite.get_variants_gen(self._variants)
+ self._variant_gens[key] = variants_gen
+ return variants_gen
diff --git a/src/v8/tools/testrunner/testrunner.isolate b/src/v8/tools/testrunner/testrunner.isolate
index e29f1df..56667c2 100644
--- a/src/v8/tools/testrunner/testrunner.isolate
+++ b/src/v8/tools/testrunner/testrunner.isolate
@@ -7,6 +7,7 @@
'../run-tests.py',
],
'files': [
+ '<(PRODUCT_DIR)/v8_build_config.json',
'../run-tests.py',
'./'
],
@@ -20,12 +21,5 @@
],
},
}],
- ['is_gn==1', {
- 'variables': {
- 'files': [
- '<(PRODUCT_DIR)/v8_build_config.json',
- ],
- },
- }],
],
}
diff --git a/src/v8/tools/testrunner/utils/dump_build_config.py b/src/v8/tools/testrunner/utils/dump_build_config.py
index bd57b5f..b691bb3 100644
--- a/src/v8/tools/testrunner/utils/dump_build_config.py
+++ b/src/v8/tools/testrunner/utils/dump_build_config.py
@@ -15,7 +15,7 @@
import os
import sys
-assert len(sys.argv) > 1
+assert len(sys.argv) > 2
def as_json(kv):
assert '=' in kv
@@ -23,4 +23,4 @@
return k, json.loads(v)
with open(sys.argv[1], 'w') as f:
- json.dump(dict(as_json(kv) for kv in sys.argv[2:]), f)
+ json.dump(dict(map(as_json, sys.argv[2:])), f)
diff --git a/src/v8/tools/testrunner/utils/dump_build_config_gyp.py b/src/v8/tools/testrunner/utils/dump_build_config_gyp.py
new file mode 100644
index 0000000..7f72627
--- /dev/null
+++ b/src/v8/tools/testrunner/utils/dump_build_config_gyp.py
@@ -0,0 +1,54 @@
+# Copyright 2017 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+"""The same as dump_build_config.py but for gyp legacy.
+
+Expected to be called like:
+dump_build_config.py path/to/file.json [key1=value1 ...]
+
+Raw gyp values are supported - they will be tranformed into valid json.
+"""
+# TODO(machenbach): Remove this when gyp is deprecated.
+
+import json
+import os
+import sys
+
+assert len(sys.argv) > 2
+
+
+GYP_GN_CONVERSION = {
+ 'is_component_build': {
+ 'shared_library': 'true',
+ 'static_library': 'false',
+ },
+ 'is_debug': {
+ 'Debug': 'true',
+ 'Release': 'false',
+ },
+}
+
+DEFAULT_CONVERSION = {
+ '0': 'false',
+ '1': 'true',
+ 'ia32': 'x86',
+}
+
+def gyp_to_gn(key, value):
+ value = GYP_GN_CONVERSION.get(key, DEFAULT_CONVERSION).get(value, value)
+ value = value if value in ['true', 'false'] else '"{0}"'.format(value)
+ return value
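+
+# Examples of the conversion (keys and values are illustrative):
+#   gyp_to_gn('is_debug', 'Debug')                    -> 'true'
+#   gyp_to_gn('is_component_build', 'static_library') -> 'false'
+#   gyp_to_gn('v8_target_cpu', 'ia32')                -> '"x86"'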
+
+def as_json(kv):
+ assert '=' in kv
+ k, v = kv.split('=', 1)
+ v2 = gyp_to_gn(k, v)
+ try:
+ return k, json.loads(v2)
+ except ValueError as e:
+ print(k, v, v2)
+ raise e
+
+with open(sys.argv[1], 'w') as f:
+ json.dump(dict(map(as_json, sys.argv[2:])), f)