Every test file must end with the `_test` suffix, and every test method must start with the `test_` prefix. Take a look at the add group tests for reference.
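As a quick sketch of the naming convention (the file and function names are illustrative, not taken from the codebase), a file such as `add_group_test.py` would contain:

```python
# add_group_test.py  (hypothetical file name following the `_test` suffix rule)
def test_add_group() -> None:  # test functions use the `test_` prefix
    ...
```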
Coverage results can be found at `<module-path>/coverage`. For example, `api/coverage`.

The `@mocks` decorator allows us to populate the database with test data using AWS parameters. A clean database will be created and populated for each parameter provided via `@utils.parametrize`.
We go deeper into these decorators and helper methods in the next sections.

To load data into DynamoDB, provide `IntegratesAws.dynamodb` in the `@mocks` decorator. This parameter is an instance of `IntegratesDynamodb`, a helper class to populate the main tables with valid data:

```python
@mocks(
    aws=IntegratesAws(
        dynamodb=IntegratesDynamodb(
            organizations=[OrganizationFaker(id=ORG_ID)],
            stakeholders=[
                StakeholderFaker(email=ORGANIZATION_MANAGER_EMAIL),
                StakeholderFaker(email=ADMIN_EMAIL),
            ],
            organization_access=[
                OrganizationAccessFaker(
                    organization_id=ORG_ID,
                    email=ORGANIZATION_MANAGER_EMAIL,
                    state=OrganizationAccessStateFaker(
                        has_access=True, role="organization_manager"
                    ),
                ),
                OrganizationAccessFaker(
                    organization_id=ORG_ID,
                    email=ADMIN_EMAIL,
                    state=OrganizationAccessStateFaker(has_access=True, role="admin"),
                ),
            ],
        ),
    ),
    others=[
        Mock(logs_utils, "cloudwatch_log", "sync", None),
    ],
)
```
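When the decorator is attached to a test, the faked records are available from the start of the test body. The sketch below is illustrative only: the test name, the `async` style, and the `get_organization` call are hypothetical placeholders, not part of the Integrates API.

```python
# Hypothetical usage sketch: the @mocks block wraps the test function directly.
@mocks(
    aws=IntegratesAws(
        dynamodb=IntegratesDynamodb(organizations=[OrganizationFaker(id=ORG_ID)]),
    ),
)
async def test_organization_is_populated() -> None:
    # Placeholder call; use the loader or resolver your test actually exercises.
    organization = await get_organization(ORG_ID)
    assert organization.id == ORG_ID
```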
Fakers return entities with valid default values, so you only need to override the fields that matter for your test (for example, the state in `OrganizationAccessStateFaker` or the email in `StakeholderFaker`). A faker's name gives a hint about where it should be used in the `IntegratesDynamodb` parameters.

If the faker you need does not exist yet, add it to the `testing.fakers` module and implement a new parameter for the faker in the `testing.aws.dynamodb` module.
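To make the override idea concrete, here is a small sketch (the values are illustrative, not from the Integrates codebase):

```python
# Override only the fields the test cares about; everything else keeps its faked default.
stakeholder = StakeholderFaker(email="manager@example.com")
revoked_access = OrganizationAccessStateFaker(has_access=False)
```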
The `others` parameter is a way to list all the startup mocks that you require in your test. In the example above, we are mocking the `cloudwatch_log` function from the `logs_utils` module to avoid calling CloudWatch directly and always return `None`. `Mock` is a helper class that creates a mock based on a module, a function or variable name, a mode (sync or async), and a return value.
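A couple of `Mock` declarations for illustration; the first is taken from the example above, while the second module and function names and the return value are hypothetical:

```python
# Illustrative Mock declarations to pass via the `others` parameter:
startup_mocks = [
    # From the example above: the sync cloudwatch_log function will return None.
    Mock(logs_utils, "cloudwatch_log", "sync", None),
    # Hypothetical: an async function that resolves to a canned value.
    Mock(reports_module, "generate_report", "async", {"status": "ready"}),
]
```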
S3 buckets are created automatically when `@mocks` is called, and no more actions are required. You can use the buckets in your tests and also load files into buckets automatically before every test run:

```python
@mocks(aws=IntegratesAws(s3=IntegratesS3(autoload=True)))
```
With `autoload=True`, the `<test_name>` directory is searched to load files into the corresponding buckets. For example, a `README.md` file will be loaded into `integrates.dev` for `test_name_2`, and a different `README.md` file will be loaded into Integrates for `test_name_3`. This approach ensures both isolation and simplicity in the tests.
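Putting it together, a sketch of a test that enables autoload (the test name and file layout are illustrative):

```python
# Hypothetical test: files placed under test_data/test_upload_report/ are loaded
# into the corresponding buckets before the test body runs.
@mocks(aws=IntegratesAws(s3=IntegratesS3(autoload=True)))
async def test_upload_report() -> None:
    ...
```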
Use `@parametrize` to include several test cases:

```python
from integrates.testing.utils import parametrize


@parametrize(
    args=["arg", "expected"],
    cases=[
        ["a", "A"],
        ["b", "B"],
        ["c", "C"],
    ],
)
def test_capitalize(arg: str, expected: str) -> None:
    ...
```
Use `raises` to handle errors during tests:

```python
from integrates.testing.utils import raises


def test_fail() -> None:
    with raises(ValueError):
        ...
```
Use `get_file_abs_path` to get the file's absolute path in the `test_data/<test_name>` directory:

```python
from integrates.testing.utils import get_file_abs_path


def test_name_1() -> None:
    abs_path = get_file_abs_path("file_1.txt")
    assert "/test_data/test_name_1/file_1.txt" in abs_path  # True
```
Use `@freeze_time` when you want to set the execution time (useful for time-based features):

```python
from integrates.testing.utils import freeze_time


@freeze_time("2024-01-01")
def test_accepted_until() -> None:
    ...
```
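Assuming `freeze_time` behaves like the usual freezegun-style helpers (an assumption, not something stated above), time lookups inside the decorated test return the frozen date:

```python
from datetime import datetime

from integrates.testing.utils import freeze_time


@freeze_time("2024-01-01")
def test_now_is_frozen() -> None:
    # Assumption: datetime.now() is patched to the frozen date for the duration of the test.
    assert datetime.now().date().isoformat() == "2024-01-01"
```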
Run the tests with:

```sh
integrates-back-test <module> [test-1] [test-2] [test-n]...
```

`<module>` is required and can be any Integrates module. `[test-n]` is optional and can be any test within that module.
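For example (the test names are illustrative):

```sh
# Run every test in the api module
integrates-back-test api

# Run only two specific tests from that module
integrates-back-test api test_add_group test_remove_group
```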
To debug, press F5 (while the cursor is within a file from the integrates workspace) and select `Debug integrates tests (specific module)` from the dropdown. Before launching, the IDE will ask you to select the module you want to test. After this, a debugging console will be prepared with the flakes development environment, which may take a while and may even fail on the first attempt. After a successful launch, you will be able to set breakpoints and inspect the code as you would in any other debugging session.