Features

Six tools, one workflow.

AI Gap Analysis for going faster. Assessment workspace for doing. Team chat for coordinating. Rating-rule violation detection to catch mistakes. One-click PDF & PowerPoint export for sharing. Explorer for learning. Each stands on its own — together they reshape how Automotive SPICE® assessments are run.

Feature 1 · flagship

AI Gap Analysis

Upload procedures, plans, and work instructions. Assessoris AI reads every page, evaluates every Base Practice against the evidence, and drafts findings with document citations you can verify.

  • ✓ Native PDF parsing — server-side DOCX / TXT fallback
  • ✓ Every finding cites the specific document and section
  • ✓ Weaknesses flagged explicitly — no hedging
  • ✓ AI-origin findings visually distinct (purple badge)
  • ✓ You own the final assessment — nothing auto-imports
AI GAP ANALYSIS · SUP.1
4 documents analysed · 12 findings
1
SUP.1.BP1
Ensure independence of QA
[AI-C-041] Procedure §4 assigns QA to the Advanced Quality Engineer reporting to the Regional Director — independence satisfied.
7
SUP.1.BP7
Escalate non-conformances
[AI-W-005] Escalation mentioned but triggers, thresholds, and management levels not specified. Referenced procedure not part of this evidence set.
Feature 2

Collaborative assessment workspace

N/P/L/F ratings aggregate live into PA and capability-level indicators. Findings attach to any BP or GP. Everyone sees everything in real time.

  • ✓ Live N/P/L/F ratings on every BP and GP
  • ✓ Four finding types: Comment · Weakness · Strength · Question
  • ✓ Evidence management with @-mention references in findings
  • ✓ Automatic capability-level calculator (L1 / L2 / L3)
  • ✓ Real-time sync so every assessor stays up to date
ASSESSMENT · SWE.1
SWE.1 · SOFTWARE REQUIREMENTS ANALYSIS
2
SWE.1.BP2
Analyse software requirements
N P L F
[W-007] Traceability to system architecture incomplete — 3 elements unlinked.
[C-012] Priority scheme documented in @EV-003. Criteria stakeholder-agreed.
3
SWE.1.BP3
Analyse impact on the operating environment
N P L F
Feature 3

Team chat — per assessment

Every assessment has its own messenger-style chat. Talk through a finding, clarify an evidence reference, flag a question for the lead assessor — without switching to email, Slack, or Teams and losing the context you were just reading.

  • ✓ Scoped per assessment — no cross-talk across projects
  • ✓ Unread message counts per assessment
  • ✓ @-mention evidence items directly in chat
  • ✓ Same real-time sync as the rest of the app
CHAT · "CAR-X AUDIT 2026"
DK
Is SYS.2.BP2's traceability evidenced in the new release plan?
Darius · 09:24
RS
Yes — linked @EV-014. Mapping in §5.3. Should be L.
Rafael · 09:26
DK
Perfect. Flag AI-W-012 for discussion in kickoff.
Darius · 09:27
Feature 4

Rating-rule violation detection

Every Automotive SPICE® rating rule is tracked, cross-linked, and auto-checked. The moment ratings conflict with a rule, a warning fires on the exact BP — with the rule text, the conflicting ratings, and a one-click override if you disagree. No more spreadsheet gymnastics.

  • ✓ Full rating rules reference with cross-references between rules
  • ✓ Warnings fire in real time as ratings change
  • ✓ Each warning links back to the rule text and the affected BPs and GPs
  • ✓ Assessor manual override with audit trail
  • ✓ Global "all warnings" view — scan risks across the whole assessment
RATING RULES · LIVE WARNINGS
PA2.1.RL.9 · SHALL DOWNRATE
If the objectives or the strategy for the performance of the process (GP 2.1.1) is downrated, then GP 2.1.2 shall be downrated.
⚠ TRIGGERED
GP 2.1.1 rated L but GP 2.1.2 still rated F. Downrate GP 2.1.2 or override with justification.
PA2.2.RL.7 · SHALL NOT RATE HIGHER
If GP 2.2.2 is downrated, then GP 2.2.3 shall not be rated higher than GP 2.2.2.
⚠ CHECK
GP 2.2.2 = P → GP 2.2.3 cap is P (currently L). Review before closing.
SWE.1.RL.3 · SHALL NOT DOWNRATE
If software requirements are not derived from system requirements but from stakeholder requirements which do not affect system requirements or the system architecture, and this is agreed with software and system representatives, then SWE.1.BP1 shall not be downrated.
INFO
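Rules of the "shall downrate" / "shall not rate higher" kind reduce to predicates over the current ratings, so a checker can stay declarative. The sketch below encodes the two triggered rules from the panel above; the rule table, predicate encoding, and function names are assumptions about one possible design, not Assessoris internals:

```python
# Illustrative declarative rating-rule checker. "Downrated" means any
# rating below F on the ordinal scale N < P < L < F.
ORDER = {"N": 0, "P": 1, "L": 2, "F": 3}

def downrated(rating: str) -> bool:
    return ORDER[rating] < ORDER["F"]

# Each rule: (id, predicate over the ratings dict that must hold).
RULES = [
    # PA2.1.RL.9: if GP 2.1.1 is downrated, GP 2.1.2 shall be downrated too.
    ("PA2.1.RL.9",
     lambda rt: not downrated(rt["GP2.1.1"]) or downrated(rt["GP2.1.2"])),
    # PA2.2.RL.7: if GP 2.2.2 is downrated, GP 2.2.3 shall not exceed it.
    ("PA2.2.RL.7",
     lambda rt: not downrated(rt["GP2.2.2"])
                or ORDER[rt["GP2.2.3"]] <= ORDER[rt["GP2.2.2"]]),
]

def violations(ratings: dict[str, str]) -> list[str]:
    """Return the ids of all rules the current ratings violate."""
    return [rule_id for rule_id, ok in RULES if not ok(ratings)]

print(violations({"GP2.1.1": "L", "GP2.1.2": "F",
                  "GP2.2.2": "P", "GP2.2.3": "L"}))
# ['PA2.1.RL.9', 'PA2.2.RL.7']
```

Re-evaluating the rule table on each rating change is what lets a warning fire the moment a conflict appears, and an override is then just a recorded exception to one rule id.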
Feature 5

One-click PDF & PowerPoint export

When the assessment is done, the report is done. One click turns every BP rating, finding, and capability-level summary into a polished PDF for the audit file — or a ready-to-present PowerPoint deck for the closing meeting. No re-typing, no copy-paste from spreadsheets.

  • ✓ Per-process and whole-assessment reports
  • ✓ Brand-consistent layout — cover page, executive summary, per-BP detail
  • ✓ Findings grouped by type (Strength · Weakness · Comment · Question)
  • ✓ Capability-level chart with PA aggregates auto-generated
  • ✓ PowerPoint export drops each process on its own slide for the closing meeting
  • ✓ Share results with stakeholders in the exact format they expect
EXPORT · CAR-X AUDIT 2026
READY TO EXPORT · 9 processes · 41 findings
PDF REPORT
Full audit report · cover · exec summary · per-BP detail · capability chart
⬇ EXPORT
POWERPOINT DECK
One slide per process · ready to present in the closing meeting
⬇ EXPORT
CSV DATA
Raw ratings, findings, and evidence records — one row per item
⬇ EXPORT
Feature 6

Automotive SPICE® Explorer

A live, searchable pocket guide to the entire PAM 4.0. Every process group, every BP, every GP, every process attribute, every related note — right at your fingertips. Personal notes and favourites on every item.

  • ✓ All 38 processes across 12 groups incl. SEC & MLE
  • ✓ All BPs, GPs, process attributes and related notes
  • ✓ Personal notes & favourites on every item
  • ✓ Fast full-text search across BPs, GPs, and notes
  • ✓ Works offline once the page is loaded — true pocket guide
SWE — SOFTWARE ENGINEERING
SWE.1 Software Requirements Analysis
SWE.2 Software Architectural Design
SWE.3 Software Detailed Design
SWE.4 Software Unit Verification
SWE.5 Software Component Verification
SWE.6 Software Verification
SEC — CYBERSECURITY ENGINEERING
SEC.1 Cybersecurity Requirements Elicitation
SEC.2 Cybersecurity Implementation
SEC.3 Risk Treatment Verification
SEC.4 Risk Treatment Validation
MLE — MACHINE LEARNING ENGINEERING
MLE.1 Machine Learning Requirements Analysis
MLE.2 Machine Learning Architecture
MLE.3 Machine Learning Training
MLE.4 Machine Learning Model Testing

See it on your own documents.

The fastest way to understand Assessoris is to watch the AI read your own procedures. Book a demo and bring a sample.

Request a demo