Conversation

@dheerajoruganty
Contributor

Amazon Bedrock AgentCore Samples Pull Request

Important

  1. We strictly follow an issue-first approach; please open an issue relating to this pull request first.
  2. Once this pull request is ready for review, please attach the `review ready` label to it. Only PRs with the `review ready` label will be reviewed.

Issue number:

Concise description of the PR

Changes to ..., because ...

User experience

Please share what the user experience looks like before and after this change

Checklist

If an item doesn't apply to your change, leave it unchecked.

  • I have reviewed the contributing guidelines
  • I have added my name to CONTRIBUTORS.md
  • I have checked that there are no other open pull requests for the same update/change
  • I am uploading a dataset
  • I have documented the Introduction, Architecture Diagram, Prerequisites, Usage, Sample Prompts, and Clean Up steps in the example README
  • I agree to resolve any issues created for this example in the future.
  • I have performed a self-review of this change
  • Changes have been tested
  • Changes are documented

Acknowledgment

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of the project license.

dheerajoruganty and others added 30 commits August 26, 2025 22:20
Based on official ASH documentation, the correct approach is:
1. Install ASH using pip (not uv tool)
2. Run ash command directly (not via uv tool run)

This follows the GitHub Actions example from the ASH documentation.
- Install bandit, semgrep, detect-secrets, checkov via pip
- This ensures scanners run instead of being SKIPPED due to missing uv
- Renamed problematic code files to trigger fresh workflow runs
- Test updated ASH workflow with manual scanner dependency installation
- Should now run all scanners instead of skipping due to missing UV
- Remove all emojis from workflow messages for cleaner output
- Install additional scanners (cfn-nag, grype, syft) to reduce MISSING status
- Filter out redundant 'Time since scan' from reports
- Improve scanner coverage and reduce noise in results
- Change back to updating existing comment instead of creating new ones
- Add commit SHA to comment for tracking
- Remove both 'Time since scan' and 'Report generated' lines
- Set continue-on-error: false so workflow fails when ASH finds issues
- This provides better UX with single updated comment per PR
- Set continue-on-error: true for ASH scan so workflow continues to comment step
- Move the failure logic to the final step after commenting
- This ensures PR comments are always posted, then workflow fails appropriately
- Better UX: users see security results even when workflow fails
- Use grep with extended regex to remove redundant timestamps
- Filter out both 'Time since scan' and footer attribution lines
- Should produce cleaner comments without timing noise
- Capture terminal output which correctly shows FAILED status
- Extract ASCII table from terminal (shows FAILED not SKIPPED)
- Combine with markdown report for detailed findings
- Removes all redundant timestamps and metadata
- Provides accurate scanner status in PR comments
- Use line numbers to extract exact table section
- Add fallback if markers not found
- Handle special characters in terminal output properly
- Prevent sed errors from failing the workflow
- Ensure terminal output file is created with proper error handling
- Add debugging to track table extraction
- Use head -1 to get first match only
- Check TABLE_END > TABLE_START before extraction
- Add diagnostics to troubleshoot extraction issues
- Use line numbers for robust extraction of hotspots and findings
- Add || true to prevent sed/grep failures from breaking workflow
- Remove debug output since table extraction is working
- Use tail to extract from findings to end of file
- Successfully shows FAILED status in scanner table
- Use terminal output for correct scanner status table (FAILED not SKIPPED)
- Use markdown report for detailed findings and hotspots
- Skip broken metadata and scanner table from markdown
- Best of both worlds: accurate status + comprehensive findings
- Remove complex terminal output capture and extraction logic
- Use markdown report directly with timestamp filtering
- Keep scanner dependency installations to prevent SKIPPED status
- Maintain single comment updating functionality
- Use 'ash --mode container' instead of local mode
- Remove complex scanner installation steps
- Container comes with all scanners pre-installed
- Simplify paths to use .ash/ directly
- Should eliminate SKIPPED scanner status issues
- Copy changed files to temp directory before scanning
- Run 'ash --mode container' on changed files only
- Update paths to use temp directory structure
- Combines container mode reliability with changed-files filtering
- Should resolve both SKIPPED scanners and full-repo scanning issues
- Add debug output to understand why terminal table extraction might be failing
- Container mode is working perfectly (shows FAILED status correctly)
- Need to debug why workflow comment still shows SKIPPED
- Will show table boundaries and extraction logic in workflow logs
- Strip ANSI color escape codes from terminal output before display
- Container mode works perfectly showing correct FAILED status
- Table extraction working (lines 2016-2031)
- Will now display clean scanner table showing:
  - bandit: FAILED (not SKIPPED)
  - detect-secrets: FAILED (not SKIPPED)
  - grype: FAILED (not SKIPPED)
  - semgrep: FAILED (not SKIPPED)
- Add comprehensive scan metadata (project, execution time, ASH version)
- Add detailed explanations of severity levels, result types, and thresholds
- Convert terminal table to proper markdown table format (no more code blocks)
- Table will now render properly in GitHub with correct FAILED status
- Much more informative and professional PR comments
- Includes all the context users need to understand scan results
- Add comprehensive explanations for all severity levels (S/C/H/M/L/I) with examples
- Expand MED to 'MEDIUM (MED)' and explain all threshold levels
- Clarify what each scanner result means (PASSED/FAILED/MISSING/SKIPPED/ERROR)
- Add detailed threshold explanations (CRITICAL/HIGH/MEDIUM/LOW/ALL)
- Explain threshold source indicators: (g)=global, (c)=config, (s)=scanner
- Include specific examples of vulnerability types for each severity level
- Make the scan results much more user-friendly and educational
- Include both commit hash and timestamp in comment header
- Format: 'Latest scan for commit: abc1234 | Updated: 2025-08-27 20:30:42 UTC'
- Workflow updates existing comment instead of creating multiple
- Users can see exactly when scan was last run and for which commit
- Clear tracking of comment update history
- Add unique HTML comment identifier: <!-- ASH-SECURITY-SCAN-COMMENT -->
- Search specifically for this identifier to find the exact ASH comment
- Guarantees the same comment is updated every time, never creates duplicates
- More reliable than searching by text content which could change
- Ensures clean PR comment history with single security scan comment
- Log all bot comments to understand what exists
- Expand search patterns to catch any ASH-related comments
- Add detailed console logging for update/create actions
- Will help identify why both update and create are happening
- Temporary debugging to resolve duplicate comment issue
- Find ALL ASH security scan comments, not just the first one
- Use the most recent comment (highest ID) for updates
- Delete any duplicate/older ASH comments automatically
- Ensures clean PR with only one security scan comment
- Self-healing: cleans up duplicates from previous workflow runs
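The marker-based single-comment logic described in the commit messages above can be sketched as a pure helper. The marker string comes from the commits; the function name `planCommentUpdate` is illustrative, and in the actual workflow the result would feed the standard `actions/github-script` Octokit calls (`github.rest.issues.updateComment`, `createComment`, `deleteComment`):

```javascript
// Marker embedded in the PR comment so later runs can find it reliably
// (taken from the commit messages above).
const MARKER = '<!-- ASH-SECURITY-SCAN-COMMENT -->';

// Given all comments on a PR, return the comment to update (most recent,
// i.e. highest id) and any older duplicates to delete. Illustrative helper.
function planCommentUpdate(comments) {
  const ashComments = comments.filter((c) => c.body && c.body.includes(MARKER));
  if (ashComments.length === 0) {
    return { target: null, duplicates: [] };
  }
  const sorted = [...ashComments].sort((a, b) => b.id - a.id);
  return { target: sorted[0], duplicates: sorted.slice(1) };
}
```

Searching for a hidden HTML comment is more robust than matching visible text, since the visible report body changes on every scan.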
function runCommand(filename) {
  const { exec } = require('child_process');
  // Never pass user input directly to exec
  exec(`cat ${filename}`, (error, stdout, stderr) => {
    console.log(stdout);
  });
}

Check warning

Code scanning / CodeQL

Unnecessary use of `cat` process Medium

Unnecessary use of cat process. Can be replaced with: fs.readFile(filename, (error, stdout, stderr) => {...})
function hashPassword(password) {
  const crypto = require('crypto');
  // MD5 is cryptographically broken
  return crypto.createHash('md5').update(password).digest('hex');
}

Check failure

Code scanning / CodeQL

Use of password hash with insufficient computational effort High

Password from an access to `password` is hashed insecurely.
// localStorage with sensitive data
function storeSensitiveData(creditCard, ssn) {
  // Never store sensitive data in localStorage
  localStorage.setItem('creditCard', creditCard);
  localStorage.setItem('ssn', ssn);
}

Check failure

Code scanning / CodeQL

Clear text storage of sensitive information High

This stores sensitive data returned by an access to `creditCard` as clear text.
function storeSensitiveData(creditCard, ssn) {
  // Never store sensitive data in localStorage
  localStorage.setItem('creditCard', creditCard);
  localStorage.setItem('ssn', ssn);
}

Check failure

Code scanning / CodeQL

Clear text storage of sensitive information High

This stores sensitive data returned by an access to `ssn` as clear text.
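One mitigation for the localStorage findings above, sketched under the assumption that the client only needs a display form of each value: persist a masked version and keep the raw data server-side. `maskCard`, `maskSsn`, and `storeDisplayData` are illustrative names, not part of the original code.

```javascript
// Keep only the last four digits for display purposes.
function maskCard(creditCard) {
  return `****-****-****-${String(creditCard).slice(-4)}`;
}

function maskSsn(ssn) {
  return `***-**-${String(ssn).slice(-4)}`;
}

// Store only the masked, non-sensitive display values client-side.
// The storage parameter is injectable so the logic can be tested.
function storeDisplayData(creditCard, ssn, storage = globalThis.localStorage) {
  storage.setItem('creditCardDisplay', maskCard(creditCard));
  storage.setItem('ssnDisplay', maskSsn(ssn));
}
```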
import hashlib

def hash_password(password: str):
    """Uses weak hashing algorithm."""
    # MD5 is cryptographically broken, never use for passwords
    return hashlib.md5(password.encode()).hexdigest()

Check failure

Code scanning / CodeQL

Use of a broken or weak cryptographic hashing algorithm on sensitive data High

Sensitive data (password) is used in a hashing algorithm (MD5) that is insecure for password hashing, since it is not a computationally expensive hash function.
import logging

def process_payment(card_number, cvv):  # def line truncated in the alert; name illustrative
    """Logs sensitive payment information."""
    # Never log sensitive data
    logging.info(f"Processing payment for card: {card_number}, CVV: {cvv}")

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information High

This expression logs sensitive data (private) as clear text.
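A sketch of the corresponding logging fix: redact sensitive fields before they reach the logger, keeping at most a non-identifying suffix. `redactPayment` is an illustrative helper, not part of the original code.

```javascript
// Build a log-safe message: only the last four card digits survive,
// and the CVV is never included.
function redactPayment(cardNumber, cvv) {
  const last4 = String(cardNumber).slice(-4);
  return `Processing payment for card ending in ${last4}, CVV: [REDACTED]`;
}
```

The same principle applies to the Python example: log `redactPayment`-style output instead of interpolating raw values into the f-string.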