Attacks Functions

Available functions:

  • fgsm(model, x, y, epsilon=0.01): Fast Gradient Sign Method (FGSM) attack.

  • pgd(model, x, y, epsilon=0.01, alpha=0.01, num_steps=10): Projected Gradient Descent (PGD) attack.

  • bim(model, x, y, epsilon=0.01, alpha=0.01, num_steps=10): Basic Iterative Method (BIM) attack.

  • cw(model, x, y, epsilon=0.01, c=1, kappa=0, num_steps=10, alpha=0.01): Carlini & Wagner (C&W) attack.

  • deepfool(model, x, y, num_steps=10): DeepFool attack.

  • jsma(model, x, y, theta=0.1, gamma=0.1, num_steps=10): Jacobian-based Saliency Map Attack (JSMA).
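
The usage sketches in the sections below assume these six functions are importable from a deepdefend.attacks module; that import path, and the example data shapes, are assumptions rather than something stated on this page, so adjust them to match your installed version.

    # Assumed import path for the examples below; adjust if your install differs.
    from deepdefend.attacks import fgsm, pgd, bim, cw, deepfool, jsma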


FGSM

Fast Gradient Sign Method (FGSM) attack.

Parameters:
    model (tensorflow.keras.Model): The target model to attack.
    x (numpy.ndarray): The input example to attack.
    y (numpy.ndarray): The true labels of the input example.
    epsilon (float): The magnitude of the perturbation (default: 0.01).

Returns:
    adversarial_example (numpy.ndarray): The perturbed input example.
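
A minimal sketch of calling fgsm with a toy Keras model and random stand-in data; the deepdefend.attacks import path and the one-hot label format are assumptions, so substitute your own trained model and dataset.

    import numpy as np
    import tensorflow as tf
    from deepdefend.attacks import fgsm  # assumed import path

    # Toy classifier standing in for a trained model.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")

    # Random stand-in inputs and one-hot labels (assumed label format).
    x_batch = np.random.rand(8, 28, 28, 1).astype("float32")
    y_batch = tf.keras.utils.to_categorical(np.random.randint(0, 10, size=8), 10)

    # Single-step FGSM perturbation of magnitude epsilon.
    x_adv = fgsm(model, x_batch, y_batch, epsilon=0.05)
    print(x_adv.shape)  # same shape as x_batch

In practice you would pass real validation images and their labels, then compare the model's predictions on x_batch and x_adv to see how often the attack succeeds.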

PGD

Projected Gradient Descent (PGD) attack.

Parameters:
    model (tensorflow.keras.Model): The target model to attack.
    x (numpy.ndarray): The input example to attack.
    y (numpy.ndarray): The true labels of the input example.
    epsilon (float): The maximum magnitude of the perturbation (default: 0.01).
    alpha (float): The step size for each iteration (default: 0.01).
    num_steps (int): The number of PGD iterations (default: 10).

Returns:
    adversarial_example (numpy.ndarray): The perturbed input example.
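
Continuing the sketch from the FGSM section (same assumed model, x_batch and y_batch), pgd adds a per-step size and an iteration count; the accumulated perturbation is expected to stay within the epsilon bound.

    from deepdefend.attacks import pgd  # assumed import path

    # Iterative attack: num_steps gradient steps of size alpha, kept within epsilon.
    x_adv = pgd(model, x_batch, y_batch, epsilon=0.05, alpha=0.01, num_steps=20)
    print(float(np.abs(x_adv - x_batch).max()))  # should not exceed epsilon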

BIM

Basic Iterative Method (BIM) attack.

Parameters:
    model (tensorflow.keras.Model): The target model to attack.
    x (numpy.ndarray): The input example to attack.
    y (numpy.ndarray): The true labels of the input example.
    epsilon (float): The maximum magnitude of the perturbation (default: 0.01).
    alpha (float): The step size for each iteration (default: 0.01).
    num_steps (int): The number of BIM iterations (default: 10).

Returns:
    adversarial_example (numpy.ndarray): The perturbed input example.
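
bim takes the same arguments as pgd; a sketch under the same assumptions as the FGSM example above.

    from deepdefend.attacks import bim  # assumed import path

    # Same call shape as pgd; alpha and num_steps control how much of the
    # epsilon budget the iterations can use.
    x_adv = bim(model, x_batch, y_batch, epsilon=0.05, alpha=0.005, num_steps=10)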

CW

Carlini & Wagner (C&W) attack.

Parameters:
    model (tensorflow.keras.Model): The target model to attack.
    x (numpy.ndarray): The input example to attack.
    y (numpy.ndarray): The true labels of the input example.
    epsilon (float): The maximum magnitude of the perturbation (default: 0.01).
    c (float): The weight of the L2 norm of the perturbation (default: 1).
    kappa (float): The confidence parameter (default: 0).
    num_steps (int): The number of C&W iterations (default: 10).
    alpha (float): The step size for each iteration (default: 0.01).

Returns:
    adversarial_example (numpy.ndarray): The perturbed input example.
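
A sketch of cw with the documented parameters, under the same assumptions as the earlier examples; c weights the L2 size penalty and kappa sets the confidence margin, as described above.

    from deepdefend.attacks import cw  # assumed import path

    # More iterations generally give the C&W optimisation more room to converge.
    x_adv = cw(model, x_batch, y_batch, epsilon=0.05, c=1, kappa=0,
               num_steps=50, alpha=0.01)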

DeepFool

DeepFool attack.

Parameters:
    model (tensorflow.keras.Model): The target model to attack.
    x (numpy.ndarray): The input example to attack.
    y (numpy.ndarray): The true labels of the input example.
    num_steps (int): The number of DeepFool iterations (default: 10).

Returns:
    adversarial_example (numpy.ndarray): The perturbed input example.
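
A sketch of deepfool under the same assumptions as the earlier examples, with a quick check of how many predictions the attack flips.

    from deepdefend.attacks import deepfool  # assumed import path

    x_adv = deepfool(model, x_batch, y_batch, num_steps=10)

    # Compare clean and adversarial predictions.
    clean = model.predict(x_batch, verbose=0).argmax(axis=1)
    adv = model.predict(x_adv, verbose=0).argmax(axis=1)
    print((clean != adv).sum(), "of", len(x_batch), "predictions changed")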

JSMA

Jacobian-based Saliency Map Attack (JSMA).

Parameters:
    model (tensorflow.keras.Model): The target model to attack.
    x (numpy.ndarray): The input example to attack.
    y (numpy.ndarray): The true labels of the input example.
    theta (float): The threshold for selecting pixels (default: 0.1).
    gamma (float): The step size for each iteration (default: 0.1).
    num_steps (int): The number of JSMA iterations (default: 10).

Returns:
    adversarial_example (numpy.ndarray): The perturbed input example.
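
A sketch of jsma with the documented defaults, under the same assumptions as the earlier examples.

    from deepdefend.attacks import jsma  # assumed import path

    # theta thresholds which pixels are selected; gamma is the per-iteration step size.
    x_adv = jsma(model, x_batch, y_batch, theta=0.1, gamma=0.1, num_steps=10)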