The numpy code examples below cover a wide range of functionality: array creation and manipulation, linear algebra (e.g., SVD, eigendecomposition), statistics and probability distributions, hypothesis tests, distance metrics (e.g., Euclidean, Manhattan), and regression and classification metrics. They demonstrate numpy's versatility for data analysis and scientific computing, pulling in scipy, scikit-learn, and matplotlib where numpy alone is not enough.
- Creating Arrays:
import numpy as np
# Creating a 1D array
array_1d = np.array([1, 2, 3, 4, 5])
# Creating a 2D array
array_2d = np.array([[1, 2, 3], [4, 5, 6]])
- Array Operations:
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
# Element-wise addition
result_add = np.add(a, b)
# Element-wise multiplication
result_multiply = np.multiply(a, b)
- Array Slicing:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Slicing rows and columns
slice_1 = arr[0:2, 1:3] # Selects rows 0 and 1, columns 1 and 2
- Array Reshaping:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
# Reshaping array
reshaped_array = np.reshape(arr, (3, 2))
- Array Broadcasting:
import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6]])
b = np.array([10, 20, 30])
# Broadcasting addition
result = a + b
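As a sanity check on the broadcasting rules, a short sketch (array values chosen arbitrarily): trailing dimensions are aligned, and size-1 axes are stretched to match.

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
b = np.array([10, 20, 30])            # shape (3,) is treated as (1, 3)
# b is added to every row of a
result = a + b                        # shape (2, 3)
# A (2, 1) column broadcasts across columns instead
col = np.array([[100], [200]])
result_col = a + col
```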
- Array Transposition:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
# Transposing array
transposed_array = np.transpose(arr)
- Array Concatenation:
import numpy as np
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6]])
# Concatenating arrays
result = np.concatenate((a, b), axis=0)
- Array Randomization:
import numpy as np
# Generating random array
random_array = np.random.rand(3, 3)
- Array Reduction:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
# Summing array elements
result_sum = np.sum(arr)
- Array Comparison:
import numpy as np
a = np.array([1, 2, 3])
b = np.array([1, 4, 3])
# Comparing arrays element-wise
comparison_result = np.array_equal(a, b)
- Array Indexing with Boolean Arrays:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
# Indexing with boolean arrays
mask = arr > 2
result = arr[mask]
- Array Stacking:
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
# Stacking arrays horizontally
result_horizontal = np.hstack((a, b))
# Stacking arrays vertically
result_vertical = np.vstack((a, b))
- Matrix Multiplication:
import numpy as np
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
# Matrix multiplication
result = np.matmul(a, b)
- Array Iteration:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Iterating over array elements
for x in np.nditer(arr):
    print(x)
- Finding Unique Elements:
import numpy as np
arr = np.array([1, 2, 3, 1, 2, 4])
# Finding unique elements
unique_elements = np.unique(arr)
- Applying Functions Element-Wise:
import numpy as np
arr = np.array([1, 2, 3, 4])
# Applying function element-wise
result = np.sqrt(arr)
- Array Splitting:
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6])
# Splitting array into multiple sub-arrays
result = np.split(arr, [2, 4])
- Finding Maximum and Minimum Values:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
# Finding maximum and minimum values
max_value = np.max(arr)
min_value = np.min(arr)
- Creating Identity Matrix:
import numpy as np
# Creating identity matrix
identity_matrix = np.eye(3)
- Loading Data from File:
import numpy as np
# Loading data from file
data = np.loadtxt('data.txt', delimiter=',')
- Sorting Arrays:
import numpy as np
arr = np.array([3, 1, 2, 5, 4])
# Sorting array
sorted_array = np.sort(arr)
- Finding Indices of Maximum and Minimum Values:
import numpy as np
arr = np.array([3, 1, 5, 2, 4])
# Finding indices of maximum and minimum values
max_index = np.argmax(arr)
min_index = np.argmin(arr)
- Calculating Cumulative Sum and Product:
import numpy as np
arr = np.array([1, 2, 3, 4])
# Calculating cumulative sum and product
cumulative_sum = np.cumsum(arr)
cumulative_product = np.cumprod(arr)
- Finding Intersection and Union of Arrays:
import numpy as np
a = np.array([1, 2, 3, 4])
b = np.array([3, 4, 5, 6])
# Finding intersection and union of arrays
intersection = np.intersect1d(a, b)
union = np.union1d(a, b)
- Applying Custom Functions to Arrays:
import numpy as np
def custom_function(x):
    return x ** 2 + 1
arr = np.array([1, 2, 3, 4])
# Applying the function element-wise (since the function is vectorized,
# calling custom_function(arr) directly gives the same result)
result = np.apply_along_axis(custom_function, axis=0, arr=arr)
- Reshaping Arrays with Unknown Dimensions:
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6])
# Reshaping array with unknown dimension
reshaped_array = np.reshape(arr, (-1, 2))
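The -1 tells NumPy to infer that dimension from the array's total size; a minimal illustration (shapes only, values arbitrary):

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6])
# Only one dimension may be -1; NumPy solves for it so sizes match
two_cols = np.reshape(arr, (-1, 2))   # inferred shape (3, 2)
two_rows = np.reshape(arr, (2, -1))   # inferred shape (2, 3)
```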
- Creating Diagonal Matrices:
import numpy as np
# Creating diagonal matrices
diag_matrix = np.diag([1, 2, 3])
- Calculating Eigenvalues and Eigenvectors:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Calculating eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(arr)
- Calculating Dot Product of Arrays:
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
# Calculating dot product of arrays
dot_product = np.dot(a, b)
- Filling Arrays with Specific Values:
import numpy as np
# Filling array with specific values
filled_array = np.full((3, 3), 5)
- Calculating Element-Wise Exponential:
import numpy as np
arr = np.array([1, 2, 3])
# Calculating element-wise exponential
result = np.exp(arr)
- Calculating Element-Wise Logarithm:
import numpy as np
arr = np.array([1, 2, 3])
# Calculating element-wise logarithm
result = np.log(arr)
- Finding Nonzero Elements:
import numpy as np
arr = np.array([[1, 0, 2], [0, 3, 0]])
# Finding nonzero elements
nonzero_indices = np.nonzero(arr)
- Calculating Trigonometric Functions:
import numpy as np
arr = np.array([0, np.pi/2, np.pi])
# Calculating trigonometric functions
sin_values = np.sin(arr)
cos_values = np.cos(arr)
- Generating Meshgrid:
import numpy as np
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
# Generating meshgrid
X, Y = np.meshgrid(x, y)
- Calculating Element-Wise Square Root:
import numpy as np
arr = np.array([1, 4, 9])
# Calculating element-wise square root
result = np.sqrt(arr)
- Finding Unique Rows in a 2D Array:
import numpy as np
arr = np.array([[1, 2], [1, 2], [3, 4]])
# Finding unique rows
unique_rows = np.unique(arr, axis=0)
- Finding Diagonal Elements:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Finding diagonal elements
diagonal_elements = np.diagonal(arr)
- Creating a Random Integer Array:
import numpy as np
# Creating a random integer array
random_array = np.random.randint(1, 10, size=(3, 3))
- Reshaping Arrays with Flattening:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Reshaping array with flattening
flattened_array = arr.flatten()
- Finding the Determinant of a Matrix:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Finding the determinant of a matrix
determinant = np.linalg.det(arr)
- Calculating Matrix Inverse:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Calculating matrix inverse
inverse_matrix = np.linalg.inv(arr)
- Calculating Matrix Trace:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Calculating matrix trace
matrix_trace = np.trace(arr)
- Calculating Matrix Rank:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Calculating matrix rank
matrix_rank = np.linalg.matrix_rank(arr)
- Finding Eigenvalues and Eigenvectors of Symmetric Matrix:
import numpy as np
arr = np.array([[1, 2], [2, 1]])
# Finding eigenvalues and eigenvectors of symmetric matrix
eigenvalues, eigenvectors = np.linalg.eigh(arr)
- Creating a Sparse Matrix:
import numpy as np
from scipy.sparse import csr_matrix
# Creating an empty 3x3 sparse matrix in CSR format
sparse_matrix = csr_matrix((3, 3), dtype=np.int8)
# Converting back to a dense array for display
dense_matrix = sparse_matrix.toarray()
- Performing Linear Interpolation:
import numpy as np
x = np.array([1, 2, 3, 4])
y = np.array([10, 20, 30, 40])
# Performing linear interpolation
interp_values = np.interp(2.5, x, y)
- Performing Polynomial Interpolation:
import numpy as np
x = np.array([1, 2, 3, 4])
y = np.array([10, 20, 30, 40])
# Performing polynomial interpolation
poly_coeffs = np.polyfit(x, y, 2)
interp_values = np.polyval(poly_coeffs, [2.5, 3.5])
- Calculating Cross Product of Vectors:
import numpy as np
a = np.array([1, 0, 0])
b = np.array([0, 1, 0])
# Calculating cross product of vectors
cross_product = np.cross(a, b)
- Finding Angle Between Vectors:
import numpy as np
a = np.array([1, 0])
b = np.array([0, 1])
# Finding angle between vectors
angle = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
- Calculating Cumulative Sum Along a Specified Axis:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
# Calculating cumulative sum along axis 0
cumulative_sum_axis_0 = np.cumsum(arr, axis=0)
- Calculating Cumulative Product Along a Specified Axis:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
# Calculating cumulative product along axis 1
cumulative_product_axis_1 = np.cumprod(arr, axis=1)
- Finding Unique Elements and Their Counts:
import numpy as np
arr = np.array([1, 1, 2, 2, 2, 3])
# Finding unique elements and their counts
unique_elements, counts = np.unique(arr, return_counts=True)
- Calculating Matrix Exponential:
import numpy as np
from scipy.linalg import expm
arr = np.array([[1, 2], [3, 4]])
# Calculating the matrix exponential e^A (np.linalg.matrix_power computes
# integer matrix powers, which is a different operation)
matrix_exponential = expm(arr)
- Calculating Frobenius Norm of a Matrix:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Calculating Frobenius norm of a matrix
frobenius_norm = np.linalg.norm(arr)
- Finding Indices to Insert Elements to Maintain Order:
import numpy as np
arr = np.array([1, 3, 5, 7])
# Finding indices to insert elements to maintain order
indices = np.searchsorted(arr, [2, 4, 6])
- Calculating Matrix Condition Number:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Calculating matrix condition number
condition_number = np.linalg.cond(arr)
- Calculating Matrix Determinant and Log Determinant:
import numpy as np
arr = np.array([[1, 2], [2, 1]])
# Calculating matrix determinant and log determinant
determinant = np.linalg.det(arr)
# slogdet returns a (sign, log|det|) pair
sign, log_determinant = np.linalg.slogdet(arr)
- Finding Permutations and Combinations of Arrays:
import numpy as np
from itertools import permutations, combinations
arr = np.array([1, 2, 3])
# Finding permutations and combinations of arrays
permutations_result = np.array(list(permutations(arr)))
combinations_result = np.array(list(combinations(arr, 2)))
- Finding Smallest and Largest N Values in an Array:
import numpy as np
arr = np.array([1, 3, 5, 7, 2, 4, 6, 8])
# Finding smallest and largest N values
smallest_3_values = np.partition(arr, 3)[:3]
largest_3_values = np.partition(arr, -3)[-3:]
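Note that np.partition only guarantees which values land in the first (or last) block, not their order; sorting the slice afterwards gives the N values in order. A small sketch:

```python
import numpy as np

arr = np.array([1, 3, 5, 7, 2, 4, 6, 8])
# Partition, then sort just the selected block
smallest_3_sorted = np.sort(np.partition(arr, 3)[:3])
largest_3_sorted = np.sort(np.partition(arr, -3)[-3:])
```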
- Finding the Kronecker Product of Two Arrays:
import numpy as np
arr1 = np.array([[1, 2], [3, 4]])
arr2 = np.array([[5, 6], [7, 8]])
# Finding the Kronecker product
kronecker_product = np.kron(arr1, arr2)
- Finding the Singular Value Decomposition (SVD) of a Matrix:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Performing Singular Value Decomposition (SVD); note svd returns V transposed
U, S, Vh = np.linalg.svd(arr)
- Finding the Moore-Penrose Pseudo Inverse of a Matrix:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Finding the Moore-Penrose Pseudo Inverse
pseudo_inverse = np.linalg.pinv(arr)
- Calculating the Discrete Fourier Transform (DFT):
import numpy as np
arr = np.array([1, 2, 3, 4])
# Calculating Discrete Fourier Transform (DFT)
dft = np.fft.fft(arr)
- Calculating the Inverse Discrete Fourier Transform (IDFT):
import numpy as np
arr = np.array([1, 2, 3, 4])
# Calculating Inverse Discrete Fourier Transform (IDFT)
idft = np.fft.ifft(arr)
- Generating Random Numbers from the Standard Normal Distribution:
import numpy as np
# Generating random numbers from the standard normal distribution
random_numbers = np.random.randn(3, 3)
- Calculating the Convolution of Two Arrays:
import numpy as np
arr1 = np.array([1, 2, 3])
arr2 = np.array([0, 1, 0.5])
# Calculating the convolution of two arrays
convolution_result = np.convolve(arr1, arr2, mode='same')
- Finding the 2D Fast Fourier Transform (FFT):
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Finding the 2D Fast Fourier Transform (FFT)
fft_2d = np.fft.fft2(arr)
- Performing Linear Regression:
import numpy as np
x = np.array([0, 1, 2, 3, 4])
y = np.array([1, 3, 5, 7, 9])
# Performing linear regression
coefficients = np.polyfit(x, y, 1)
- Calculating Matrix Power:
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Calculating matrix power
matrix_power = np.linalg.matrix_power(arr, 3)
- Calculating Cross-Correlation of Arrays:
import numpy as np
arr1 = np.array([1, 2, 3])
arr2 = np.array([0, 1, 0.5])
# Calculating cross-correlation of arrays
cross_correlation = np.correlate(arr1, arr2, mode='valid')
- Calculating Pearson Correlation Coefficient:
import numpy as np
arr1 = np.array([1, 2, 3, 4, 5])
arr2 = np.array([5, 4, 3, 2, 1])
# Calculating Pearson correlation coefficient
pearson_coefficient = np.corrcoef(arr1, arr2)[0, 1]
- Calculating Covariance Matrix:
import numpy as np
arr1 = np.array([1, 2, 3])
arr2 = np.array([4, 5, 6])
# Calculating covariance matrix
covariance_matrix = np.cov(arr1, arr2)
- Finding Roots of Polynomials:
import numpy as np
coefficients = np.array([1, -3, 2]) # Polynomial: x^2 - 3x + 2
# Finding roots of polynomial
roots = np.roots(coefficients)
- Calculating Kruskal-Wallis H Test:
import numpy as np
from scipy.stats import kruskal
group1 = np.array([1, 2, 3])
group2 = np.array([4, 5, 6])
group3 = np.array([7, 8, 9])
# Performing Kruskal-Wallis H test
H_statistic, p_value = kruskal(group1, group2, group3)
- Finding the Least Squares Solution to a Linear Matrix Equation:
import numpy as np
A = np.array([[1, 2], [3, 4]])
b = np.array([5, 6])
# Finding the least squares solution
solution = np.linalg.lstsq(A, b, rcond=None)[0]
- Generating Logarithmically Spaced Numbers:
import numpy as np
# Generating logarithmically spaced numbers: 5 values from 10^1 to 10^3
logspace_values = np.logspace(1, 3, num=5, base=10.0)
- Calculating Bessel Functions:
import numpy as np
from scipy.special import jv
# Calculating Bessel functions of the first kind, order 1
# (jn is a deprecated alias of jv)
bessel_values = jv(1, np.arange(5))
- Calculating Hyperbolic Functions:
import numpy as np
arr = np.array([0, 1, 2])
# Calculating hyperbolic sine, cosine, and tangent
sinh_values = np.sinh(arr)
cosh_values = np.cosh(arr)
tanh_values = np.tanh(arr)
- Calculating Multinomial Coefficients:
from math import factorial
# Multinomial coefficient 10! / (3! * 4! * 3!); math.comb handles only
# binomial coefficients, so compute it from factorials
coefficient = factorial(10) // (factorial(3) * factorial(4) * factorial(3))
- Calculating Exponential Moving Average (EMA):
import numpy as np
data = np.array([1, 2, 3, 4, 5], dtype=float)
alpha = 0.5  # smoothing factor
# Calculating the EMA recursively (the original convolution computed a
# simple moving average, not an EMA)
ema = np.empty_like(data)
ema[0] = data[0]
for i in range(1, len(data)):
    ema[i] = alpha * data[i] + (1 - alpha) * ema[i - 1]
- Calculating Binomial Coefficients:
import numpy as np
from math import comb
# Calculating binomial coefficients C(5, k) for k = 0..5
coefficients = np.array([comb(5, k) for k in range(6)])
- Calculating Harmonic Mean:
import numpy as np
data = np.array([1, 2, 3, 4, 5])
# Calculating Harmonic Mean
harmonic_mean = len(data) / np.sum(1.0 / data)
- Calculating Weighted Average:
import numpy as np
data = np.array([1, 2, 3, 4, 5])
weights = np.array([1, 2, 3, 4, 5])
# Calculating Weighted Average
weighted_average = np.average(data, weights=weights)
- Calculating Factorial:
from math import factorial
# Calculating factorial (np.math is deprecated and removed in NumPy 2.0)
factorial_of_5 = factorial(5)
- Calculating Cumulative Maximum and Minimum:
import numpy as np
data = np.array([1, 3, 2, 5, 4])
# Calculating cumulative maximum and minimum
cumulative_max = np.maximum.accumulate(data)
cumulative_min = np.minimum.accumulate(data)
- Calculating GCD and LCM:
import numpy as np
# Calculating greatest common divisor (GCD)
gcd = np.gcd.reduce([24, 36, 48])
# Calculating least common multiple (LCM)
lcm = np.lcm.reduce([6, 8, 12])
- Verifying Fermat's Little Theorem:
# Fermat's little theorem: a^p is congruent to a (mod p) for prime p;
# here a = 2, p = 17, using built-in modular exponentiation
result = pow(2, 17, 17)  # evaluates to 2
- Calculating the Error Function:
import numpy as np
from scipy.special import erf
# Calculating the error function
result = erf(0.5)
- Calculating Variance and Standard Deviation:
import numpy as np
data = np.array([1, 2, 3, 4, 5])
# Calculating variance and standard deviation
variance = np.var(data)
std_deviation = np.std(data)
- Calculating Covariance:
import numpy as np
data1 = np.array([1, 2, 3, 4, 5])
data2 = np.array([5, 4, 3, 2, 1])
# Calculating covariance
covariance = np.cov(data1, data2)[0, 1]
- Generating Random Numbers from a Uniform Distribution:
import numpy as np
# Generating random numbers from a uniform distribution
random_uniform = np.random.uniform(0, 1, size=(3, 3))
- Generating Random Numbers from a Normal Distribution:
import numpy as np
# Generating random numbers from a normal distribution
random_normal = np.random.normal(0, 1, size=(3, 3))
- Calculating Cumulative Distribution Function (CDF):
import numpy as np
data = np.array([1, 2, 3, 4, 5])
# Calculating a CDF, treating the array values as frequencies (weights)
cdf = np.cumsum(data) / np.sum(data)
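For raw samples (rather than frequency counts), an empirical CDF can be evaluated with np.searchsorted; this is a sketch under that assumption, not part of the original example:

```python
import numpy as np

samples = np.array([3, 1, 4, 1, 5])
sorted_samples = np.sort(samples)
# Empirical CDF at x: fraction of samples <= x
ecdf_at_3 = np.searchsorted(sorted_samples, 3, side='right') / len(samples)
```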
- Calculating Percentile:
import numpy as np
data = np.array([1, 2, 3, 4, 5])
# Calculating percentile
percentile = np.percentile(data, 50) # 50th percentile
- Calculating Weighted Percentile:
import numpy as np
data = np.array([1, 2, 3, 4, 5])
weights = np.array([1, 2, 3, 4, 5])
# Weighted percentiles require NumPy >= 2.0 and the 'inverted_cdf' method
weighted_percentile = np.percentile(data, 50, weights=weights, method='inverted_cdf')
- Calculating Geometric Mean:
import numpy as np
data = np.array([1, 2, 4, 8, 16])
# Calculating geometric mean
geometric_mean = np.prod(data) ** (1 / len(data))
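np.prod can overflow for long arrays; an equivalent, overflow-safe form takes the exponential of the mean logarithm:

```python
import numpy as np

data = np.array([1, 2, 4, 8, 16])
# exp(mean(log(x))) equals prod(x) ** (1 / n) without the overflow risk
geometric_mean = np.exp(np.mean(np.log(data)))
```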
- Performing T-test:
import numpy as np
from scipy.stats import ttest_ind
group1 = np.array([1, 2, 3, 4, 5])
group2 = np.array([6, 7, 8, 9, 10])
# Performing T-test
t_statistic, p_value = ttest_ind(group1, group2)
- Calculating Poisson Distribution:
import numpy as np
from scipy.stats import poisson
# Calculating Poisson distribution
poisson_values = poisson.pmf(np.arange(10), mu=3)
- Calculating Bernoulli Numbers:
import numpy as np
from scipy.special import bernoulli
# Calculating Bernoulli numbers
bernoulli_numbers = bernoulli(5)
- Calculating Beta Function:
import numpy as np
from scipy.special import beta
# Calculating Beta function
result = beta(2, 3)
- Calculating Binomial Probability Mass Function:
import numpy as np
from scipy.stats import binom
# Calculating Binomial probability mass function
binomial_pmf = binom.pmf(3, 5, 0.5)
- Calculating a Cauchy Principal Value:
import numpy as np
from scipy.integrate import quad
# scipy.special has no pv function; quad's weight='cauchy' computes the
# principal value of f(x) / (x - wvar); here PV of 1/(x - 1) over [0, 2]
cauchy_pv, abs_error = quad(lambda x: 1.0, 0, 2, weight='cauchy', wvar=1)
- Calculating Chi-Square Test:
import numpy as np
from scipy.stats import chisquare
observed = np.array([10, 15, 20])
expected = np.array([12, 15, 18])
# Performing Chi-Square test
chi2, p_value = chisquare(observed, expected)
- Calculating Cumulative Distribution Function (CDF):
import numpy as np
from scipy.stats import norm
# Calculating cumulative distribution function (CDF) of a normal distribution
cdf = norm.cdf(0)
- Calculating Hypergeometric Distribution:
import numpy as np
from scipy.stats import hypergeom
# Calculating Hypergeometric distribution
hypergeom_dist = hypergeom.pmf(1, 10, 5, 3)
- Calculating Kolmogorov-Smirnov Test:
import numpy as np
from scipy.stats import kstest
data = np.random.normal(0, 1, 100)
# Performing Kolmogorov-Smirnov test
statistic, p_value = kstest(data, 'norm')
- Calculating Logistic Distribution:
import numpy as np
from scipy.stats import logistic
# Calculating logistic distribution
logistic_dist = logistic.cdf(0)
- Calculating Poisson Distribution:
import numpy as np
from scipy.stats import poisson
# Calculating Poisson distribution
poisson_dist = poisson.pmf(3, 5)
- Calculating Exponential Distribution:
import numpy as np
from scipy.stats import expon
# Calculating Exponential distribution
exponential_dist = expon.cdf(2, scale=1/3)
- Calculating Geometric Distribution:
import numpy as np
from scipy.stats import geom
# Calculating Geometric distribution
geometric_dist = geom.pmf(2, p=0.5)
- Calculating Gumbel Distribution:
import numpy as np
from scipy.stats import gumbel_r
# Calculating Gumbel distribution
gumbel_dist = gumbel_r.cdf(2)
- Calculating Laplace Distribution:
import numpy as np
from scipy.stats import laplace
# Calculating Laplace distribution
laplace_dist = laplace.cdf(2)
- Calculating Log-Normal Distribution:
import numpy as np
from scipy.stats import lognorm
# Calculating Log-Normal distribution
lognormal_dist = lognorm.cdf(2, s=0.5)
- Calculating Rayleigh Distribution:
import numpy as np
from scipy.stats import rayleigh
# Calculating Rayleigh distribution
rayleigh_dist = rayleigh.cdf(2, scale=1)
- Calculating Student's t Distribution:
import numpy as np
from scipy.stats import t
# Calculating Student's t distribution
t_dist = t.cdf(2, df=5)
- Calculating Weibull Distribution:
import numpy as np
from scipy.stats import weibull_min
# Calculating Weibull distribution
weibull_dist = weibull_min.cdf(2, c=1.5)
- Calculating Zipf Distribution:
import numpy as np
from scipy.stats import zipf
# Calculating Zipf distribution
zipf_dist = zipf.pmf(2, a=2)
- Performing One-Sample t-test:
import numpy as np
from scipy.stats import ttest_1samp
data = np.random.normal(0, 1, 100)
# Performing one-sample t-test
t_statistic, p_value = ttest_1samp(data, 0)
- Performing Two-Sample t-test:
import numpy as np
from scipy.stats import ttest_ind
data1 = np.random.normal(0, 1, 100)
data2 = np.random.normal(1, 1, 100)
# Performing two-sample t-test
t_statistic, p_value = ttest_ind(data1, data2)
- Performing Paired t-test:
import numpy as np
from scipy.stats import ttest_rel
data1 = np.random.normal(0, 1, 100)
data2 = data1 + np.random.normal(0, 0.5, 100)
# Performing paired t-test
t_statistic, p_value = ttest_rel(data1, data2)
- Performing Chi-Square Test of Independence:
import numpy as np
from scipy.stats import chi2_contingency
observed = np.array([[10, 5], [15, 20]])
# Performing Chi-Square test of independence
chi2, p_value, dof, expected = chi2_contingency(observed)
- Performing One-Way ANOVA:
import numpy as np
from scipy.stats import f_oneway
group1 = np.random.normal(0, 1, 100)
group2 = np.random.normal(1, 1, 100)
group3 = np.random.normal(2, 1, 100)
# Performing one-way ANOVA
f_statistic, p_value = f_oneway(group1, group2, group3)
- Performing Friedman Test:
import numpy as np
from scipy.stats import friedmanchisquare
group1 = np.random.normal(0, 1, 100)
group2 = np.random.normal(1, 1, 100)
group3 = np.random.normal(2, 1, 100)
# Performing Friedman test
chi2, p_value = friedmanchisquare(group1, group2, group3)
- Calculating Critical Values of Student's t-distribution:
import numpy as np
from scipy.stats import t
# Calculating critical values of Student's t-distribution
critical_values = t.ppf([0.025, 0.975], df=10)
- Calculating Inverse of Student's t-distribution:
import numpy as np
from scipy.stats import t
# Calculating inverse of Student's t-distribution
inverse_t = t.ppf(0.975, df=10)
- Calculating Quantiles of Student's t-distribution:
import numpy as np
from scipy.stats import t
# Calculating quantiles of Student's t-distribution
quantiles = t.ppf([0.25, 0.75], df=10)
- Performing Mann-Whitney U Test:
import numpy as np
from scipy.stats import mannwhitneyu
group1 = np.random.normal(0, 1, 100)
group2 = np.random.normal(1, 1, 100)
# Performing Mann-Whitney U test
U_statistic, p_value = mannwhitneyu(group1, group2)
- Performing Kruskal-Wallis H Test:
import numpy as np
from scipy.stats import kruskal
group1 = np.random.normal(0, 1, 100)
group2 = np.random.normal(1, 1, 100)
group3 = np.random.normal(2, 1, 100)
# Performing Kruskal-Wallis H test
H_statistic, p_value = kruskal(group1, group2, group3)
- Solving Linear Equation System:
import numpy as np
A = np.array([[2, 3], [5, 4]])
b = np.array([4, 3])
# Solving linear equation system Ax = b
solution = np.linalg.solve(A, b)
- Finding Euclidean Distance:
import numpy as np
point1 = np.array([1, 2])
point2 = np.array([4, 6])
# Finding Euclidean distance
euclidean_distance = np.linalg.norm(point1 - point2)
- Finding Manhattan Distance:
import numpy as np
point1 = np.array([1, 2])
point2 = np.array([4, 6])
# Finding Manhattan distance
manhattan_distance = np.sum(np.abs(point1 - point2))
- Finding Chebyshev Distance:
import numpy as np
point1 = np.array([1, 2])
point2 = np.array([4, 6])
# Finding Chebyshev distance
chebyshev_distance = np.max(np.abs(point1 - point2))
- Calculating Matrix Rank:
import numpy as np
arr = np.array([[1, 2], [3, 4], [5, 6]])
# Calculating matrix rank
matrix_rank = np.linalg.matrix_rank(arr)
- Solving Ordinary Differential Equations (ODE):
import numpy as np
from scipy.integrate import solve_ivp
# Define the ODE
def ode(t, y):
    return y + t
# Solve the ODE
solution = solve_ivp(ode, [0, 1], [0])
- Calculating Vector Angle:
import numpy as np
vector1 = np.array([1, 0])
vector2 = np.array([0, 1])
# Calculating vector angle (in radians)
angle_rad = np.arccos(np.dot(vector1, vector2) / (np.linalg.norm(vector1) * np.linalg.norm(vector2)))
- Finding Matrix Eigenvalues and Eigenvectors:
import numpy as np
arr = np.array([[1, 2], [2, 1]])
# Finding matrix eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(arr)
- Calculating Matrix Singular Value Decomposition (SVD):
import numpy as np
arr = np.array([[1, 2], [3, 4]])
# Calculating matrix singular value decomposition (SVD)
U, S, VT = np.linalg.svd(arr)
- Calculating Jacobian Matrix:
import numpy as np
# Define the function f: R^2 -> R^2
def f(x):
    return np.array([x[0] ** 2, np.sin(x[1])])
# Point at which to evaluate the Jacobian
x0 = np.array([1.0, np.pi / 4])
# Approximating the Jacobian with central finite differences
eps = 1e-6
jacobian = np.empty((2, 2))
for j in range(2):
    step = np.zeros(2)
    step[j] = eps
    jacobian[:, j] = (f(x0 + step) - f(x0 - step)) / (2 * eps)
- Solving Nonlinear Equations:
from scipy.optimize import fsolve
# Define the equation
def equations(x):
    return [x[0] + 2 * x[1] - 3, x[0] ** 2 + x[1] ** 2 - 1]
# Solving the nonlinear equations
solution = fsolve(equations, [1, 1])
- Performing Principal Component Analysis (PCA):
import numpy as np
from sklearn.decomposition import PCA
# Generate random data
data = np.random.rand(10, 5)
# Perform PCA
pca = PCA(n_components=3)
pca.fit(data)
- Generating Meshgrid for 2D Plotting:
import numpy as np
import matplotlib.pyplot as plt
# Generate meshgrid
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
# Plot grid
plt.scatter(X, Y)
plt.show()
- Calculating Cosine Similarity:
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
# Define vectors
a = np.array([[1, 2]])
b = np.array([[2, 3]])
# Calculate cosine similarity
similarity = cosine_similarity(a, b)
- Calculating Jaccard Similarity:
import numpy as np
# Define arrays (treated as sets of labels)
a = np.array([1, 2, 3, 4])
b = np.array([2, 3, 4, 5])
# Jaccard similarity = |intersection| / |union|
# (sklearn's jaccard_similarity_score was removed; use set operations instead)
similarity = len(np.intersect1d(a, b)) / len(np.union1d(a, b))
- Finding Hessian Matrix:
import numpy as np
from scipy.optimize import rosen_hess
# Define a point
x = np.array([1.3, 0.7])
# Calculate Hessian matrix
hessian = rosen_hess(x)
- Calculating Vandermonde Matrix:
import numpy as np
# Generate array
x = np.array([1, 2, 3, 4])
# Calculate Vandermonde matrix
vander_matrix = np.vander(x)
- Calculating QR Decomposition:
import numpy as np
# Define matrix
A = np.array([[1, 2], [3, 4], [5, 6]])
# Calculate QR decomposition
Q, R = np.linalg.qr(A)
- Solving Linear Programming Problem:
import numpy as np
from scipy.optimize import linprog
c = np.array([-1, 4]) # Coefficients of the objective function
A = np.array([[3, 1], [1, 2]]) # Coefficients of inequality constraints
b = np.array([9, 8]) # Right-hand side of inequality constraints
# Solve linear programming problem
result = linprog(c, A_ub=A, b_ub=b)
- Calculating Mahalanobis Distance:
import numpy as np
from scipy.spatial.distance import mahalanobis
x = np.array([1, 2, 3]) # Data point
mu = np.array([0, 0, 0]) # Mean of distribution
cov = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) # Covariance matrix
# Calculate Mahalanobis distance
distance = mahalanobis(x, mu, np.linalg.inv(cov))
- Performing Singular Value Decomposition (SVD):
import numpy as np
A = np.array([[1, 2], [3, 4], [5, 6]])
# Perform Singular Value Decomposition (SVD)
U, S, VT = np.linalg.svd(A)
- Calculating Entropy:
import numpy as np
# Define probabilities
probabilities = np.array([0.3, 0.5, 0.2])
# Calculate entropy
entropy = -np.sum(probabilities * np.log2(probabilities))
- Calculating Mutual Information:
import numpy as np
from sklearn.metrics import mutual_info_score
# Define true labels and predicted labels
true_labels = np.array([0, 1, 0, 1])
predicted_labels = np.array([0, 1, 1, 0])
# Calculate mutual information
mutual_information = mutual_info_score(true_labels, predicted_labels)
- Calculating Manhattan (L1) Norm:
import numpy as np
v = np.array([1, -2, 3])
# Calculate Manhattan (L1) norm
manhattan_norm = np.linalg.norm(v, ord=1)
- Calculating Euclidean (L2) Norm:
import numpy as np
v = np.array([1, -2, 3])
# Calculate Euclidean (L2) norm
euclidean_norm = np.linalg.norm(v, ord=2)
- Calculating Frobenius Norm of a Matrix:
import numpy as np
A = np.array([[1, 2], [3, 4]])
# Calculate Frobenius norm
frobenius_norm = np.linalg.norm(A, ord='fro')
- Calculating Precision and Recall:
import numpy as np
from sklearn.metrics import precision_score, recall_score
# Define true labels and predicted labels
true_labels = np.array([0, 1, 0, 1])
predicted_labels = np.array([0, 1, 1, 0])
# Calculate precision and recall
precision = precision_score(true_labels, predicted_labels)
recall = recall_score(true_labels, predicted_labels)
- Calculating F1 Score:
import numpy as np
from sklearn.metrics import f1_score
# Define true labels and predicted labels
true_labels = np.array([0, 1, 0, 1])
predicted_labels = np.array([0, 1, 1, 0])
# Calculate F1 score
f1 = f1_score(true_labels, predicted_labels)
- Calculating R2 Score:
import numpy as np
from sklearn.metrics import r2_score
# Define true values and predicted values
true_values = np.array([1, 2, 3, 4])
predicted_values = np.array([1.1, 2.1, 2.9, 4.2])
# Calculate R2 score
r2 = r2_score(true_values, predicted_values)
- Calculating Mean Squared Error (MSE):
import numpy as np
from sklearn.metrics import mean_squared_error
# Define true values and predicted values
true_values = np.array([1, 2, 3, 4])
predicted_values = np.array([1.1, 2.1, 2.9, 4.2])
# Calculate Mean Squared Error (MSE)
mse = mean_squared_error(true_values, predicted_values)
- Calculating Root Mean Squared Error (RMSE):
import numpy as np
from sklearn.metrics import mean_squared_error
# Define true values and predicted values
true_values = np.array([1, 2, 3, 4])
predicted_values = np.array([1.1, 2.1, 2.9, 4.2])
# Calculate Root Mean Squared Error (RMSE)
rmse = np.sqrt(mean_squared_error(true_values, predicted_values))
- Calculating Mean Absolute Error (MAE):
import numpy as np
from sklearn.metrics import mean_absolute_error
# Define true values and predicted values
true_values = np.array([1, 2, 3, 4])
predicted_values = np.array([1.1, 2.1, 2.9, 4.2])
# Calculate Mean Absolute Error (MAE)
mae = mean_absolute_error(true_values, predicted_values)
- Finding Local Minima/Maxima of 1D Array:
import numpy as np
# Define 1D array
arr = np.array([1, 2, 3, 2, 1])
# Find local minima and maxima
minima_indices = np.where((arr[:-2] > arr[1:-1]) & (arr[1:-1] < arr[2:]))[0] + 1
maxima_indices = np.where((arr[:-2] < arr[1:-1]) & (arr[1:-1] > arr[2:]))[0] + 1
- Calculating Shannon Entropy:
import numpy as np
# Define probabilities
probabilities = np.array([0.3, 0.5, 0.2])
# Calculate Shannon entropy
entropy = -np.sum(probabilities * np.log2(probabilities))
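The manual formula can be sanity-checked against `scipy.stats.entropy`, which computes the same quantity when given `base=2`:

```python
import numpy as np
from scipy.stats import entropy

probabilities = np.array([0.3, 0.5, 0.2])
# Manual Shannon entropy in bits
manual = -np.sum(probabilities * np.log2(probabilities))
# scipy normalizes its input and supports an explicit logarithm base
via_scipy = entropy(probabilities, base=2)
```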
- Generating Random Numbers from Custom Distribution:
import numpy as np
# Define the bounds of a uniform distribution on [a, b]
a, b = 1, 3
# Generate uniform random numbers on [a, b] (a simple custom-distribution example via rescaling)
random_numbers = a + (b - a) * np.random.random(1000)
- Solving Differential Equations Using odeint:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# Define the ODE system
def model(y, t):
    return -y + 1
# Initial condition
y0 = 0
# Time points
t = np.linspace(0, 5, 100)
# Solve the ODE system
y = odeint(model, y0, t)
# Plot the solution
plt.plot(t, y)
plt.xlabel('Time')
plt.ylabel('y(t)')
plt.show()
- Calculating Wasserstein Distance:
import numpy as np
from scipy.stats import wasserstein_distance
# Define two distributions
dist1 = np.array([0.1, 0.2, 0.3, 0.4])
dist2 = np.array([0.2, 0.3, 0.4, 0.1])
# Calculate Wasserstein distance
wasserstein_dist = wasserstein_distance(dist1, dist2)
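Note that `wasserstein_distance` treats its positional arguments as empirical sample values, not probability masses. To compare two discrete distributions over a common support, pass the support as the values and the probabilities as weights:

```python
import numpy as np
from scipy.stats import wasserstein_distance

support = np.array([0, 1, 2, 3])
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.2, 0.3, 0.4, 0.1])
# u_weights / v_weights carry the probability mass at each support point
dist = wasserstein_distance(support, support, u_weights=p, v_weights=q)
# Equals the area between the two CDFs: |0.1-0.2| + |0.3-0.5| + |0.6-0.9| = 0.6
```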
- Performing Linear Regression:
import numpy as np
from sklearn.linear_model import LinearRegression
# Generate sample data
X = np.array([[1], [2], [3], [4]])
y = np.array([2, 4, 6, 8])
# Perform linear regression
model = LinearRegression().fit(X, y)
- Generating Random Numbers from Normal Distribution:
import numpy as np
# Generate random numbers from normal distribution
random_numbers = np.random.normal(loc=0, scale=1, size=1000)
- Calculating Covariance Matrix:
import numpy as np
# Define sample data
data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Calculate covariance matrix
cov_matrix = np.cov(data, rowvar=False)
- Performing Kernel Density Estimation:
import numpy as np
from sklearn.neighbors import KernelDensity
# Generate sample data
data = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5])
# Perform kernel density estimation
kde = KernelDensity(bandwidth=0.5).fit(data[:, None])
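The fitted estimator can then evaluate the density on a grid via `score_samples`, which returns log-density; a short usage sketch (the bandwidth and grid are illustrative choices):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

data = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5])
kde = KernelDensity(bandwidth=0.5).fit(data[:, None])
# Evaluate the estimated density on a grid of points
grid = np.linspace(0, 6, 61)[:, None]
density = np.exp(kde.score_samples(grid))
# The density should peak near 3, the mode of the sample
peak = grid[np.argmax(density), 0]
```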
- Calculating Mann-Whitney U Test:
import numpy as np
from scipy.stats import mannwhitneyu
# Define sample data
group1 = np.array([1, 2, 3, 4, 5])
group2 = np.array([6, 7, 8, 9, 10])
# Perform Mann-Whitney U test
statistic, p_value = mannwhitneyu(group1, group2)
- Performing Kruskal-Wallis H Test with Post-Hoc Analysis:
import numpy as np
from scipy.stats import kruskal
from scikit_posthocs import posthoc_dunn
# Define sample data
group1 = np.array([1, 2, 3, 4, 5])
group2 = np.array([6, 7, 8, 9, 10])
group3 = np.array([11, 12, 13, 14, 15])
# Perform Kruskal-Wallis H test
H_statistic, p_value = kruskal(group1, group2, group3)
# Perform post-hoc Dunn's test
posthoc_results = posthoc_dunn([group1, group2, group3])
- Finding the Index of the Maximum Value in an Array:
import numpy as np
# Define array
arr = np.array([5, 2, 8, 4, 6])
# Find the index of the maximum value
max_index = np.argmax(arr)
- Finding the Index of the Minimum Value in an Array:
import numpy as np
# Define array
arr = np.array([5, 2, 8, 4, 6])
# Find the index of the minimum value
min_index = np.argmin(arr)
- Performing K-Means Clustering:
import numpy as np
from sklearn.cluster import KMeans
# Generate sample data
X = np.array([[1, 2], [2, 3], [8, 7], [10, 8], [12, 10]])
# Perform K-Means clustering
kmeans = KMeans(n_clusters=2).fit(X)
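The fitted object exposes the per-point assignments and the centroids; a small usage sketch (the `n_init` and `random_state` values are illustrative, set to make the run reproducible):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [2, 3], [8, 7], [10, 8], [12, 10]])
# n_init set explicitly to avoid version-dependent defaults
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_             # cluster index for each input point
centers = kmeans.cluster_centers_   # one centroid per cluster
```

On this data the first two points and the last three points land in different clusters.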
- Calculating Expectation and Variance of a Random Variable:
import numpy as np
# Define random variable
X = np.array([1, 2, 3, 4, 5])
# Calculate expectation and variance
expectation = np.mean(X)
variance = np.var(X)
- Performing Hierarchical Clustering:
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt
# Generate sample data
X = np.array([[1, 2], [2, 3], [8, 7], [10, 8], [12, 10]])
# Perform hierarchical clustering
Z = linkage(X, method='ward')
# Plot dendrogram
plt.figure(figsize=(8, 6))
dendrogram(Z)
plt.show()
- Generating Random Symmetric Positive-Definite Matrix:
import numpy as np
# A @ A.T is symmetric positive semi-definite; for a continuous random A it is positive definite almost surely
n = 3 # Size of the matrix
A = np.random.rand(n, n)
symmetric_positive_definite_matrix = np.dot(A, A.T)
- Solving Quadratic Programming Problem:
import numpy as np
from scipy.optimize import minimize
# Define quadratic objective function
Q = np.array([[1, 0], [0, 1]])
c = np.array([0, 0])
# Define linear constraints
A = np.array([[1, 1]])
b = np.array([1])
# Solve quadratic programming problem
result = minimize(
    lambda x: 0.5 * np.dot(x, np.dot(Q, x)) + np.dot(c, x),
    x0=[0, 0],
    constraints={'type': 'eq', 'fun': lambda x: np.dot(A, x) - b},
)
- Performing Independent Component Analysis (ICA):
import numpy as np
from sklearn.decomposition import FastICA
# Generate sample data
X = np.random.rand(100, 3)
# Perform Independent Component Analysis (ICA)
ica = FastICA(n_components=3)
components = ica.fit_transform(X)
- Calculating Autocorrelation Function:
import numpy as np
# Generate sample data
data = np.random.rand(100)
# Calculate (unnormalized) autocorrelation as the full correlation of the data with itself
autocorrelation = np.correlate(data, data, mode='full')
- Calculating Cross-Correlation Function:
import numpy as np
# Generate sample data
x = np.random.rand(100)
y = np.random.rand(100)
# Calculate cross-correlation function
cross_correlation = np.correlate(x, y, mode='full')
- Performing Gaussian Mixture Model (GMM) Clustering:
import numpy as np
from sklearn.mixture import GaussianMixture
# Generate sample data
X = np.random.rand(100, 2)
# Perform Gaussian Mixture Model (GMM) clustering
gmm = GaussianMixture(n_components=3)
gmm.fit(X)
- Finding Unique Elements and Their Counts in an Array:
import numpy as np
# Define array
arr = np.array([1, 2, 3, 1, 2, 1, 3, 4, 5])
# Find unique elements and their counts
unique_elements, counts = np.unique(arr, return_counts=True)
- Performing Bayesian Linear Regression:
import numpy as np
import pymc3 as pm
# Generate sample data
X = np.random.rand(100, 2)
y = np.random.rand(100)
# Perform Bayesian linear regression
with pm.Model() as model:
    intercept = pm.Normal('intercept', mu=0, sigma=1)
    coefficients = pm.Normal('coefficients', mu=0, sigma=1, shape=X.shape[1])
    sigma = pm.HalfNormal('sigma', sigma=1)
    y_pred = intercept + pm.math.dot(X, coefficients)
    likelihood = pm.Normal('y', mu=y_pred, sigma=sigma, observed=y)
    trace = pm.sample(1000)
- Performing Bayesian Optimization:
import numpy as np
from scipy.optimize import minimize
# Define objective function
def objective(x):
    return (x - 2) ** 2 + np.random.normal(0, 0.1)
# Note: scipy's minimize is a local optimizer, not true Bayesian optimization;
# a dedicated library such as scikit-optimize (gp_minimize) implements the latter
result = minimize(objective, x0=0)
- Performing Singular Spectrum Analysis (SSA):
import numpy as np
from sklearn.decomposition import PCA
# Generate sample data
X = np.random.rand(100, 5)
# PCA on the raw features is only a stand-in; classical SSA decomposes a trajectory (Hankel) matrix built from a series
pca = PCA(n_components=5)
components = pca.fit_transform(X)
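A closer sketch of classical SSA embeds a 1D series into a trajectory (Hankel) matrix and takes its SVD; the window length below is an illustrative choice, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
L = 40                       # window length (illustrative)
K = len(series) - L + 1
# Trajectory (Hankel) matrix: column k holds series[k : k + L]
trajectory = np.column_stack([series[k:k + L] for k in range(K)])
# SVD of the trajectory matrix yields the SSA components
U, S, VT = np.linalg.svd(trajectory, full_matrices=False)
# A rank-2 truncation captures the dominant oscillation
approx = (U[:, :2] * S[:2]) @ VT[:2]
```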
- Performing Non-negative Matrix Factorization (NMF):
import numpy as np
from sklearn.decomposition import NMF
# Generate sample data
X = np.random.rand(100, 5)
# Perform Non-negative Matrix Factorization (NMF)
nmf = NMF(n_components=3)
W = nmf.fit_transform(X)
- Performing Matrix Factorization Using Alternating Least Squares (ALS):
import numpy as np
# Generate sample data
X = np.random.rand(10, 5)
# Initialize factors
n_factors = 2
P = np.random.rand(X.shape[0], n_factors)
Q = np.random.rand(X.shape[1], n_factors)
# Perform Alternating Least Squares (ALS): solve the dense normal equations for each row
# (np.linalg.solve is used because spsolve expects sparse matrices)
for _ in range(100):
    for i in range(X.shape[0]):
        P[i] = np.linalg.solve(np.dot(Q.T, Q), np.dot(X[i], Q))
    for j in range(X.shape[1]):
        Q[j] = np.linalg.solve(np.dot(P.T, P), np.dot(X[:, j], P))
- Calculating Cross-Entropy Loss:
import numpy as np
# Define true and predicted probabilities
true_probs = np.array([0, 1, 0, 0])
predicted_probs = np.array([0.1, 0.8, 0.05, 0.05])
# Calculate cross-entropy loss
cross_entropy = -np.sum(true_probs * np.log(predicted_probs))
- Performing Multi-Armed Bandit Simulation:
import numpy as np
# Define bandit arms and their probabilities
arms = np.array([0.1, 0.5, 0.8])
num_trials = 1000
# Perform multi-armed bandit simulation with uniform random arm selection
# (arms hold per-arm reward probabilities, not selection probabilities, so they cannot be passed as p=)
rewards = []
for _ in range(num_trials):
    chosen_arm = np.random.randint(len(arms))
    reward = np.random.random() < arms[chosen_arm]
    rewards.append(reward)
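The loop above explores the arms blindly; a simple epsilon-greedy policy that balances exploration and exploitation might look like this (the epsilon and trial count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
arms = np.array([0.1, 0.5, 0.8])   # per-arm reward probabilities
epsilon = 0.1
counts = np.zeros(len(arms))
values = np.zeros(len(arms))       # running mean reward per arm
for _ in range(2000):
    if rng.random() < epsilon:
        arm = int(rng.integers(len(arms)))   # explore a random arm
    else:
        arm = int(np.argmax(values))         # exploit the current best arm
    reward = float(rng.random() < arms[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
best_arm = int(np.argmax(counts))
```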
- Generating Random Walk:
import numpy as np
import matplotlib.pyplot as plt
# Generate random walk
num_steps = 1000
steps = np.random.choice([-1, 1], size=num_steps)
walk = np.cumsum(steps)
# Plot random walk
plt.plot(walk)
plt.xlabel('Steps')
plt.ylabel('Position')
plt.show()
- Calculating Mahalanobis Distance Between Points and a Distribution:
import numpy as np
from scipy.spatial.distance import mahalanobis
# Define distribution parameters
mean = np.array([1, 2])
covariance = np.array([[2, 0.5], [0.5, 1]])
# Generate random points
points = np.random.multivariate_normal(mean, covariance, size=100)
# Calculate Mahalanobis distance
distances = [mahalanobis(point, mean, np.linalg.inv(covariance)) for point in points]
- Performing Resampling (Bootstrap):
import numpy as np
# Generate sample data
data = np.random.normal(loc=5, scale=2, size=100)
# Perform resampling (Bootstrap)
resamples = [np.random.choice(data, size=len(data), replace=True) for _ in range(1000)]
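The resamples are typically summarized into a statistic; for instance, a 95% percentile confidence interval for the mean can be read straight off the bootstrap distribution (seed and interval width are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5, scale=2, size=100)
# Bootstrap the sample mean: resample with replacement and record each mean
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(1000)
])
# 95% percentile confidence interval
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
```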
- Performing Singular Value Thresholding (SVT):
import numpy as np
from scipy.linalg import svd
# Generate sample data
X = np.random.rand(10, 5)
# Perform Singular Value Thresholding (SVT)
U, S, VT = svd(X)
tau = 3  # shrinkage threshold applied to the singular values (not a rank)
S_thresh = np.maximum(S - tau, 0)
X_svt = np.dot(U, np.dot(np.diag(S_thresh), VT))
- Calculating Jensen-Shannon Divergence:
import numpy as np
from scipy.spatial.distance import jensenshannon
# Define probability distributions
p = np.array([0.4, 0.6])
q = np.array([0.3, 0.7])
# Calculate Jensen-Shannon divergence
js_divergence = jensenshannon(p, q)
- Generating Sparse Matrix:
import numpy as np
from scipy.sparse import random
# Generate sparse matrix
sparse_matrix = random(5, 5, density=0.2, format='csr')
- Performing Stochastic Gradient Descent (SGD):
import numpy as np
from sklearn.linear_model import SGDRegressor
# Generate sample data
X = np.random.rand(100, 2)
y = np.random.rand(100)
# Perform Stochastic Gradient Descent (SGD)
sgd = SGDRegressor()
sgd.fit(X, y)
- Performing Expectation-Maximization (EM) Algorithm:
import numpy as np
from sklearn.mixture import GaussianMixture
# Generate sample data
X = np.random.rand(100, 2)
# Perform Expectation-Maximization (EM)
em = GaussianMixture(n_components=2)
em.fit(X)
- Calculating Total Variation Distance:
import numpy as np
# Define probability distributions
p = np.array([0.2, 0.8])
q = np.array([0.3, 0.7])
# Calculate Total Variation distance: half the L1 distance between the distributions
# (scipy.spatial.distance has no `variation`; scipy.stats.variation is the coefficient of variation, a different quantity)
tv_distance = 0.5 * np.sum(np.abs(p - q))
- Calculating Manhattan Distance Matrix:
import numpy as np
from scipy.spatial.distance import pdist, squareform
# Define points
points = np.array([[1, 2], [3, 4], [5, 6]])
# Calculate pairwise Manhattan distance
manhattan_distances = squareform(pdist(points, metric='cityblock'))
- Performing Locally Linear Embedding (LLE):
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
# Generate sample data
X = np.random.rand(100, 3)
# Perform Locally Linear Embedding (LLE)
lle = LocallyLinearEmbedding(n_components=2)
embedded_data = lle.fit_transform(X)
- Performing Randomized Singular Value Decomposition (SVD):
import numpy as np
from sklearn.utils.extmath import randomized_svd
# Generate sample data
X = np.random.rand(10, 5)
# Perform Randomized Singular Value Decomposition (SVD)
U, S, VT = randomized_svd(X, n_components=3)
- Performing Robust Principal Component Analysis (RPCA):
import numpy as np
from sklearn.decomposition import PCA
from scipy.linalg import svd
# Generate sample data
X = np.random.rand(100, 10)
# Plain SVD/PCA as a stand-in; true RPCA decomposes X into a low-rank part plus a sparse part
U, S, VT = svd(X)
pca = PCA(n_components=5)
pca.fit(X)
- Generating Random Permutation:
import numpy as np
# Generate random permutation
permutation = np.random.permutation(10)