ShahiLandin includes a powerful A/B testing system that allows you to create experiments, test variations of your landing pages, and identify the best-performing version based on real data.
What is A/B Testing?
A/B testing (also called split testing) is a method of comparing two versions of a landing page to determine which one performs better. Visitors are randomly assigned to see either version A or version B, and their behavior is tracked to calculate which variant has a higher conversion rate.
Benefits:
- Data-driven decision making
- Improved conversion rates
- Better understanding of your audience
- Continuous optimization
Enabling A/B Testing
Global Settings
Default State: A/B testing is disabled by default to prevent accidental experiments.
To enable it:
- Navigate to Settings > ShahiLandin
- Find the Experiments section
- Toggle Experiments Enabled to ON
- Click Save Changes
Requirements
To use A/B testing, you need:
- Analytics enabled (to track conversions)
- At least 2 published landing pages
- The `manage_shahi_landings` capability
Creating an Experiment
Method 1: Via Dashboard
- Go to Landing Pages > All Landing Pages
- Find the landing page you want to test (Control/Original)
- Hover and click Create Experiment
- Configure experiment settings
- Create the variant page
- Launch the experiment
Method 2: Via Meta Box
- Edit a published landing page
- Find the ShahiLandin Experiments meta box
- Click Create New Experiment
- Enter experiment details:
  - Experiment Name: e.g., “Headline Test – Nov 2025”
  - Variant: Choose an existing page or create a new one
  - Traffic Split: Set percentage for control/variant (e.g., 50/50)
  - Duration: Number of days to run the test
- Click Start Experiment
Experiment Settings
Control Page:
- The original landing page (baseline)
- Receives a portion of traffic based on split percentage
Variant Page:
- The alternative version you’re testing
- Should differ in only ONE element for accurate results
Traffic Split:
- 50/50: Equal traffic to both versions (most common)
- 75/25: More traffic to control, safer for high-value pages
- 90/10: Minimal-risk testing, useful for radical changes
Duration:
- Recommended: 14-30 days for statistical significance
- Minimum: 7 days
- Consider your traffic volume when setting duration
Goal Tracking:
- Form submissions
- Button clicks
- Custom conversion events (see the sketch after this list)
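If the built-in goals don’t cover your case, a conversion can be recorded from your own code. The sketch below is illustrative only: both hook names (`my_form_plugin_after_submit` and `shahi_landin_track_conversion`) are assumptions, not documented APIs, so check the plugin’s developer reference for the real hooks.

```php
<?php
// Illustrative sketch: credit a custom conversion when your form plugin
// fires its own "after submit" action. Both hook names are assumptions.
add_action( 'my_form_plugin_after_submit', function ( $form_id ) {
    // Hypothetical plugin action that records one conversion for whichever
    // variant the current visitor was assigned to on this landing page.
    do_action( 'shahi_landin_track_conversion', get_the_ID(), 'form_submission' );
} );
```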
How Experiments Work
Visitor Assignment
When a visitor lands on a page with an active experiment:
- The plugin checks if they’ve been assigned before (via cookie)
- If the visitor is new, randomly assign them to control or variant based on the traffic split
- Store the assignment in a cookie (`shahi_landin_variant_{post_id}`)
- Redirect to the assigned variant (if not control)
- The cookie lasts 30 days to ensure a consistent experience
Example Flow:
```
Visitor arrives at /landing/signup
→ No assignment cookie found
→ Random selection: Variant (50% probability)
→ Cookie set: shahi_landin_variant_123 = 456
→ Redirect to /landing/signup-variant
→ Track views and conversions on variant page
```
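For developers, here is a minimal sketch of that assignment flow using WordPress core APIs. The `shahi_landin_get_experiment()` helper and the experiment object’s properties are hypothetical stand-ins for the plugin’s internals; only the cookie name follows the documented pattern.

```php
<?php
// Simplified sketch of cookie-based variant assignment on page load.
add_action( 'template_redirect', function () {
    $post_id = get_the_ID();
    $cookie  = 'shahi_landin_variant_' . $post_id;

    // Returning visitor: honor the stored assignment.
    if ( isset( $_COOKIE[ $cookie ] ) ) {
        return;
    }

    $experiment = shahi_landin_get_experiment( $post_id ); // hypothetical helper
    if ( ! $experiment || 'running' !== $experiment->status ) {
        return;
    }

    // New visitor: random assignment based on the traffic split (e.g., 50).
    $show_variant = wp_rand( 1, 100 ) <= $experiment->variant_percentage;
    $assigned_id  = $show_variant ? $experiment->variant_id : $post_id;

    // Persist for 30 days so the visitor always sees the same version.
    setcookie( $cookie, (string) $assigned_id, time() + 30 * DAY_IN_SECONDS, '/' );

    if ( $show_variant ) {
        wp_safe_redirect( get_permalink( $experiment->variant_id ) );
        exit;
    }
} );
```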
Consistent User Experience
Once assigned:
- Visitor always sees the same version
- Cookie persists for 30 days
- Even if they leave and return, they see the same variant
- Ensures accurate conversion tracking
Data Collection
For each variant, the plugin tracks:
- Views: Total visitors assigned to this variant
- Conversions: Goal completions (form submissions, etc.)
- Conversion Rate: Conversions / Views × 100
- Timestamp: When each event occurred
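As an illustration, the metrics for one variant could be shaped like this; the plugin’s actual storage schema is internal and may differ.

```php
<?php
// Illustrative per-variant record, not the plugin's real schema.
$variant_stats = [
    'views'           => 1000,                  // visitors assigned to this variant
    'conversions'     => 50,                    // goal completions
    'conversion_rate' => 50 / 1000 * 100,       // 5.0 (%)
    'last_event_at'   => '2025-11-14 09:32:11', // timestamp of the latest event
];
```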
Managing Active Experiments
Viewing Experiment Status
Check active experiments:
- Go to Landing Pages > Experiments
- See all running, completed, and paused experiments
Experiment List Shows:
- Experiment name
- Control and variant pages
- Traffic split
- Current performance (views, conversions, conversion rate)
- Days remaining
- Status (Running, Paused, Completed)
Monitoring Performance
View real-time experiment results:
- Edit the control or variant page
- Find the ShahiLandin Experiments meta box
- Review current statistics:
  - Control: Views, conversions, conversion rate
  - Variant: Views, conversions, conversion rate
  - Leader: Which version is currently winning
Statistical Significance:
- The plugin calculates the confidence level
- Displays a “Statistically Significant” badge when confidence > 95%
- Wait for significance before declaring a winner
Pausing an Experiment
Temporarily stop an experiment:
- Go to Landing Pages > Experiments
- Find the experiment
- Click Pause
- All traffic now goes to the control page
When to Pause:
- Unexpected issues with the variant
- Need to make changes to pages
- Seasonal break in traffic
Resume:
- Click Resume to continue the experiment
- Assignments and data are preserved
Ending an Experiment
Complete and analyze an experiment:
- Go to Landing Pages > Experiments
- Find the completed or running experiment
- Click End Experiment
- Review final results
- Choose the winner:
  - Keep Variant: Replace control with the winning variant
  - Keep Control: Discard the variant, continue with the original
  - Keep Both: Maintain both pages separately
Analyzing Experiment Results
Key Metrics
Conversion Rate:
- The most important metric for declaring winners
- Formula: (Conversions / Views) × 100
- Example: 50 conversions from 1000 views = 5% conversion rate
Confidence Level:
- Statistical measure of result reliability
- 95%+ confidence = winner is likely not due to chance
- Below 95% = need more data
Sample Size:
- Minimum 100 conversions per variant recommended
- Larger samples = more reliable results
- Low-traffic sites need longer test durations
Winner Declaration
The plugin automatically recommends a winner when:
- Minimum sample size is reached (100 conversions per variant)
- Confidence level exceeds 95%
- Conversion rate difference is substantial (>5% relative improvement)
Example Winning Criteria:
```
Control: 1000 views, 40 conversions (4.0%)
Variant: 1000 views, 50 conversions (5.0%)
Relative Improvement: 25%
Confidence: 97%
Recommendation: Variant is the winner
```
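For reference, one standard way to derive such a confidence figure is a two-proportion z-test. The plugin’s exact statistical method isn’t documented here, so treat this sketch as the general technique rather than the plugin’s own code.

```php
<?php
// Two-proportion z-test: returns one-sided confidence in percent.
function ab_confidence( int $views_a, int $conv_a, int $views_b, int $conv_b ): float {
    // Pooled conversion rate under the "no difference" hypothesis.
    $pooled = ( $conv_a + $conv_b ) / ( $views_a + $views_b );
    $se     = sqrt( $pooled * ( 1 - $pooled ) * ( 1 / $views_a + 1 / $views_b ) );
    if ( $se <= 0 ) {
        return 0.0;
    }
    $z = abs( $conv_b / $views_b - $conv_a / $views_a ) / $se;

    // Normal CDF via the Abramowitz & Stegun erf approximation.
    $x   = $z / M_SQRT2;
    $t   = 1 / ( 1 + 0.3275911 * $x );
    $erf = 1 - ( ( ( ( 1.061405429 * $t - 1.453152027 ) * $t + 1.421413741 ) * $t
            - 0.284496736 ) * $t + 0.254829592 ) * $t * exp( -$x * $x );

    return 0.5 * ( 1 + $erf ) * 100;
}

// Usage: ab_confidence( $control_views, $control_conversions, $variant_views, $variant_conversions );
```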
Exporting Results
Export experiment data for reporting:
- Go to Landing Pages > Experiments
- Click Export next to an experiment
- Download CSV with detailed data:
  - Daily breakdown of views and conversions
  - Hourly trends
  - Visitor browser, device, and referrer data
Best Practices for A/B Testing
Test One Element at a Time
Good Testing:
- Change headline only
- Change CTA button color only
- Change hero image only
Bad Testing:
- Change headline + button + images + layout
- Too many variables make results meaningless
Set a Clear Hypothesis
Before starting, define your hypothesis:
Hypothesis Template:
“Changing [ELEMENT] from [CONTROL] to [VARIANT] will improve [GOAL] because [REASON]”
Example:
“Changing the CTA text from ‘Submit’ to ‘Get Free Trial’ will improve form submissions because it’s more specific and value-focused”
Allow Sufficient Time
Minimum Duration:
- At least 1 business cycle (7 days for B2B, may vary)
- Run through weekdays and weekends
- Avoid holidays and special events
Sufficient Traffic:
- Need at least 100 conversions per variant
- With a 2% conversion rate, that means 5,000 visitors per variant
- Calculate required duration: 10,000 total visits (5,000 × 2 variants) / daily traffic, as in the worked example below
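Here is that duration math as a small worked example; the 500 visitors/day figure is an assumption, so substitute your own traffic.

```php
<?php
// Worked example of the traffic/duration arithmetic above.
$target_conversions = 100;   // recommended minimum per variant
$conversion_rate    = 0.02;  // 2%
$daily_traffic      = 500;   // visitors/day to this landing page (assumed)

$visitors_per_variant = $target_conversions / $conversion_rate; // 5,000
$total_visits         = $visitors_per_variant * 2;              // 10,000 across A and B
$days_needed          = ceil( $total_visits / $daily_traffic ); // 20 days

echo "Run the test for at least {$days_needed} days.";
```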
Avoid Testing During Outliers
Pause experiments during:
- Major sales or promotions
- Site-wide technical issues
- Unusual traffic spikes (viral content, press)
- Seasonal events that skew behavior
Document Everything
Keep records of:
- Hypothesis for each test
- Exact changes made to the variant
- Start and end dates
- Final results and winner
- Learnings and next steps
Advanced Experiment Features
Multi-Variant Testing (A/B/C/D)
Test more than 2 versions:
- Create primary experiment (A vs B)
- Add additional variants:
  - Click Add Variant in Experiments meta box
  - Create variant C, D, etc.
- Set traffic split for all variants (e.g., 25/25/25/25; see the sketch below)
Note: Multi-variant tests require more traffic and longer duration.
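Conceptually, multi-variant assignment is a weighted random pick across all pages in the test. A minimal sketch with illustrative page IDs and an even split (the plugin handles this for you):

```php
<?php
// Weighted random pick for an A/B/C/D test with a 25/25/25/25 split.
$split = [
    123 => 25, // control
    456 => 25, // variant B
    789 => 25, // variant C
    790 => 25, // variant D
];

$roll = wp_rand( 1, array_sum( $split ) );
foreach ( $split as $page_id => $weight ) {
    if ( $roll <= $weight ) {
        $assigned = $page_id; // this visitor's variant
        break;
    }
    $roll -= $weight; // move to the next bucket
}
```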
Segment-Based Experiments
Target experiments to specific audiences:
Geographic Targeting:
```php
// Show variant only to US visitors
add_filter( 'shahi_landin_experiment_geo_target', function ( $countries, $experiment_id ) {
    return [ 'US' ]; // Only show to US traffic
}, 10, 2 );
```
Device Targeting:
```php
// Show variant only to mobile users
add_filter( 'shahi_landin_experiment_device_target', function ( $devices, $experiment_id ) {
    return [ 'mobile' ]; // Only mobile gets variant
}, 10, 2 );
```
Scheduled Experiments
Launch experiments at a future date:
- Create experiment as usual
- Set Start Date to a future date/time
- Experiment automatically begins at the scheduled time (a cron-based sketch follows the Use Cases list)
Use Cases:
- Coordinate with marketing campaigns
- Test during specific seasons
- Automate weekend vs weekday variants
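Under the hood, a scheduled start can be modeled with WordPress cron. This sketch is an assumption about how you might wire it yourself: `shahi_landin_start_experiment` is a hypothetical hook, and the Start Date field normally does this for you.

```php
<?php
// Schedule a one-off cron event that would start experiment 789.
$start = strtotime( '2025-12-01 09:00:00' );
if ( ! wp_next_scheduled( 'shahi_landin_start_experiment', [ 789 ] ) ) {
    wp_schedule_single_event( $start, 'shahi_landin_start_experiment', [ 789 ] );
}
```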
Automatic Winner Selection
Enable automatic winner implementation:
- Go to Settings > ShahiLandin > Experiments
- Enable Auto-Select Winners
- Set confidence threshold (default: 95%)
- When the threshold is reached, the variant automatically replaces the control
Safety Features:
- Notification email sent before auto-selection
- 24-hour grace period to review
- Option to override the automated decision (see the filter sketch below)
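If you need to veto an automated decision programmatically during the grace period, a filter in the style of the plugin’s documented hooks might look like the sketch below. The name `shahi_landin_auto_select_winner` is an assumption modeled on those hooks, not a confirmed API.

```php
<?php
// Hypothetical filter: return false to block auto-selection.
add_filter( 'shahi_landin_auto_select_winner', function ( $select, $experiment_id ) {
    if ( 789 === $experiment_id ) {
        return false; // keep this experiment running; decide manually
    }
    return $select;
}, 10, 2 );
```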
WP-CLI Commands for Experiments
List Active Experiments
```bash
wp shahilandin experiments list
```
Shows all active experiments with status.
Start an Experiment
```bash
wp shahilandin experiments start --control=123 --variant=456 --split=50/50 --duration=14
```
End an Experiment
```bash
wp shahilandin experiments end --id=789 --winner=variant
```
Export Experiment Data
```bash
wp shahilandin experiments export --id=789 --format=csv > results.csv
```
Experiment Settings Reference
| Setting | Default | Description |
|---------|---------|-------------|
| Experiments Enabled | false | Master switch for A/B testing |
| Default Traffic Split | 50/50 | Default control/variant percentage |
| Default Duration | 14 days | How long experiments run |
| Auto-Select Winners | false | Automatically implement winning variant |
| Confidence Threshold | 95% | Required confidence for winner declaration |
| Minimum Sample Size | 100 | Minimum conversions before declaring winner |
Troubleshooting Experiments
Visitors See Inconsistent Versions
Issue: Same visitor sees different variants on different visits
Solution:
- Clear cookies and test again (the snippet below inspects the assignment cookie)
- Check cookie settings in browser
- Verify cookie domain configuration
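To see which version a visitor is locked to, you can inspect the assignment cookie directly (replace 123 with your control page’s ID; the cookie name follows the documented pattern):

```php
<?php
// Debug helper: log the current visitor's variant assignment.
$cookie = 'shahi_landin_variant_123';
if ( isset( $_COOKIE[ $cookie ] ) ) {
    error_log( 'Assigned variant page ID: ' . $_COOKIE[ $cookie ] );
} else {
    error_log( 'No assignment cookie set for this visitor.' );
}
```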
Low Sample Size
Issue: Not enough data to reach significance
Solution:
- Extend experiment duration
- Increase traffic to the landing page
- Lower confidence threshold (not recommended)
- Consider running longer tests
Variant Not Loading
Issue: All traffic goes to control, variant never loads
Solution:
- Verify experiment is active (not paused)
- Check variant page is published
- Review traffic split settings
- Test in incognito mode
Statistical Significance Not Reached
Issue: Test runs for weeks without a clear winner
Solution:
- Check conversion rate difference (may be too small)
- Increase sample size by extending duration
- Consider testing a more dramatic change
- Review if the test is powered correctly for your traffic
Tips for Successful A/B Testing
- Start with high-impact elements: Test headlines and CTAs first
- Wait for significance: Don’t end tests early based on hunches
- Document learnings: Build a knowledge base of what works
- Test continuously: Always have an experiment running
- Respect user assignment: Don’t override cookie-based assignments
- Monitor for issues: Check both variants regularly for errors
- Celebrate wins: Implement winning variants quickly
---
For analytics integration with experiments, see the Analytics & Tracking article.