
A/B Testing & Experiments

ShahiLandin includes a powerful A/B testing system that allows you to create experiments, test variations of your landing pages, and identify the best-performing version based on real data.

What is A/B Testing?

A/B testing (also called split testing) is a method of comparing two versions of a landing page to determine which one performs better. Visitors are randomly assigned to see either version A or version B, and their behavior is tracked to calculate which variant has a higher conversion rate.

Benefits:

- Data-driven decision making
- Improved conversion rates
- Better understanding of your audience
- Continuous optimization

Enabling A/B Testing

Global Settings

1. Navigate to Settings > ShahiLandin
2. Find the Experiments section
3. Toggle Experiments Enabled to ON
4. Click Save Changes

Default State: A/B testing is disabled by default to prevent accidental experiments.

Requirements

To use A/B testing, you need:

- Analytics enabled (to track conversions)
- At least 2 published landing pages
- The manage_shahilandings capability (see the snippet below)
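
If you need to verify the capability programmatically, a minimal check might look like the sketch below. Note that the capability slug manage_shahilandings is reconstructed here (the underscores were garbled in earlier versions of this page), so confirm it against your installation.

```php
// Minimal sketch: gate experiment management behind the plugin capability.
// The capability slug (manage_shahilandings) is an assumption; confirm it
// against your installation before relying on it.
if ( ! current_user_can( 'manage_shahilandings' ) ) {
    wp_die( esc_html__( 'You do not have permission to manage experiments.' ) );
}
```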
Creating an Experiment

Method 1: Via Dashboard

1. Go to Landing Pages > All Landing Pages
2. Find the landing page you want to test (the Control/Original)
3. Hover and click Create Experiment
4. Configure the experiment settings
5. Create the variant page
6. Launch the experiment

Method 2: Via Meta Box

1. Edit a published landing page
2. Find the ShahiLandin Experiments meta box
3. Click Create New Experiment
4. Enter the experiment details:
   - Experiment Name: e.g., “Headline Test – Nov 2025”
   - Variant: Choose an existing page or create a new one
   - Traffic Split: Set the percentage for control/variant (e.g., 50/50)
   - Duration: Number of days to run the test
5. Click Start Experiment

Experiment Settings

Control Page:

- The original landing page (the baseline)
- Receives a portion of traffic based on the split percentage

Variant Page:

- The alternative version you’re testing
- Should differ in only ONE element for accurate results

Traffic Split:

- 50/50: Equal traffic to both versions (most common)
- 75/25: More traffic to control, safer for high-value pages
- 90/10: Minimal-risk testing, useful for radical changes

Duration:

- Recommended: 14-30 days for statistical significance
- Minimum: 7 days
- Consider your traffic volume when setting the duration

Goal Tracking:

- Form submissions
- Button clicks
- Custom conversion events (illustrated below)
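
Custom conversion events are the most flexible of the three goals. As an illustration only, a hook-based sketch might look like this; the shahilandin_track_conversion action name is an assumption for the example, not a documented plugin API.

```php
// Hypothetical sketch: fire a custom conversion event from your own code.
// The shahilandin_track_conversion action name is assumed for illustration.
add_action( 'template_redirect', function () {
    // Example: count a visit to the thank-you page as a conversion
    // for the landing page with ID 123 (example value).
    if ( is_page( 'thank-you' ) ) {
        do_action( 'shahilandin_track_conversion', 123, 'custom_event' );
    }
} );
```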
How Experiments Work

Visitor Assignment

When a visitor lands on a page with an active experiment:

1. The plugin checks whether they’ve been assigned before (via cookie)
2. If they are a new visitor, they are randomly assigned to control or variant based on the traffic split
3. The assignment is stored in a cookie (shahilandin_variant_{post_id})
4. The visitor is redirected to the assigned variant (if not control)
5. The cookie lasts 30 days to ensure a consistent experience

Example Flow:
```
Visitor arrives at /landing/signup
→ No assignment cookie found
→ Random selection: Variant (50% probability)
→ Cookie set: shahilandin_variant_123 = 456
→ Redirect to /landing/signup-variant
→ Track views and conversions on the variant page
```
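
In PHP terms, the assignment flow described above could be sketched roughly as follows. This is an illustration of the documented behavior, not the plugin’s actual source; the function name is ours.

```php
// Rough sketch of the documented assignment flow (not the plugin's source).
// Must run before any output is sent, since it sets a cookie.
function shahilandin_assign_variant( $control_id, $variant_id, $split = 50 ) {
    $cookie = 'shahilandin_variant_' . $control_id;

    // 1. Returning visitor: honor the stored assignment.
    if ( isset( $_COOKIE[ $cookie ] ) ) {
        return (int) $_COOKIE[ $cookie ];
    }

    // 2. New visitor: random assignment based on the traffic split.
    $assigned = ( wp_rand( 1, 100 ) <= $split ) ? $variant_id : $control_id;

    // 3. Persist for 30 days so the visitor always sees the same version.
    setcookie( $cookie, (string) $assigned, time() + 30 * DAY_IN_SECONDS, '/' );

    return $assigned;
}

// 4. Redirect when the visitor is assigned to the variant.
$assigned = shahilandin_assign_variant( 123, 456, 50 );
if ( 456 === $assigned ) {
    wp_safe_redirect( get_permalink( $assigned ) );
    exit;
}
```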

Consistent User Experience

Once assigned:

- The visitor always sees the same version
- The cookie persists for 30 days
- Even if they leave and return, they see the same variant
- This ensures accurate conversion tracking

Data Collection

For each variant, the plugin tracks:

- Views: Total visitors assigned to this variant
- Conversions: Goal completions (form submissions, etc.)
- Conversion Rate: Conversions / Views × 100 (computed in the sketch below)
- Timestamp: When each event occurred
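
The conversion-rate calculation is simple enough to express directly; a helper along these lines (the function name is ours, not the plugin’s) reproduces the reported metric:

```php
// Sketch: the conversion-rate formula the plugin reports.
function shahilandin_conversion_rate( $conversions, $views ) {
    if ( 0 === $views ) {
        return 0.0; // Avoid division by zero before any traffic arrives.
    }
    return round( ( $conversions / $views ) * 100, 2 );
}

echo shahilandin_conversion_rate( 50, 1000 ); // 5 (i.e., a 5% conversion rate)
```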
Managing Active Experiments

Viewing Experiment Status

Check active experiments:

1. Go to Landing Pages > Experiments
2. See all running, completed, and paused experiments

Experiment List Shows:

- Experiment name
- Control and variant pages
- Traffic split
- Current performance (views, conversions, conversion rate)
- Days remaining
- Status (Running, Paused, Completed)

Monitoring Performance

View real-time experiment results:

1. Edit the control or variant page
2. Find the ShahiLandin Experiments meta box
3. Review the current statistics:
   - Control: Views, conversions, conversion rate
   - Variant: Views, conversions, conversion rate
   - Leader: Which version is currently winning

Statistical Significance:

- The plugin calculates a confidence level
- A “Statistically Significant” badge is displayed when confidence exceeds 95%
- Wait for significance before declaring a winner

Pausing an Experiment

Temporarily stop an experiment:

1. Go to Landing Pages > Experiments
2. Find the experiment
3. Click Pause
4. All traffic now goes to the control page

When to Pause:

- Unexpected issues with the variant
- You need to make changes to the pages
- Seasonal break in traffic

Resume:

- Click Resume to continue the experiment
- Assignments and data are preserved

Ending an Experiment

Complete and analyze an experiment:

1. Go to Landing Pages > Experiments
2. Find the completed or running experiment
3. Click End Experiment
4. Review the final results
5. Choose the winner:
   - Keep Variant: Replace the control with the winning variant
   - Keep Control: Discard the variant, continue with the original
   - Keep Both: Maintain both pages separately

Analyzing Experiment Results

Key Metrics

Conversion Rate:

- The most important metric for picking a winner
- Formula: (Conversions / Views) × 100
- Example: 50 conversions from 1000 views = 5% conversion rate

Confidence Level:

- Statistical measure of result reliability
- 95%+ confidence = the winner is likely not due to chance
- Below 95% = more data is needed (one common way to compute this is sketched below)
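
The plugin does not document its exact statistical method, so treat the following as an illustration of the general idea rather than its implementation: a two-proportion z-test, with the normal CDF computed via a standard erf approximation.

```php
// Illustrative two-proportion z-test; the plugin's actual method may differ.

// Standard normal CDF, Φ(z) = ½(1 + erf(z/√2)), using the Abramowitz–Stegun
// polynomial approximation of erf (accurate to roughly 1e-7).
function normal_cdf( $z ) {
    $x    = abs( $z ) / sqrt( 2 );
    $t    = 1 / ( 1 + 0.3275911 * $x );
    $poly = ( ( ( ( 1.061405429 * $t - 1.453152027 ) * $t + 1.421413741 ) * $t
        - 0.284496736 ) * $t + 0.254829592 ) * $t;
    $cdf  = 0.5 * ( 1 + ( 1 - $poly * exp( -$x * $x ) ) );
    return $z >= 0 ? $cdf : 1 - $cdf;
}

// Two-sided confidence (percent) that the variant truly differs from control.
function experiment_confidence( $views_a, $conv_a, $views_b, $conv_b ) {
    $p_a    = $conv_a / $views_a;
    $p_b    = $conv_b / $views_b;
    $pooled = ( $conv_a + $conv_b ) / ( $views_a + $views_b );
    $se     = sqrt( $pooled * ( 1 - $pooled ) * ( 1 / $views_a + 1 / $views_b ) );
    $z      = ( $p_b - $p_a ) / $se;
    return ( 1 - 2 * ( 1 - normal_cdf( abs( $z ) ) ) ) * 100;
}

echo experiment_confidence( 1000, 40, 1000, 60 ); // ≈ 96
```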
Sample Size:

- A minimum of 100 conversions per variant is recommended
- Larger samples = more reliable results
- Low-traffic sites need longer test durations

Winner Declaration

The plugin automatically recommends a winner when:

- The minimum sample size is reached (100 conversions per variant)
- The confidence level exceeds 95%
- The conversion rate difference is substantial (>5% relative improvement)

Example Winning Criteria:
```
Control: 1000 views, 40 conversions (4.0%)
Variant: 1000 views, 50 conversions (5.0%)
Relative Improvement: 25%
Confidence: 97%
Recommendation: Variant is the winner
```

Exporting Results

Export experiment data for reporting:

1. Go to Landing Pages > Experiments
2. Click Export next to an experiment
3. Download a CSV with detailed data:
   - Daily breakdown of views and conversions
   - Hourly trends
   - Visitor browser, device, and referrer data

Best Practices for A/B Testing

Test One Element at a Time

Good Testing:

- Change the headline only
- Change the CTA button color only
- Change the hero image only

Bad Testing:

- Change headline + button + images + layout
- Too many variables make the results meaningless

Set a Clear Hypothesis

Before starting, define:

Hypothesis Template:
“Changing [ELEMENT] from [CONTROL] to [VARIANT] will improve [GOAL] because [REASON]”

Example:
“Changing the CTA text from ‘Submit’ to ‘Get Free Trial’ will improve form submissions because it’s more specific and value-focused”

Allow Sufficient Time

Minimum Duration:

- At least 1 business cycle (7 days for B2B, may vary)
- Run through both weekdays and weekends
- Avoid holidays and special events

Sufficient Traffic:

- You need at least 100 conversions per variant
- With a 2% conversion rate, that means 5,000 visitors per variant
- Estimate the required duration in days: 10,000 total visits ÷ your daily traffic (see the sketch below)
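
For a concrete estimate, the arithmetic above can be wired into a few lines; the traffic figure here is a placeholder for your own numbers.

```php
// Back-of-the-envelope test-length estimate from the numbers above.
$target_conversions = 100;   // minimum conversions needed per variant
$conversion_rate    = 0.02;  // expected baseline rate (2%)
$variants           = 2;     // control + one variant
$daily_traffic      = 500;   // placeholder: your page's daily visitors

$visitors_needed = ( $target_conversions / $conversion_rate ) * $variants; // 10,000
$days_needed     = (int) ceil( $visitors_needed / $daily_traffic );        // 20

echo "Run the test for at least {$days_needed} days.";
```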
Avoid Testing During Outliers

Pause experiments during:

- Major sales or promotions
- Site-wide technical issues
- Unusual traffic spikes (viral content, press)
- Seasonal events that skew behavior

Document Everything

Keep records of:

- The hypothesis for each test
- The exact changes made to the variant
- Start and end dates
- Final results and the winner
- Learnings and next steps

Advanced Experiment Features

Multi-Variant Testing (A/B/C/D)

Test more than 2 versions:

1. Create the primary experiment (A vs B)
2. Add additional variants:
   - Click Add Variant in the Experiments meta box
   - Create variant C, D, etc.
3. Set the traffic split for all variants (e.g., 25/25/25/25)

Note: Multi-variant tests require more traffic and a longer duration.

Segment-Based Experiments

Target experiments to specific audiences:

Geographic Targeting:

```php
// Show the variant only to US visitors
add_filter( 'shahilandin_experiment_geo_target', function ( $countries, $experiment_id ) {
    return [ 'US' ]; // Only show to US traffic
}, 10, 2 );
```

Device Targeting:

```php
// Show the variant only to mobile users
add_filter( 'shahilandin_experiment_device_target', function ( $devices, $experiment_id ) {
    return [ 'mobile' ]; // Only mobile gets the variant
}, 10, 2 );
```

Scheduled Experiments

Launch experiments at a future date:

1. Create the experiment as usual
2. Set Start Date to a future date/time
3. The experiment automatically begins at the scheduled time

Use Cases:

- Coordinate with marketing campaigns
- Test during specific seasons
- Automate weekend vs weekday variants
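
If you prefer to script the schedule rather than use the Start Date field, a WP-Cron sketch like the one below could work. The shahilandin_start_experiment action is a hypothetical name standing in for whatever hook actually starts the experiment in your version.

```php
// Hypothetical sketch: trigger an experiment at a future time via WP-Cron.
// The shahilandin_start_experiment action name is assumed for illustration.
$start_at      = strtotime( '2025-12-01 09:00:00' );
$experiment_id = 789; // example experiment ID

if ( ! wp_next_scheduled( 'shahilandin_start_experiment', array( $experiment_id ) ) ) {
    wp_schedule_single_event( $start_at, 'shahilandin_start_experiment', array( $experiment_id ) );
}
```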
Automatic Winner Selection

Enable automatic winner implementation:

1. Go to Settings > ShahiLandin > Experiments
2. Enable Auto-Select Winners
3. Set the confidence threshold (default: 95%)
4. When the threshold is reached, the variant automatically replaces the control

Safety Features:

- A notification email is sent before auto-selection
- A 24-hour grace period to review
- An option to override the automated decision

WP-CLI Commands for Experiments

List Active Experiments

```bash
wp shahilandin experiments list
```

Shows all active experiments with their status.

Start an Experiment

```bash
wp shahilandin experiments start --control=123 --variant=456 --split=50/50 --duration=14
```

End an Experiment

```bash
wp shahilandin experiments end --id=789 --winner=variant
```

Export Experiment Data

```bash
wp shahilandin experiments export --id=789 --format=csv > results.csv
```

Experiment Settings Reference

| Setting | Default | Description |
|---------|---------|-------------|
| Experiments Enabled | false | Master switch for A/B testing |
| Default Traffic Split | 50/50 | Default control/variant percentage |
| Default Duration | 14 days | How long experiments run |
| Auto-Select Winners | false | Automatically implement the winning variant |
| Confidence Threshold | 95% | Required confidence for winner declaration |
| Minimum Sample Size | 100 | Minimum conversions before declaring a winner |
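
Reading these settings in code might look like the sketch below. The shahilandin_settings option key and the array structure are assumptions for illustration; check how your version actually stores them.

```php
// Hypothetical sketch: reading experiment settings with the documented defaults.
// The option key and array keys are assumptions, not a documented API.
$settings = get_option( 'shahilandin_settings', array() );

$enabled   = ! empty( $settings['experiments_enabled'] );       // default: false
$split     = $settings['default_traffic_split'] ?? '50/50';
$duration  = (int) ( $settings['default_duration'] ?? 14 );     // days
$threshold = (int) ( $settings['confidence_threshold'] ?? 95 ); // percent
$min_n     = (int) ( $settings['minimum_sample_size'] ?? 100 ); // conversions
```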

Troubleshooting Experiments

Visitors See Inconsistent Versions

Issue: The same visitor sees different variants on different visits

Solution:

1. Clear cookies and test again
2. Check the cookie settings in the browser
3. Verify the cookie domain configuration
Low Sample Size

Issue: Not enough data to reach significance

Solution:

1. Extend the experiment duration
2. Increase traffic to the landing page
3. Lower the confidence threshold (not recommended)
4. Consider running longer tests in the future

Variant Not Loading

Issue: All traffic goes to control, the variant never loads

Solution:

1. Verify the experiment is active (not paused)
2. Check that the variant page is published
3. Review the traffic split settings
4. Test in incognito mode

Statistical Significance Not Reached

Issue: The test runs for weeks without a clear winner

Solution:

1. Check the conversion rate difference (it may be too small)
2. Increase the sample size by extending the duration
3. Consider testing a more dramatic change
4. Review whether the test is powered correctly for your traffic

Tips for Successful A/B Testing

1. Start with high-impact elements: Test headlines and CTAs first
2. Wait for significance: Don’t end tests early based on hunches
3. Document learnings: Build a knowledge base of what works
4. Test continuously: Always have an experiment running
5. Respect user assignment: Don’t override cookie-based assignments
6. Monitor for issues: Check both variants regularly for errors
7. Celebrate wins: Implement winning variants quickly

For analytics integration with experiments, see the Analytics & Tracking article.
