
How to Run A/B Tests on Landing Pages

This step-by-step tutorial shows you how to create and run A/B tests (split tests) to optimize your landing page conversions.

What You’ll Learn

By the end of this tutorial, you’ll know how to:

1. Create A/B test experiments
2. Design test variants
3. Configure traffic distribution
4. Track and analyze results
5. Declare a winning variant

Time Required: 15-20 minutes
Difficulty: Intermediate
Prerequisites: At least one published landing page

What is A/B Testing?

A/B testing (or split testing) shows different versions of your landing page to visitors and measures which performs better.

Example: Test two different headlines to see which one generates more signups.

Why A/B Test?

- 📈 Increase conversions: Find what works best
- 💡 Make data-driven decisions: Remove guesswork
- 💰 Improve ROI: Get more from existing traffic
- 🎯 Understand your audience: Learn what resonates

Before You Start

What You Need

- ✅ Published landing page with traffic
- ✅ Analytics enabled for tracking
- ✅ Conversion goal defined (form submit, button click, etc.)
- ✅ Hypothesis: What you want to test and why

Recommended Minimum Traffic

For reliable results, you need:

- Minimum: 100 visitors per variant
- Better: 500+ visitors per variant
- Ideal: 1,000+ visitors per variant

Low traffic? Tests will take longer but still work.

Step 1: Choose What to Test

Good Test Ideas

Start with elements that have the biggest impact:

High-Impact Elements:

- ✅ Headline: The first thing visitors see
- ✅ Call-to-Action (CTA) button: Text, color, size, position
- ✅ Hero image: Visual appeal and relevance
- ✅ Form fields: Number and type of fields
- ✅ Social proof: Testimonials, trust badges, stats

Medium-Impact Elements:

- ✅ Body copy: Length, tone, benefits vs features
- ✅ Layout: Single column vs multi-column
- ✅ Colors: Brand colors vs contrasting colors
- ✅ Pricing display: Format, emphasis, discounts

For This Tutorial

We’ll test two different headlines to see which generates more email signups.

Hypothesis: A benefit-focused headline will convert better than a feature-focused headline.

Control (Original): “Professional Email Marketing Software”
Variant A: “Get 10x More Email Opens in 30 Days”

Step 2: Access the Experiments Section

1. Go to Landing Pages in WordPress admin
2. Find your published landing page
3. Hover over the landing page title
4. Click Create Experiment

Alternative path:

1. Go to ShahiLandin → Experiments
2. Click Add New Experiment
3. Select the landing page you want to test

Step 3: Set Up Your Experiment

You’ll see the Create Experiment screen.

3.1 Experiment Name

```
Experiment Name: Headline Test – Benefits vs Features
```

Tips for naming:

- Descriptive: Explain what you’re testing
- Include date: “Headline Test – Nov 2024”
- Version numbers: “Homepage Test v3”

3.2 Description (Optional)

```
Description: Testing benefit-focused headline against feature-focused headline to improve email signup conversions.

Hypothesis: Benefit-focused messaging will resonate better with visitors looking for results.
```

3.3 Select Parent Landing Page

```
Landing Page: [Dropdown menu – select your page]
```

Choose the landing page you want to run the test on.

3.4 Set Conversion Goal

Define what counts as a “win”:

```
Conversion Goal: Email Signup Form Submission
Goal Value: $10 (optional – assign a dollar value to conversions)
```

Common goals:

- Form submission
- Button click
- Purchase
- Download
- Scroll depth
- Time on page
3.5 Traffic Split

Configure how traffic is distributed:

```
Traffic Allocation:
○ Control (Original): 50%
○ Variant A: 50%

Total: 100% ✓
```

Options:

- 50/50 split: Standard for an A/B test (recommended)
- 33/33/33 split: For an A/B/C test with two variants
- 80/20 split: To minimize risk, show the variant to fewer visitors

For this tutorial, use 50/50.

3.6 Test Duration

```
Duration Settings:
☐ Set maximum duration
☐ Set minimum sample size
```

Optional but recommended:

```
☑ Set maximum duration: 30 days
☑ Stop test automatically when statistical significance reached
```

Click “Create Experiment” to continue.

Step 4: Create Your Variant

After creating the experiment, you’ll be taken to the Variant Editor.

4.1 Name Your Variant

```
Variant Name: Benefits Headline
```

4.2 Edit Variant Content

You have two options for editing:

Option A: Visual Editor (Easier)

1. Click the Visual Editor tab
2. You’ll see a preview of your landing page
3. Click on the headline to edit it
4. Type the new headline: “Get 10x More Email Opens in 30 Days”
5. Click Save Changes

Option B: HTML Editor (More Control)

1. Click the HTML tab
2. Find the headline in the code:

```html
<h1>Professional Email Marketing Software</h1>
```

3. Change it to:

```html
<h1>Get 10x More Email Opens in 30 Days</h1>
```

4. Click Save Changes

4.3 CSS Changes (If Needed)

If you want to change styling too:

1. Click the CSS tab
2. Add or modify styles:

```css
.hero h1 {
  color: #ff6600;   /* Change headline color */
  font-size: 48px;  /* Make it bigger */
  font-weight: bold;
}
```

4.4 Preview Your Variant

1. Click the Preview button
2. A new tab opens showing Variant A
3. Verify the changes look correct
4. Test on mobile (resize the browser or use DevTools)

Important: Only change the ONE element you’re testing. Don’t change multiple things at once or you won’t know what caused the difference.

4.5 Save Variant

Click the Save Variant button when satisfied.

Step 5: Review Experiment Settings

Before starting the test, review your configuration:

Experiment Summary

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Experiment: Headline Test – Benefits vs Features
Landing Page: Email Marketing Page
Status: Draft

Variants:
- Control (Original): 50% traffic
- Benefits Headline: 50% traffic

Conversion Goal: Email Signup Form Submission
Goal Value: $10

Duration: 30 days (or until statistically significant)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

Pre-Launch Checklist

Before starting, verify:

- ✅ Conversion tracking is set up correctly
- ✅ Variant shows only ONE change from the control
- ✅ Traffic split adds up to 100%
- ✅ Landing page is getting adequate traffic
- ✅ Test duration is reasonable (typically 2-4 weeks)
- ✅ Analytics is enabled
Step 6: Start Your Experiment

When ready to launch:

1. Click the Start Experiment button
2. Confirm you want to start
3. The status changes from “Draft” to “Running”

Your test is now live! 🚀

What Happens Now?

- Visitors are randomly assigned to Control or Variant A
- Each visitor consistently sees the same version (via a cookie)
- ShahiLandin tracks views and conversions for each variant
- Statistics update in real time
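
For the curious: consistent, weighted assignment like this is typically implemented by hashing a stable visitor ID into a bucket, so the same visitor always lands on the same variant. The Python sketch below illustrates the general technique only; it is not ShahiLandin’s actual code, and the IDs are made up.

```python
import hashlib

# Variant names and traffic shares; shares must sum to 1.0.
VARIANTS = [("control", 0.50), ("variant_a", 0.50)]

def assign_variant(visitor_id: str, experiment_id: str) -> str:
    """Deterministically map a visitor to a variant."""
    key = f"{experiment_id}:{visitor_id}".encode()
    digest = hashlib.sha256(key).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    cumulative = 0.0
    for name, share in VARIANTS:
        cumulative += share
        if bucket <= cumulative:
            return name
    return VARIANTS[-1][0]  # guard against float rounding

# The same visitor always sees the same variant:
print(assign_variant("visitor-123", "headline-test"))
print(assign_variant("visitor-123", "headline-test"))  # identical output
```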
Immediate Actions

Don’t:

- ❌ Stop the test too early
- ❌ Make changes to the landing page during the test
- ❌ Start other tests on the same page
- ❌ Check results every hour

Do:

- ✅ Let the test run for at least 1-2 weeks
- ✅ Monitor for technical issues
- ✅ Continue driving traffic to the page
- ✅ Check weekly for statistical significance

Step 7: Monitor Your Results

View Experiment Dashboard

1. Go to ShahiLandin → Experiments
2. Click on your experiment name
3. View the statistics dashboard

Understanding the Dashboard

You’ll see:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Running for: 7 days

Control (Original)
- Visitors: 523
- Conversions: 42
- Conversion Rate: 8.03%
- Confidence: —

Variant A (Benefits Headline)
- Visitors: 508
- Conversions: 61
- Conversion Rate: 12.01%
- Confidence: 95.2% ✓

Winner: Variant A
Improvement: +49.6%
Statistical Significance: Reached ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

Key Metrics Explained

Visitors: Number of unique people who saw this variant
Conversions: Number who completed the goal
Conversion Rate: Percentage who converted (conversions ÷ visitors)
Confidence: Statistical confidence that this variant is truly better
Improvement: How much better it performed than the control

Statistical Significance

What it means: How confident you can be that the difference is real, not random luck.

Confidence Levels:

- < 90%: Not significant – keep testing
- 90-94%: Marginally significant – might work
- 95%+: Statistically significant – reliable winner ✓
- 99%+: Highly significant – very reliable

Wait for 95%+ confidence before declaring a winner.
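
You can sanity-check figures like these yourself. The sketch below runs a standard two-proportion z-test on the dashboard numbers shown above, using only Python’s standard library. ShahiLandin doesn’t document which statistical method it uses, so its confidence figure may differ somewhat from this one.

```python
from math import erf, sqrt

def ab_test_confidence(n_a: int, c_a: int, n_b: int, c_b: int):
    """Two-proportion z-test: is variant B's rate really higher than A's?"""
    p_a, p_b = c_a / n_a, c_b / n_b
    pooled = (c_a + c_b) / (n_a + n_b)                      # pooled rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    confidence = 0.5 * (1 + erf(abs(z) / sqrt(2)))          # one-sided
    return p_a, p_b, (p_b - p_a) / p_a, confidence

p_a, p_b, lift, conf = ab_test_confidence(523, 42, 508, 61)
print(f"Control: {p_a:.2%}, Variant A: {p_b:.2%}, lift: {lift:+.1%}")
print(f"Confidence that Variant A is better: {conf:.1%}")
# Control: 8.03%, Variant A: 12.01%, lift: +49.5%
# Confidence that Variant A is better: 98.3%
```

The lift prints as +49.5% from the raw counts (the dashboard’s +49.6% comes from the rounded rates), and a one-sided z-test gives a higher confidence than the dashboard’s 95.2%; different tools make different statistical choices.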

Step 8: Analyze the Results

After 1-2 Weeks

Check your experiment dashboard. You’ll likely see one of these scenarios:

Scenario 1: Clear Winner (95%+ Confidence)

```
Variant A is winning with 95%+ confidence
```

Action: Proceed to Step 9 to declare the winner.

Scenario 2: Trending But Not Significant

```
Variant A shows 12% conversion vs 8% control
But confidence is only 87%
```

Action: Keep the test running; you need more traffic.

Scenario 3: No Clear Difference

```
Both variants performing similarly
Control: 8.1% | Variant A: 8.3%
Confidence: 42%
```

Action: Either continue testing or accept that this change doesn’t matter.

Scenario 4: Variant Is Losing

```
Variant A: 6.2% conversion
Control: 8.1% conversion
Confidence: 94% that control is better
```

Action: Stop the test and keep the original (the control wins).

Step 9: Declare the Winner

When you’ve reached statistical significance (95%+ confidence):

9.1 Review Final Stats

Double-check:

- ✅ Confidence level is 95% or higher
- ✅ Both variants have the minimum sample size (100+ each)
- ✅ Test ran for at least 1 week (preferably 2)
- ✅ Results are consistent (not wildly fluctuating)

9.2 Declare Winner

1. Click the Declare Winner button
2. Select the winning variant
3. Add notes about the test (optional):

```
Notes: Benefit-focused headline improved conversions by 49.6%. Visitors respond better to outcome-based messaging. Consider applying this principle to other pages.
```

4. Click Confirm

9.3 What Happens After

- Experiment status changes to “Completed”
- The test stops running
- All traffic now goes to the original landing page (no automatic changes)
- Results are archived for future reference

Step 10: Implement the Winner

Important: ShahiLandin doesn’t automatically update your page. You must manually implement the winning variant.

How to Implement

1. Go to Landing Pages
2. Edit your landing page
3. Update the content with the winning variant:

```html
<!-- Before -->
<h1>Professional Email Marketing Software</h1>

<!-- After -->
<h1>Get 10x More Email Opens in 30 Days</h1>
```

4. Click Update
5. Clear all caches
6. Verify the changes on the live page

Why Manual Implementation?

This gives you control over:

- When to implement
- How to implement
- Whether to implement partially
- Whether to test further first

Advanced: Running Multiple Tests

Sequential Testing (Recommended)

Test one element at a time:

1. Test 1: Headline (complete)
2. Test 2: CTA button color
3. Test 3: Form fields
4. Test 4: Hero image

Benefits:

- ✅ Know exactly what caused the improvement
- ✅ Build cumulative improvements
- ✅ Learn systematically

Parallel Testing (Advanced)

Test different pages simultaneously:

- Landing Page A: Test headline
- Landing Page B: Test pricing
- Landing Page C: Test social proof

Don’t test multiple elements on the SAME page simultaneously unless you know multivariate testing.

Best Practices for A/B Testing

Testing Strategy

1. Test big changes first: Headline, CTA, hero image
2. Test one element at a time: Isolate variables
3. Run long enough: At least 1-2 weeks, 100+ conversions
4. Reach significance: Wait for 95%+ confidence
5. Document learnings: Keep notes on all tests

What to Test (Priority Order)

High Priority (Test First):

1. Headline
2. CTA button (text, color, size)
3. Hero image or video
4. Social proof (testimonials, stats)
5. Form length (number of fields)

Medium Priority:

1. Body copy length
2. Benefits vs features
3. Pricing display
4. Page layout
5. Color scheme

Low Priority:

1. Font choices
2. Icon styles
3. Footer content
4. Minor spacing tweaks

Common Mistakes to Avoid

- ❌ Stopping too early: You need an adequate sample size
- ❌ Testing multiple things: You can’t tell what worked
- ❌ Not waiting for significance: Random fluctuations mislead
- ❌ Ignoring mobile: Test on all devices
- ❌ Testing without a hypothesis: Test with purpose
- ❌ Making changes during the test: Invalidates results
- ❌ Peeking at early results: Early leads often don’t hold

When to Stop a Test

Stop when:

- ✅ Statistical significance reached (95%+)
- ✅ Minimum sample size met (100+ per variant)
- ✅ Minimum duration elapsed (1-2 weeks)
- ✅ Results are stable

Or stop if:

- Maximum duration reached (4-6 weeks)
- Technical issues invalidate results
- Business requirements change
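
As a quick illustration, the Python sketch below rolls these criteria into a single check, using the thresholds this guide recommends (95% confidence, 100+ visitors per variant, at least one week, a six-week cap). It is a sketch of the logic, not a plugin setting.

```python
def should_stop(confidence: float, visitors_per_variant: int,
                days_running: int, max_days: int = 42) -> bool:
    """Return True when the test can be stopped."""
    significant = (confidence >= 0.95               # 95%+ confidence
                   and visitors_per_variant >= 100  # minimum sample size
                   and days_running >= 7)           # minimum duration
    timed_out = days_running >= max_days            # 6-week cap
    return significant or timed_out

# Using the dashboard numbers from Step 7:
print(should_stop(confidence=0.952, visitors_per_variant=508, days_running=7))
# True
```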
Calculating Sample Size

Need to know how long to run your test?

Simple Formula

A common rule of thumb (Lehr’s approximation, assuming 80% power and 95% significance – the same defaults as the calculators below):

```
Sample size per variant ≈ 16 × p × (1 − p) ÷ d²

p = baseline conversion rate
d = minimum detectable effect (absolute)

Example:
If the current conversion rate is 5% and you want to detect
a 20% relative improvement (d = 0.05 × 0.20 = 0.01):
16 × 0.05 × 0.95 ÷ 0.01² ≈ 7,600 visitors per variant
≈ 15,200 visitors total
```
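
If you’d rather script the estimate than do the arithmetic by hand, here is the same rule of thumb in Python; the 500-visitors-per-day figure is just an illustration.

```python
# Lehr's rule of thumb: ~80% power at 95% significance.
def sample_size_per_variant(baseline_rate: float, relative_mde: float) -> int:
    d = baseline_rate * relative_mde  # absolute detectable effect
    return round(16 * baseline_rate * (1 - baseline_rate) / d ** 2)

n = sample_size_per_variant(0.05, 0.20)  # 5% baseline, 20% relative lift
print(f"{n:,} visitors per variant ({2 * n:,} total)")
# 7,600 visitors per variant (15,200 total)

daily_visitors = 500  # example traffic level for the whole page
print(f"~{2 * n / daily_visitors:.0f} days to finish")
# ~30 days
```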

Use Online Calculators

Easier option: use an A/B test calculator.

- Optimizely Calculator: https://www.optimizely.com/sample-size-calculator/
- VWO Calculator: https://vwo.com/ab-split-test-duration/
- Evan Miller’s Calculator: https://www.evanmiller.org/ab-testing/sample-size.html

Input:

- Current conversion rate: 5%
- Minimum detectable effect: 20% (relative improvement)
- Statistical power: 80%
- Significance level: 95%

Output: sample size needed and estimated duration

Troubleshooting

Test Not Getting Traffic

Problem: No visitors or very few

Solutions:

1. Verify the experiment status is “Running”
2. Check the landing page is published
3. Drive traffic through ads, email, social media
4. Wait longer – low traffic takes time

Conversions Not Tracking

Problem: Zero conversions for both variants

Solutions:

1. Verify the conversion goal is set up correctly
2. Test the conversion manually (submit the form, click the button)
3. Check analytics is enabled
4. Review the browser console for JavaScript errors

Results Not Statistically Significant

Problem: Test running for weeks, still no significance

Solutions:

1. Traffic too low – drive more visitors
2. Variants too similar – make bigger changes
3. Baseline conversion rate very high – harder to improve
4. Continue running or declare the test inconclusive

One Variant Gets All Traffic

Problem: Traffic not splitting 50/50

Solutions:

1. Check the traffic allocation settings
2. Verify the experiment is running (not a draft)
3. Clear caches
4. Test in different browsers/incognito mode

Summary

You’ve learned how to:

- ✅ Plan and set up A/B test experiments
- ✅ Create test variants with controlled changes
- ✅ Configure traffic distribution
- ✅ Monitor experiment progress
- ✅ Analyze results and reach statistical significance
- ✅ Declare winners and implement improvements
- ✅ Follow best practices for reliable testing

Start testing and optimizing your landing pages today!

Related Tutorials

- How to Set Up Conversion Tracking
- How to Analyze Landing Page Analytics
- How to Optimize Performance
- How to Create Your First Landing Page

Additional Resources

- A/B Testing Experiments Feature Guide
- A/B Testing Issues Troubleshooting
- Analytics Tracking Feature Guide

Questions? Contact support or visit our documentation.
