
> but they simply aren't going to show me how to create an ingenious idea that takes things to the next level.

Of course not. There's no substitute for straight-up creativity and deep thinking.

But once you have your ingenious idea, you still have to design it, make sure it's clear to users, that they find it and can use it effectively. Your "ingenious idea" may turn out to be largely sabotaged if a button you thought had an intuitive label is misunderstood by 90% of users, or a link you thought was highly visible is being scrolled past by nearly everyone.

Yes, analytics is all about optimizing things to a local maximum. But you might not be anywhere near your local maximum. It's astonishingly easy for the first version of your ingenious idea to only be achieving 5% or 10% of the actual local maximum potential. We shouldn't downplay the difficulty or achievement involved in getting even close to a local maximum.

And you're correct that in user interviews, if you only ask what features they want, you're drastically limiting the value you might uncover. On the other hand, you'd better not ignore the features users are frequently requesting either. A lot of users are pretty smart and know exactly what they need, at least to get to that local maximum.



If you’re just using analytics to look at how effective UI designs are in making business conversions, couldn’t you still measure that by checking the backend and looking for a spike in activity towards the API endpoint that the UI invokes? Couldn’t you measure effectiveness with a spike or drop in sales? I mean, good UX doesn’t so much rely on Google Analytics but on a UX engineer’s depth of knowledge about human psychology.


> couldn’t you still measure that by checking the backend and looking for a spike in activity towards the API endpoint that the UI invokes?

You could have multiple UIs hitting the same endpoint. Also, why limit yourself with such crude metrics?

> good UX doesn’t so much rely on Google Analytics but on a UX engineer’s depth of knowledge about human psychology

UX in theory, and UX in application are two different things. You could have the best models of how users will interact with your site, but until you deploy and measure, you have no idea what will happen.


You can still parameterize the API calls if you want to attribute user activity to a specific flow, and that way you wouldn’t be “feeding the beast” that is GA.
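A minimal sketch of what that parameterization could look like; the endpoint name, domain, and `flow` parameter are all hypothetical, but the idea is just to tag each call so the backend can attribute activity per flow without any third-party analytics:

```typescript
// Attribute API traffic to a UI flow by tagging the request itself.
// Endpoint, domain, and flow names here are illustrative only.
type Flow = "onboarding" | "checkout" | "settings";

function apiUrl(endpoint: string, flow: Flow): string {
  const url = new URL(endpoint, "https://example.com");
  url.searchParams.set("flow", flow); // the backend logs and aggregates this
  return url.toString();
}

// e.g. fetch(apiUrl("/api/subscribe", "checkout"))
```

The backend can then count hits per `flow` value and distinguish multiple UIs hitting the same endpoint.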


How you mark/report the events is different from where you report them. You could use any of the self-hosted solutions on your own domain instead of GA without changing much about how you report back.
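As a rough sketch of that separation, assuming a hypothetical self-hosted collector endpoint: the event shape stays the same, only the destination changes from GA to your own domain:

```typescript
// Same event reporting, self-hosted destination. The collector URL and
// the event shape are assumptions, not any particular product's API.
interface AnalyticsEvent {
  name: string;
  props: Record<string, string>;
  ts: number;
}

const COLLECTOR = "https://analytics.example.com/collect"; // your own domain

function track(name: string, props: Record<string, string> = {}): AnalyticsEvent {
  const event: AnalyticsEvent = { name, props, ts: Date.now() };
  // In a browser you'd send it fire-and-forget, e.g.:
  //   navigator.sendBeacon(COLLECTOR, JSON.stringify(event));
  return event;
}
```

Swapping collectors then means changing `COLLECTOR`, not the call sites.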


What you're suggesting is analytics.


Yes but it’s not necessarily Google analytics.


There's plenty of alternatives to Google Analytics, including open-source software you can self-host so it doesn't share your users' private data with a third party. You don't need to roll your own just to avoid GA.


For example?


https://posthog.com/ is one I've been playing around with lately.



I use https://www.goatcounter.com and recommend it.


Matomo


The article suggests some.


If you are unable to find out that 90 percent of people can't use a major feature that you thought would differentiate your product, you are missing all forms of feedback, including much more important sources besides analytics.


I disagree completely because I've seen it in practice.

A button turns out to be below the fold on common small screen sizes that a new designer forgot to consider. A bad translation results in 90% of users in a particular country misunderstanding something. A JavaScript library doesn't work on a common Android phone you don't test with. Latency issues make something virtually unusable from the other side of the country because of a single badly written function that's easy to rewrite -- but you still have to catch it!

You need ALL forms of feedback. It's not a question of some being more important than others. They're all important and play their own unique roles. Analytics is for catching AND debugging all the things that go wrong at scale in the real world, as opposed to the artificial and limited environments used for user testing.


One goalpost at a time please. 90% of users in "a given country" is not the same thing as saying 90 percent of all users. Unless that country is so important that it's everyone, and again you needed to test that your product is usable in the first place.

And even so, while there are a lot of countries and languages, a collection of tests in all of them, which is all it takes for these examples, is not "big data". Again you are crediting analytics for things that people were fully capable of and responsible for doing in the days without traffic data. I realize it is great for the resume and sounds sexier to do "analytics" as opposed to just basic software testing (you should see what people are calling "AI" in other industries these days). And I'm certainly happy to agree that analytics has some value, say in improving wording and stuff that a single instance won't tell you, but again these aren't cases of that.


If you see in your backend metrics that a button isn't being clicked by a huge number of visitors, you can start there and analyze. Common screen sizes are helpful, but it's more helpful to look at the page itself, since maybe other optimisations can be made, if that is the important button.

Similarly with languages, which you can see from backend metrics.

More data always seems nice, but more data doesn't make analysis simpler, and there is a big privacy impact in analytics, especially when outsourcing to data collectors, who can gather data across sites. (Which then also gives Google information about which services are interesting to users and can be integrated into search etc.)
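The backend-only approach above can be sketched in a few lines; the numbers and the threshold are made up for illustration, but the point is that page views vs. endpoint hits already expose an anomalously unused button:

```typescript
// Flag low click-through from backend counts alone: page views on one
// side, hits to the button's endpoint on the other. All values illustrative.
function clickThrough(pageViews: number, endpointHits: number): number {
  return pageViews === 0 ? 0 : endpointHits / pageViews;
}

const ctr = clickThrough(10_000, 120); // 0.012, i.e. 1.2% — worth investigating
```

No client-side tracker is needed for this; both counts come from server logs.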


> if you are unable to find out that 90 percent of people can't use a major feature

How would users know they can't use a feature they don't know exists?

Let's say you add a brilliant new feature X, but due to a bug the users can't load the code for that feature, so they never see it. How would they know to submit feedback for a feature they don't know is there?


Because when you're talking to them directly you ask them about it. This is what I mean by all forms of feedback.

I'm getting the impression that people are just ignoring all advice about communicating with customers in their startups and just throwing stuff out there to see what sticks. Besides being wasteful in doing stuff no one wants anyway, what if that bad feature crippled your product and your paying customers have permanently switched to a competitor the instant your change frustrated them? And now you're bankrupt and can't afford analytics. Relying on analytics as a crutch to catch these things was the mistake in the first place.

Fun story I actually had forgotten about till now: I briefly worked at a tiny startup out of my school in the ending days of the internet bubble, trying to sell a "data mining" software product. Way too soon before it was cool, sadly. It was really hard to make the case that people needed to pay us $100k for the benefits we could get from their data. And companies certainly weren't going to go for a pitch like "if you launch a broken product we will catch that fact". They spend a lot of money to be sure that doesn't happen already. We even had one major customer figure out that they could just have an engineer perform a simple counter over incoming communications that would catch all they needed to know, and hence they didn't need our product anymore. That was kind of the end for us, in fact.



