I wish they wouldn't use non-word characters in the names of their products. I thought ".Net" was bad enough when it came out - then C# and F#. But I don't even know how to type this. On this page: http://research.microsoft.com/en-us/um/cambridge/projects/co... they refer to it as Cw, Cω, and Comega. The artist formerly known as incomprehensible.
"Cω is an experimental research language. There are no plans to turn it into a commercial language supported by Microsoft. It is not supported by either the C# or the Visual Studio teams. There are no plans to integrate it into any product."
I guess people working in research enjoy the leeway of being able to name their projects any way they want.
If you search for either of those terms in Google or Bing you find the appropriate resources right away. People love to regurgitate that these names are not searchable but somehow we've found a way. Cω isn't typeable, but that's a different issue.
There are issues if the stubs you use in your URLs only allow a-z and 0-9. You could just as easily escape each character in the stub and preserve the title of the post/question you're representing. This is a problem with Stack Overflow though, not the name of the language (C#).
Wikipedia has this issue solved. You can use any Unicode character in their titles and they get used as the stub by escaping those characters. Anyway, it's not like the whole world only uses the English alphabet. This should be handled properly.
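For the curious, percent-encoding a Unicode title into a URL stub is a one-liner; a minimal sketch using Python's standard library (the "Cω" stub is just an example):

```python
from urllib.parse import quote, unquote

# Percent-encode a Unicode title for use as a URL stub.
# 'ω' is U+03C9, whose UTF-8 bytes are CF 89.
stub = quote("Cω")
print(stub)           # C%CF%89
print(unquote(stub))  # Cω
```

This is exactly what Wikipedia does: the title survives round-tripping, no information is lost to an a-z/0-9 whitelist.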
It’s hard enough to persuade people to type curly quotes and proper accents even when it’s easy to do (e.g. on OS X). Chances of them learning to type a ♯? Minimal.
The number sign (pound sign, octothorpe, etc.) separates the fragment identifier from the rest of the URL. Browsers don't send characters after the # to the server. It works for Twitter because they use JavaScript to AJAX in data from additional requests. http://en.wikipedia.org/wiki/Fragment_identifier
Not really: the server can't see that # in the URL; it never gets sent by the browser. The only way they could do that is if they switched to #!-style AJAX URLs à la Twitter.
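You can see the split between path and fragment with any URL parser; a small illustration in Python (the example URL is made up):

```python
from urllib.parse import urlsplit

# The fragment ('#...') is a client-side construct: browsers strip it
# before sending the request, so the server never sees the '#' or
# anything after it.
parts = urlsplit("http://example.com/wiki/C#sharp")
print(parts.path)      # /wiki/C
print(parts.fragment)  # sharp
```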
$ perl -E'say "ω is " . "not "x("ω" !~ /\w/) . "a word"'
ω is not a word
$ perl -Mutf8 -E'say "ω is " . "not "x("ω" !~ /\w/) . "a word"'
Wide character in print at -e line 1.
ω is a word
You can get rid of the "wide character" warning with the -CS switch.
As you correctly demonstrate, you need to tell Perl about the encoding of your script (and thus of the literals in it). `use utf8;`, or `-Mutf8` at the command line, does that.
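For comparison, Python 3 has the same split between byte-oriented and Unicode-aware `\w` that the Perl one-liners above demonstrate:

```python
import re

# str patterns are Unicode-aware by default: ω is a word character.
print(bool(re.fullmatch(r"\w+", "ω")))                   # True

# bytes patterns treat \w as ASCII-only: the UTF-8 bytes for ω
# (CF 89) don't match.
print(bool(re.fullmatch(rb"\w+", "ω".encode("utf-8"))))  # False
```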
It could be worse. Have you ever tried to get help on Pages or Numbers from a search engine? At least stuff with funky characters is unique and, if correctly quoted, is handled by the major search engines.
For example, searching for negative numbers. No amount of escaping or encoding will force any of the major search engines to read -17 as anything but an exclusion of 17 from the results.
Knuth's typesetting system Teχ, which is rendered as TeX in ASCII and which Knuth wanted people to pronounce "tech", was an early instance of this regrettable trend.
I loved Cw back in 2004. I even wrote an article on it: http://jcooney.net/post/2004/10/06/OMG-Ccf89-is-Awesome!-You... Unfortunately, after several blog and hosting platform migrations, it got lost. "Retail" C# has since caught up with many of the cool features of Cw.
You know what, you're right (and if I may add, on both counts ;-).
The Wikipedia page I referenced, loosely summarized, refers to an ordinal number with cardinality aleph-1. The plain omega that I should have referenced has cardinality aleph-0.
Unfortunately, the ordinal I wanted does not have its own Wikipedia page. WTF? Merely countable is not notable enough?
Some research project about a new language and the top comments are about how weird its name is and how the character looks like a butt? I'm speechless.
This is interesting. I always took the name to mean "the last", as in the last in the line of C programming languages you'll ever need, but it had some interesting null semantics too. I remember at the time thinking the way Cω handled nulls was better than C#/Java etc.: Cω returns (or should that be "returned", for this mostly-retired language?) null for properties of nullable types that are themselves null. So you could write `if (Foo.Bar.Baz == 1)` and not get a NullReferenceException if Foo or Bar were null. This seemed like an elegant way around the 'billion dollar mistake' of null references and all that horrible null-checking code you usually have to write if you're paranoid. (Note: all of this is getting paged out of my 2004-ish programming brain.)
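That null-propagating member access can be sketched in a few lines of Python (the `maybe` helper and the `Foo`/`Bar` classes here are made up for illustration; Cω built this into the language itself, and modern C# gets close with the `?.` operator):

```python
def maybe(obj, *attrs):
    """Walk an attribute chain, yielding None as soon as any link is None."""
    for name in attrs:
        if obj is None:
            return None
        obj = getattr(obj, name)
    return obj

class Bar:
    baz = 1

class Foo:
    bar = None  # imagine this came back null

foo = Foo()
# Equivalent in spirit to Cω's `foo.bar.baz`: no exception, just None.
print(maybe(foo, "bar", "baz"))  # None
foo.bar = Bar()
print(maybe(foo, "bar", "baz"))  # 1
```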
I can't help but read that little omega as an angular velocity. C would then be its moment of inertia, and the product their angular momentum. Quite ironic, since it seems the language has a hard time gaining momentum.
This is compelling for a variety of reasons, but they're the same reasons I find Scala compelling. Scala, though, has a syntax that is terser and less bracket-y (so far as I have seen), and is further along as a language.
I'm curious how they see themselves relative to the capabilities of languages like Scala.
I look at those two symbols, and in my head I hear "cow". I know there's only a C and a W, but my brain doesn't care. Maybe it's 'cause the W is kinda roundish? Anyways. "Cow".
My former ancient Greek teacher would want me to point out that this is a lowercase Omega (=a long "o" in Greek). So yes, it even sounds kinda like the first part of cow! ;-)
To me, the most interesting thing about these comments is that almost nobody knows anything about Greek nowadays. Honestly, people, you should at least recognize this from your time at uni at some point...
What's really the most interesting thing is that only a few people have mentioned anything other than the name in their comments, resulting in a meaningless discussion about what could be something quite interesting.
This Cω/LINQ concept of unifying XML and object data access with SQL- and XPath-like syntax is elegant and obvious (aka intuitive).
Does it get much usage?
Most XML access is delegated to tools these days, and SQL is so familiar (and type-based access so unfamiliar), that it seems unlikely to make inroads on either - nor offer substantial practical benefits. That is: no order-of-magnitude benefit to overcome barriers to adoption.
Have you personally found LINQ beneficial? Has it been widely adopted? Why/why not, do you think?
"LINQ to Objects" (i.e. the set of standard query operators, basically map/filter/fold and sundries) is hugely beneficial and ubiquitous in the recent codebases I've worked on. LINQ to SQL and similar IQueryable providers can be useful and are used, but unfortunately still suffer from the fact that querying is just one part of an ORM solution, and the state of ORMs in general and Microsoft's in particular is still kind of a mess. I haven't seen huge resistance to adopting them though - using some kind of LINQ-based ORM is pretty standard in ASP.NET MVC projects.
My main concern for this language (not that using a lowercase omega in conjunction with the letter C advertises looking at one's breasts or ass) would be that there is no room for specifics: it says it takes all these great qualities and generalizes them. Other than that, the brief overview makes it sound like something worth looking into.
It is interesting that, 16 hours after the post, only four (or so) comments on this page have anything to say about the language itself. The rest, including the highest-voted threads, are about the choice of name :)
Wow! The most interesting thing I found in this was the "chords" idea of concurrent programming. Check it out. I've never seen anything like it -- has this been thought about?
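As I understand the chord idea (from the Polyphonic C#/Cω work): a method body runs only once every method in its chord has been called. The classic example is a buffer joining an asynchronous Put with a synchronous Get. A very loose Python approximation using a queue (the real mechanism is a language feature, not a library; this just mimics the one-Put-one-Get chord):

```python
import queue
import threading

class Buffer:
    """Loose sketch of the Cω chord: `string Get() & async Put(string s)`."""

    def __init__(self):
        self._pending_puts = queue.Queue()

    def put(self, s):
        # The 'async' half of the chord: never blocks, just queues the call.
        self._pending_puts.put(s)

    def get(self):
        # The 'sync' half: blocks until some Put completes the chord,
        # then the joined body runs (here: hand over the message).
        return self._pending_puts.get()

buf = Buffer()
threading.Thread(target=lambda: buf.put("hello")).start()
print(buf.get())  # hello
```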
Without further qualification, "better" and "worse" are usually pretty meaningless words when discussing something like programming languages. I've only read the first few paragraphs on the page so far, but it seems reasonable to guess it is very unlike Python, at least.
At the risk of being snarky, that's kind of like asking "is this new rotary sander better than my salad spinner?" :) As you learn more languages you'll find that different languages are different tools for different purposes. Very rarely can one be objectively ranked unqualifiedly "better" than another. The more appropriate measure almost always is, "is this language more suited to task X than this other language?"
Learning to program with a language as academic as this would be like learning about the basics of electromagnetism by building a particle accelerator :)
>like learning about the basics of electromagnetism by building a particle accelerator
I think you mean "using", in which case I assume you're saying it would be the only way to do it effectively, but that the results might be hard to apply in practice, and nobody has managed it so far?
If you meant "building", then I'd have to say you wouldn't learn anything, as building is about applying knowledge (an engineering task), not about learning.
Perhaps I misunderstood, it's late and I have had a drink ...
No. It is (was, actually) a research project, not a widely available general-purpose language, so Python wins on most measures of being-something-ness, which I think is a prerequisite for being better.
Some of the ideas in it have already fed into C#, especially into LINQ. That was its job. Now, you can compare Python to C# 3.0 (shipped with .NET 3.5) or later with LINQ, but that's a different kettle of subjectivity :P
Sorry for the juvenile humor here, but this looks like I would have to pronounce it "C-nutz". Glad there are no release plans to send this into the wild.
As with any programming language, I think they should show their "Hello World" source on the home page. Code snippets are worth a thousand words, and it gives you a good quick thumbnail feel for the language.