Ruby 2.4.0 Released (ruby-lang.org)
256 points by sply on Dec 25, 2016 | 105 comments


Compared to Rust and Go, the list of improvements for a yearly Ruby release is rather small. Some will argue Rust and Go are relatively new, but they actually land LOTS of low-level performance improvements within a year's time frame.

Even if you compare to Python 3, the improvements are still rather small.

I know there is always the argument about the lack of resources, and Rust is being backed by Mozilla while Go is being backed by Google. But what about Python?

Exactly how much money is Mozilla spending on Rust, paying how many developers full time? Apple's JavaScriptCore team is incredibly small, yet they built the B3 JIT compiler with 2 people.

How much money is needed? Matz and Tenderlove have both mentioned they need more resources. Surely we can fundraise, with lots of successful companies using Ruby on Rails: Basecamp, Twitter, Groupon, GitHub, GitLab, etc. Or, if the Ruby community lacks compiler/JIT experts, can we hire some?

I am not trying to downplay the Ruby improvements. But I think Ruby needs more contributors. Moving to GitHub to get more exposure has been ruled out because GitHub is not an open-source product. But they also dislike hosting on GitLab, for some strange reason.


Just on the Python/Ruby comparison:

I kinda feel like the reason for that is that if Python users want a change in their language, they need to either construct their own pre-parser and deal with a lot of potential issues, or post a change, hope it becomes a PEP, and hope it gets accepted. There are very few ways other than that.

In ruby, you are enabled to fix so much more of the language to your own liking, so there is much less of a need for the main language to change.

I'm not saying that ruby isn't underfunded, just that it doesn't need to grow as fast as python does


> In ruby, you are enabled to fix so much more of the language to your own liking, so there is much less of a need for the main language to change.

Very true. There are lots of metaprogramming facilities in Ruby that just don't exist in Python. To further underscore your point, take a look at how much of the Rails codebase is devoted to making Ruby more usable/consistent and has only a tangential relationship to web development.
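For flavor, here is a simplified sketch in the spirit of ActiveSupport's core extensions. It reopens a core class to add a convenience method; the real `Object#blank?` also treats whitespace-only strings as blank, which this stripped-down version does not:

```ruby
# Reopen a core class to add a convenience method, ActiveSupport-style.
# Simplified sketch: the real Object#blank? also handles whitespace strings.
class Object
  def blank?
    respond_to?(:empty?) ? !!empty? : !self
  end
end

nil.blank?    # => true
"".blank?     # => true
"hi".blank?   # => false
[1, 2].blank? # => false
```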


I would love to help hosting Ruby on GitLab. If it needs to be 100% OSS we can maintain a GitLab CE instance. I heard that sequential numbering of commits is also on the wishlist.


I feel that GitLab is the natural home for ruby nowadays. I hope Matz takes you up on the offer sytse.


Thanks Marc!


What do you mean moving to GitHub has been banned? This isn't it? https://github.com/ruby/ruby


There are core contributors who would no longer participate because they refuse to use proprietary systems, even remote ones, and do not even run JavaScript. Matz prefers overall inclusivity to losing the contributors, so a wholesale move to GitHub (instead of just mirroring) is not going to happen.


I'm pretty sure that is a mirror

https://www.ruby-lang.org/en/community/ruby-core/ suggests that the main repo is a self-hosted SVN


There are lots of small improvements in Ruby too; they are just not in this release notice.

Check the NEWS[0] file for a bunch more.

And Ruby 2.4 is about 10% faster than 2.3.3 according to some non-micro benchmarks[1].

Re funding: AFAIU the problem for Ruby seems to be the paucity of long term sponsorship deals, rather than one-off funding. Python has a bunch of those[2].

The lack of a prominent "Ruby foundation" might be an issue there.

[0] https://github.com/ruby/ruby/blob/v2_4_0/NEWS

[1] https://gettalong.org/blog/2016/ruby24-performance-looking-g...

[2] https://www.python.org/psf/sponsorship/sponsors/


Python 2 to Python 3 was a disaster. Matz has made it clear in his talks that he won't make the same mistake with Ruby. Performance and better memory management will hopefully come in the upcoming 2.5, 2.6 and 3.0 releases. What I want from all those changes is that all my current projects just keep working.


Funny, I thought it had happened before in the Ruby world, with the switch from 1.8 to 1.9, and that the Pythonistas simply followed suit.


That's the obvious comparison. But in the Ruby world, 1.8 has long been forgotten. I don't know anyone still using 1.8, and we're all better off.

In python, on the other hand, years later it's still a shit show. The community still hasn't embraced one or the other. Some people still refuse to upgrade to 3.x. Everything is splintered. It's especially hurtful to newcomers.


I don't have any experience in compilers or JIT, but developments in this area for Ruby would be amazing. I know lots of Ruby devs who have had to learn other languages because Ruby is unsuitable for machine learning, matrix calculations, etc.


There are actually JRuby, and the Truffle + Graal implementation, which is VERY fast. The problem is no one is using them, at least no mainstream sites. None of the big public Rails sites use JRuby.

I don't mean to be rude, but the Ruby community, including the core team, lacks expertise in compiler/JIT programming.


Rubyists have never been terribly concerned with performance. Ruby isn't a language you use for optimal performance. You use it for other reasons.


https://github.com/smarr/are-we-fast-yet

It is not about optimal performance. Ruby, or specifically MRI here, is anywhere between 20-100x slower. And there are other dynamic languages that are much faster than Ruby. If Ruby were only 2-10x slower, no one would complain. But right now it is so far off the chart, and that is why you see shrinking usage of Ruby.


There's the "Ruby 3x3" plan to increase performance by 10x - https://blog.heroku.com/ruby-3-by-3

So when the plan is realized, Ruby would be within that 2-10x-slower range.


I see a lot of people misinterpret 3x3 as 9, meaning Ruby 3.0 is going to be 9 times faster. (I am guessing that is where your 10x number comes from.)

The article clearly states, as do numerous other sources, that Ruby 3x3 means Ruby 3.0 will be 3 times faster. The catch is that it will be 3 times faster than 2.0, and we already have a 30-40% speedup since 2.0. So Ruby 3.0 is likely to be only about 2.2x faster than Ruby 2.4.
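The arithmetic, assuming the ~35% midpoint of that 30-40% speedup:

```ruby
target_vs_2_0  = 3.0    # Ruby 3x3 goal: 3x faster than Ruby 2.0
already_gained = 1.35   # Ruby 2.4 is already ~30-40% faster than 2.0
remaining = target_vs_2_0 / already_gained
remaining.round(1)  # => 2.2
```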

2.2x, in the grand view of things, is still very slow.


Performance related:

  * Hash improvements via better locality for modern CPUs
  * #max and #min without temporary array
  * Speed up instance variable access
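The #max/#min item refers to Ruby 2.4 special-casing calls like `[x, y].max` on literal arrays; a sketch of the clamping idiom it speeds up (in 2.4 the temporary arrays below are no longer allocated):

```ruby
# Clamp a value into 0..100. Before 2.4, each literal array here was
# actually allocated; 2.4 computes the result without the temporaries.
width = 150
clamped = [[width, 0].max, 100].min
clamped  # => 100
```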


Regexes also had some changes which have improved our results on regex-heavy benchmarks by as much as 3x.


Nice! That's good to know.


I still love you, Ruby. I wish we spent more time together.


Soo, so true.

Working with Tensorflow, python is unfortunately the only realistic option. I'm getting the hang of it, but I'd still love to use ruby for it – it's all just data wrangling, it doesn't even need to be fast, and ruby code is just beautiful (to my eyes, I know it's a matter of opinion).

I've also only recently discovered the beauty that is rake. Especially for data pipelines it's a fantastic tool. Hit a bug? No need to rerun everything – it'll pick up at the exact step that failed. One of ten data files changed? It knows exactly what needs to be redone. Etc.
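A minimal sketch of that incremental behavior with a rake file task (filenames hypothetical): rake skips the block entirely when `out.txt` is already newer than `in.txt`, which is what makes it pick up pipelines where they left off.

```ruby
require 'rake'
extend Rake::DSL

File.write('in.txt', 'raw data')

# Rebuild out.txt only when in.txt is newer, or out.txt is missing.
file 'out.txt' => 'in.txt' do
  File.write('out.txt', File.read('in.txt').upcase)
end

Rake::Task['out.txt'].invoke
File.read('out.txt')  # => "RAW DATA"
```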



Make does the same thing, but boy is rake syntax nicer.


I was madly in love with Ruby in 2005 but now, Ruby feels like this ex girlfriend you were completely infatuated with and eventually broke up with because you were too young to pay attention to her flaws.


I appreciate that you were responding to the analogy the parent comment offered, but without concrete criticisms it comes across to me as a sort of haughtiness and an insult to those who haven't caught up or are too blind to have dumped Ruby by now. That seems especially out of place in a thread about a new version of the language.


Maybe you were actually too young to know how good it really was. :P


I'm very likely older than you are, so... no.

It was very good at the time. Not so much today. We've learned a lot in language design and Ruby feels very antiquated (mostly because it's dynamically typed).


What have we learned in language design that wasn't known prior to Ruby? It's not like static vs dynamic hasn't existed in languages since the 50s to early 60s. Alan Kay, in creating Smalltalk (granted in the 70s), was motivated by the limitations in static type systems. If you think a language like Haskell has solved that limitation by providing an extensible form of static typing, fine, but it existed before Ruby (1990). Elm isn't breaking new ground on this front.


Which statically typed language do you feel offers the same level of developer comfort as Ruby?

My favorite statically typed languages are Haskell, Rust and C# but I prefer Ruby over them whenever possible.


> Which statically typed language do you feel offers the same level of developer comfort as Ruby?

So far, Kotlin has hit that sweet spot for me.


Try Crystal, her younger sister. Just as beautiful but more nerdy and fast thinking.


Can we stop using women as a metaphor now? Especially if beauty is the first criterion and she's apparently just waiting to be "tried"? Thx


What have we learned? Does LISP feel antiquated to you as well? How about Java?


To me, Ruby feels like this ex-girlfriend I was completely infatuated with and eventually broke up with because I was too young to pay attention to the fact that her flaws were in all the right places instead of the ones that truly matter.


I never picked up Ruby (I joined the Python camp, and now I'm about to pick up Ruby). Curious: what made you move away, and what did you move to?


Can't speak for GP but for me the kicker was that you cannot get any guarantees on the mutability of an object, and the unexpected quirks that could result from it. Example:

https://redmine.ruby-lang.org/issues/6037

You can (must) be pragmatic, of course, and assume that all will go well. But under the hood it means a great many objects get pointlessly allocated and reallocated; or sometimes not, with hard-to-find bugs occurring when a gem author overoptimizes their code to avoid making a few object copies and ends up mutating your object's properties without you realizing it.

Things have improved since then, with, e.g., string literals being frozen by default if you opt in, as of a few versions ago. Even in this changelog, there's at least one point related to memory allocation that touches on the performance consequences of needlessly allocating new objects.
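A minimal illustration of the aliasing hazard described above, and of freezing as the guard (method name made up):

```ruby
# A method that "overoptimizes" by mutating its argument in place.
def shout!(str)
  str << "!"   # modifies the caller's object, no copy is made
end

name = "ruby"
shout!(name)
name            # => "ruby!" -- the caller's string changed underneath it

safe = "ruby".freeze
safe.frozen?    # => true
# shout!(safe) would now raise (RuntimeError in 2.4, FrozenError later)
```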

Back then, I found Obj-C (and Swift, maybe?) more comforting in this respect, with most things being immutable unless you explicitly requested the mutable version. And I liked ARC a lot. (I haven't programmed much in recent years, so I can't say what I'd use today if I were neck deep in code. Probably Swift.)

Another issue was that Ruby was then attracting a lot of end-users that had no formal software engineering skills. Which is fine for day to day tasks; not so much when they start distributing libraries. (Always vet your gems' authors and source code.)

I still love Ruby, mind you. It's beautifully expressive and it's still my preferred language for non-trivial "glue" tasks.


"Accidental" mutability in gems tends to cause pain - spotted this one in Rails, for example.

https://github.com/rails/rails/pull/25735


It's dynamically typed, which is a deal breaker for me today. I need not just a type system but one that supports parametric polymorphism, so definitely not Ruby, Python or even Go.


Statically typed is my deal breaker. See how varied people are? :-)


You can use traits instead. I know it's different, but it probably solves your problem.


Traits are orthogonal to the dynamic/static question.

My experience has taught me that not having type annotations in a source code makes it very hard to understand and maintain that source in the long run.


Ruby is more than flexible enough for you to add type annotation via DSLs, and use them to drive tests/fuzzing if you want that. There are a lot of people who have done that.

They rarely see much use, though, because most of us quickly experience that the type annotations that seem essential when we work on statically typed languages quickly become less essential once you adopt a more idiomatic Ruby style.

E.g. if you want to output something as text, rather than dictate what should be passed in, call #to_s on it (and optionally handle the failure if it doesn't implement #to_s, if it is reasonable to continue).

Once you adopt the attitude of asking for what you actually need rather than demanding the client pass in what you think should be passed in, type annotations start feeling like a clamp around your foot rather than a necessity in most cases, and most of the remaining assistance they could provide tends to fall away with the test cases you would require in either case.
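A small sketch of that style: the method below asks only for a #puts-shaped "sink" rather than demanding a concrete IO subclass, so anything that quacks the same way works.

```ruby
require 'stringio'

# Accept anything that quacks like an IO: only #puts is required.
def log_to(sink, message)
  sink.puts("[log] #{message}")
end

buffer = StringIO.new
log_to(buffer, "hello")    # a StringIO works...
log_to($stdout, "hello")   # ...and so does the real $stdout
buffer.string              # => "[log] hello\n"
```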

I used to be a static typing zealot, and I still want things to be "as static as possible", but I've come to accept that most of the time the burdens it adds are not worth the benefits.

I'm sure we can do better, and there are certainly cases where optional static type annotations could be helpful, but at this point I'm not giving up the expressiveness dynamic typing gives me - a static system would need to be practically "invisible" for me to find it acceptable.


My experience has taught me that if static typing improves your code, it's probably hiding deeper flaws.

I won't deny there are some clever things one can do to improve code with sophisticated typing. However, the typical usage is to alleviate the mistakes of bad names and poor design. When I find type errors at runtime, I look for a way to refactor that avoids confusion without needing to add type checking. I don't always succeed, but when I do the code is more elegant.


Pretty cool to see a binding.pry analog moved into the standard library.


Could binding.pry/binding.irb be shortened at all? It's a mouthful.


Sure.

    require 'pry'
    module Kernel
      def debug; binding.pry; end
    end
    
    x = "hi"
    debug


Won't the binding then be your monkey patched debug method on kernel instead of where you called it from?


Yes.

    irb(main):008:0> x = "1"
    => "1"
    irb(main):009:0> debug
    [1] pry(main)> x
    NameError: undefined local variable or method `x' for main:Object
    from (pry):1:in `debug'

If you really want to do this, you can use binding_of_caller[1] to create a binding object (`Kernel.binding`) up in the call stack:

    require 'binding_of_caller'
    require 'pry'
    
    module Kernel
      def debug
        binding.of_caller(1).pry
      end
    end

Then:

    irb(main):008:0> x = "1"
    => "1"
    irb(main):009:0> debug
    [1] pry(main)> x
    => "1"
[1]: https://github.com/banister/binding_of_caller


I was imagining more at the level of the standard library or the pry library.

Or do you recommend this monkeypatch in practice?


That's an editor problem.


Interesting choice to remove `tk` from the stdlib. It always felt a little neglected in terms of documentation and maintenance, but it was nice to have a simple graphical toolkit guaranteed with the runtime.


I tried to use it, but settled on Shoes instead. Tk isn't hassle-free: you still have to install OS dependencies, and its API wasn't very Rubyish.

You'd think Ruby would be swimming in easy-to-use widget libraries. But it still isn't. Shoes fits most of my needs, though I'm still working on my workflows and tooling. It's annoying that I can't just write to the console or drop in a pry session, but I'm slowly figuring it out.


Yeah, Tk certainly isn't a pleasure to work with in Ruby. Maybe my experience was colored by the fact that it always worked out-of-the-box on my desktop (I guess the libraries were preinstalled).

I've looked at Shoes multiple times for various projects and it always looks terrific, until I remember that installing it involves downloading a 64-bit binary from a website. I've bitten the bullet before and done it, but it's an (ugly) step backwards compared to `gem` and the Qt/Gtk+ bindings that can be installed via `gem`.


Yeah, shoes :( Such potential. I could never quite get it updated to Ruby 1.9. Sigh.


It looks like recently, there has been a convergence of hashtable implementations across a number of programming languages. Details differ, but the general structure is now to use two arrays, one for storing the data in insertion order (thus maintaining order without a doubly linked list), and another array serving as a hash, which contains indexes into the data array. This general idea works both with chaining and open addressing. In the last couple of years PHP (first HHVM then Zend), then Python (first PyPy then CPython), then Ruby (MRI) have switched to using this layout. Interestingly, Python has previously not made guarantees about order of hashtable elements, so this layout is advantageous even if maintaining insertion order is not a hard design constraint.
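A toy Ruby sketch of that two-array layout (class and method names hypothetical; real implementations use open addressing over a flat index array, resizing, and tombstones for deletion, all omitted here):

```ruby
class TinyHash
  def initialize(buckets = 8)
    @entries = []                          # [key, value] pairs, insertion order
    @index   = Array.new(buckets) { [] }   # bucket -> positions into @entries
  end

  def []=(key, value)
    bucket = @index[key.hash % @index.size]
    if (pos = bucket.find { |i| @entries[i][0] == key })
      @entries[pos][1] = value             # update in place, order unchanged
    else
      bucket << @entries.size              # record where the pair will live
      @entries << [key, value]
    end
  end

  def [](key)
    bucket = @index[key.hash % @index.size]
    pos = bucket.find { |i| @entries[i][0] == key }
    pos && @entries[pos][1]
  end

  def keys
    @entries.map { |k, _| k }              # iteration order == insertion order
  end
end
```

Deletion would mark the entry slot with a tombstone and compact on resize, which is the trade-off the sibling comments discuss.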


How do they remove elements from the first array? Suck it up and do O(N), or store (value, deleted?) pairs?


Look at the index as given by the hash?


Oh, so as you iterate the array, look up the value in the hash and if it's not there skip it?

The downside is you end up accumulating dead values.


It also helps with cache locality.


Why were Bignum and Fixnum unified into Integer? Don't they already both subclass Integer?

It seems they just made it harder to write interfaces that take advantage of machine precision integers. They didn't remove either arbitrary precision integers or machine precision integers, they just made them harder to distinguish. Why is this an improvement?


Because the existence of Bignum and Fixnum is effectively an implementation detail. Float is now implemented in a similar way, for example, without corresponding Flonum and Bigfloat classes.

There isn't (at least I've never seen) any reason to distinguish between the two. Fixnum wasn't the proper tool for machine-precision integer operations either (31 or 63 bits, depending on platform).
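On Ruby 2.4 or later the unification is directly observable:

```ruby
small = 1
big   = 2**100

small.class  # => Integer
big.class    # => Integer
# Before 2.4 these were Fixnum and Bignum; the switch between internal
# representations still happens, it just isn't visible at this level.
```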


But isn't hiding implementation details best done by using an interface (like the Integer superclass) rather than some flag somewhere with branching logic? Fixnum _is_ useful for C extensions to quickly convert to `int`, but now I suppose you don't know whether an Integer is cast-able to an `int` unless you inspect this flag in a new custom way.


Looks like it breaks `gem install json -v '1.8.3'`. It does, however, work with json 2.0.2.


And it's likely the static vs dynamic programming debate has been around longer than you've been coding ;).


We detached this subthread from https://news.ycombinator.com/item?id=13252935 and marked it off-topic.


[flagged]


Comparing dynamic and static programming to creationism and evolution is passive-aggressively condescending and only weakens your credibility


Looking at it as objectively as you can, what advantages does dynamic typing have over static typing?

The only potential candidate I can think of is 'more flexibility'.

However, undefined behaviour is not a desirable trait when designing programs, and languages with static types have ways to provide polymorphic functions without unhandled behaviour (such as pattern matching on function arguments).

Some may argue that dynamic languages are more readable, but there are languages with static types that are both concise and readable (Elm being a good example), so I wouldn't class that as a benefit.

Some may argue that the speed of prototyping a solution is a benefit, but the time saved putting together a prototype is often negated by the time spent debugging as the prototype matures.

So what advantages am I missing? There must be something that makes dynamic languages popular. What reasons are there to use a dynamically-typed language over a statically-typed one?


Static types limit the expressivity of your code to what the type system is able to prove. Depending on what you're trying to do, you can spend more time arguing with the type system than getting things done.

This really starts happening with a vengeance when you do increasingly lispy things, like creating dsls that move the language closer to your problem domain by writing programs that write programs.

Consider e.g. the way activerecord introspects the database schema to enrich your model declarations without further work from you.

Now there are ways to get similar effects in statically typed systems, but it's much more work, particularly if the dynamism comes from the execution environment (rather than compilation).
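A hypothetical sketch of that kind of environment-driven metaprogramming: generating accessors from a schema at load time, roughly in the spirit of what ActiveRecord does with database columns. (`SCHEMA` stands in for a live schema query, and the method name is made up.)

```ruby
# Pretend this hash came from introspecting the database at boot time.
SCHEMA = { "users" => ["name", "email"] }

class Model
  # Define accessors for every column of the given table.
  def self.inherits_columns_for(table)
    SCHEMA.fetch(table).each { |col| attr_accessor col }
  end
end

class User < Model
  inherits_columns_for "users"   # defines #name, #name=, #email, #email=
end

u = User.new
u.name = "Matz"
u.name  # => "Matz"
```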


Okay, can you give me one example of a useful macro that you'd write in a dynamically typed language. I'd like to see what challenges there are recreating it in a statically-typed language.

Also, regarding activerecord, it seems you're hinting at the benefits you get from composability, is that correct?


For me, the benchmark is parsing a heterogeneous data structure in JSON: for example, an array of inventory items. We get around this by shoehorning them all into a common structure, but in a dynamic language with a dynamic datastore we can store and access them in a manner more native to the problem space.
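A small sketch with hypothetical item shapes, dispatching on a type tag at runtime rather than forcing everything into one static schema:

```ruby
require 'json'

# Hypothetical inventory payload: items of different shapes in one array.
raw = <<~JSON
  [{"type":"book","title":"Eloquent Ruby","pages":448},
   {"type":"cable","length_m":2}]
JSON

labels = JSON.parse(raw).map do |item|
  case item["type"]
  when "book"  then "#{item['title']} (#{item['pages']}pp)"
  when "cable" then "#{item['length_m']}m cable"
  else              "unknown item"
  end
end
labels  # => ["Eloquent Ruby (448pp)", "2m cable"]
```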


Okay, so what problems do you see in using something like active patterns in F# for either parsing an array or a single value within a JSON structure?

https://docs.microsoft.com/en-us/dotnet/articles/fsharp/lang...


That looks interesting. Is there an analogous Haskell or OCaml feature to F#'s active patterns?

Regardless, I still think it's a moot point. I understand that static typing can be great for catching bugs sooner rather than later. But doing that takes time and work. Basically the thing that all static typing advocates neglect is that sometimes I just don't want to put that time and work in up front.

Most people would agree they'd want to put that work in before shipping their software to millions of people, but most code is only used by a few people a few times. The line between development and production is blurry. Most of the time, I would much rather run code that mostly works _now_ and has tons of bugs than have to put in more work. Even if it's not much more work, it's still not nothing. I own my computer and tell it what to do. But a compiler rejecting my code b/c there _might_ be an edge case that has an error is unacceptable. Which is why I would love static typing if you could simply turn it off. I know of some research in gradual typing, but I've never seen it in any mainstream languages.


>"That looks interesting. Is there an analogous Haskell or OCaml feature to F#'s active patterns?"

Based on a quick web search, the equivalent of F#'s active patterns in Haskell appears to be view patterns...

https://ghc.haskell.org/trac/ghc/wiki/ViewPatterns

...and the closest I found for OCaml was polymorphic variants...

https://realworldocaml.org/v1/en/html/variants.html

>"I know of some research in gradual typing, but I've never seen it in any mainstream languages."

When you say gradual typing, do you mean optional type hinting like you can find in Python 3, or something else?

https://docs.python.org/3/library/typing.html


I believe the term is gradual typing: https://en.wikipedia.org/wiki/Gradual_typing

Thanks for the links. I didn't know about python's typing. It looks like a fairly recent addition.


> but there are languages with static types that are both concise and readable (Elm being a good example)

Elm, to me, embodies everything that drove me to Ruby over functional languages: terseness in what to me are all the wrong places, coupled with overly verbose typing.

> So what advantages am I missing?

The ones you have glossed over: more flexibility, and the ability to be terse while readable. They matter more to many of us than you might think.

What finally sold me on Ruby was when, as an experiment, I rewrote a piece of queueing middleware we were using from C to Ruby, added a significant number of features, and cut the number of lines to 10% of the original. Maybe I could achieve similarly compact code with a statically typed language, but the likely candidates, at the time at least, were either extremely verbose or languages I considered absolutely unreadable (Haskell being top of my list of offenders; most of these languages have syntax clearly designed by people inspired by maths, unaware or not caring that this will push away most people).

I would love a "more static" Ruby, but I would not be willing to lose the terseness or expressiveness or readability to gain it. Maybe Crystal will get the balance right over time, though personally I believe you can get a lot more performance out of Ruby too without a lot of the sacrifices Crystal is making (but it will take a lot of work).


>"Elm's to me embodies everything that drove me to Ruby over functional languages: Terseness in what to me is all the wrong places, coupled with too verbose typing."

Elm can use type inference to work out types. The difference between this and dynamic languages is that it does it at compile time so you pay no runtime overhead.

Don't believe me? Take a look for yourself...

https://guide.elm-lang.org/types/


> You're either joking or wildly misinformed.

No, I'm speaking from having read a bunch of Elm code.

> Elm can use type inference to work out types. The difference between this and dynamic languages is that it does it at compile time so you pay no runtime overhead.

I know that. It does not change what I wrote as the outcome is that Elm code still includes more type annotations than I'm willing to deal with.


> "It does not change what I wrote as the outcome is that Elm code still includes more type annotations than I'm willing to deal with."

To get a better idea of how much type information you find unacceptable, do you have any problems with typed.rb code?

https://github.com/antoniogarrote/typed.rb


Yes. There's a reason you practically never find people using those things in real Ruby projects, despite the huge number of such libraries that exist.


I think the real reason might be lack of decent tooling, and developer culture.

Most attempts to add static types to ruby code don't come with complementary tools to give us some of the advantages that would immediately gain developer support. In your editor, for example, code completion based on type annotations would be a huge plus, but I don't know of any tools that give me that in ruby (if they exist, I'm unaware). In most cases, your code will still run regardless, unless you use the separate tool to type-check, and are disciplined about its feedback. It's unlikely that all your team will _always_ run the type checker, and although you can have it configured to run the type checker on every save just like you'd do with tests, it's an inconvenience.

In terms of culture, I'm mainly thinking of Rails here, but I can think of more than a few ruby projects/libraries as well. Ruby is, simply put, a dynamic language. Even if you use typed.rb, you won't get much information about the libraries you'll be using. Code completion is mostly based off comments/documentation and only in some editors, and in many cases might not even be there. I also feel that many ruby developers simply don't like types, and that's the end of the story. I'm convinced that most don't see the advantages. For instance, in teams where we've added Rubocop to simply lint our codebases, I've noticed developers complain about warnings and errors that the linter reports, whenever the developer thinks they know better. I imagine the same happening with type annotations in ruby codebases. It would become a task that you run before committing or before a merge request (like tests, in some teams). When they're confronted with a bunch of errors, they'll go into flight mode: "but i've tested this manually and it works, why is this linter complaining, and why is my type checker complaining as well?". Needless to say, the type checker might have just flagged a _potential_ bug for a use case you might not have manually tested yet.

edit: I think tooling can help shift the developer culture aspect. Better tooling provides a better developer experience and in the end that's all we want. Flow and TypeScript are perfect examples (which I adore).


Ruby is dominated by Rails, but Rails sidesteps the issues of undefined state by 'convention over configuration'.

As for other Ruby projects, static typing would help with performance issues; why developers choose not to add types when hitting performance bottlenecks is not something I fully understand. Perhaps it's just seen as normal to rely on C extensions to increase performance. That said, Graal/Truffle does seem to offer hope that Ruby performance can be greatly improved.


I understand that there are indeed cases where statically typed languages make a lot of sense. But anecdotally, I use ruby for web apps, and honestly in millions of lines I have written, I have rarely run into issues because the language is dynamically typed. Is it because I am so familiar with this style? I mean it isn't just me, other ruby (and previously perl) developers I've chatted with just don't have issues like I commonly hear described.


> Some may argue that the speed of prototyping a solution is a benefit, but the time saved putting together a prototype is often negated by the time spent debugging as the prototype matures.

Most prototypes fail to gain traction and are discarded well before they mature and maintenance costs start rising.


I'm not sure there's an objective truth in programming language comparison. There's just the right tool for the right job. I love ruby because it's a joy to write, that doesn't mean you can or should use it for everything.


It's a parallel, not a comparison.


Not sure why they decided to release on Christmas eve / day, but looks good.


It's defined in their (somewhat oddball) version policy: "MINOR: increased every christmas, may be API incompatible"[1]

[1]: https://www.ruby-lang.org/en/news/2013/12/21/ruby-version-po...


Because tradition. Most major Ruby versions are released on Christmas Day.


And it actually comes from Perl, which used to announce that its next big release would come by Christmas. But it never came, and they claimed they never said which Christmas, just one eventually.


I didn't know that. TIL. :-)


Because ruby was one of the gifts given to the new born king, baby Jesus.


It sure beats frankincense


Not completely sure why I was downvoted, but I'm sorry for any offense I might have (accidentally) caused


Unlike Reddit, jokes are generally not well received on Hacker News. The community prefers relevant, on-point discussion to keep a high signal-to-noise ratio. This isn't to say some jokes aren't tolerated, but it's best to shy away from them unless you have something really great to add.


We should be able to crack a smile in response to light-hearted comments like that at Christmas :)


See, this is probably why I don't get many upvotes. Joking is pretty much my default response to everything


And mir. Wtf is mir?








